Demanding Transparency Without Practicing It

In every other keynote on content ethics there's a sentence that sounds too reasonable to consciously register, let alone question: companies should clearly communicate how their content was created, whether it was written by a human or generated by AI. The vocabulary is transparency, honesty toward the customer, building trust. All of it sounds right.

Then you look at who's saying it: consulting firms, agencies, thought leaders, companies that call themselves responsible. And when you search for any indication of how their own content was created, on their websites, in their concept papers, in their newsletters, you come up empty. Somewhere it should say whether parts were generated with AI, whether entire passages were merely revised, and which were written entirely by a human. There's nothing anywhere.

Some hint that AI doesn't always deliver the quality their work requires. But what does that mean exactly? Is AI used for research, for drafts, for wording that then gets reworked, for structuring, for summarizing studies? None of it is disclosed. And these are usually the same people who demand full transparency from others.

Demanding transparency is easy as long as it applies to others. The moment it applies to you, it gets complicated, because suddenly you're dealing with nuances, small gradations that can no longer be divided into black and white. Where exactly to draw the line becomes a spectrum, and therefore a discussion you'd rather avoid.

I know companies that loudly demand transparency but don't like to talk about how they themselves work. When experts push standards they don't meet themselves, that should cost them credibility. Somehow it doesn't.

They even recommend specific wording that is supposed to build credibility: labels like "created with AI assistance" or "this content was partially auto-generated" sound practical and plausible at first glance.

But how many consulting firms label their own content that way? Nobody knows, because nobody says. And they don't say, even though they regularly explain why everyone else should.

This isn't hypocrisy in the malicious sense, and I don't think these people are deliberately lying. It's something more subtle: a blind spot that forms when you make rules for others without seeing yourself as affected. You're the consultant standing next to the system. You analyze it, you make recommendations for it, but you don't see yourself as part of it.

The reality is that today every text is under suspicion of being wholly or partially machine-generated. That applies to blog posts, to news articles, and especially to the texts of AI consultants themselves. Anyone who writes about AI without disclosing the role AI played in the writing doesn't just miss the chance to follow their own advice; they undermine the credibility of everything they publish.

Transparency isn't a feature you recommend to others; it's a stance that starts with you. And if it doesn't start with you, the recommendation is nothing more than an exercise in irony.

How these texts are written is explained here.