Demanding Transparency Without Practicing It
In every other keynote on content ethics, there’s a sentence that sounds so reasonable you almost don’t look twice. Companies should clearly communicate how their content was generated. Transparency. Honesty toward the customer. Building trust. All correct.
Then you look at who’s saying it. The consulting firms, the agencies, the thought leaders. And you look for any indication. On the website. In the whitepaper. In the newsletter. Somewhere it should say how their own content came into being. Which parts were generated with AI. Which passages were revised, and which were entirely human-written. There’s nothing.
Some hint that, for their own work, AI doesn’t always deliver the quality they want. But what does that mean, exactly? Is AI used for research? For drafts? For wording that then gets reworked? For structuring? For summarizing studies? None of it is disclosed. By the very people who demand transparency from others.
The demand for transparency is easy. It costs nothing as long as it applies to others. The moment it applies to yourself, it gets complicated. Suddenly there are nuances. Suddenly it’s not black and white. Suddenly it’s a spectrum, and where exactly you draw the line is a conversation you’d rather postpone.
I know this from companies. The ones who demand transparency the loudest are often the ones who say the least about how they themselves work. It’s a game. You demand a standard and position yourself as the expert who enforces it. And because you’re the expert, nobody challenges you on it, even though you don’t deliver on it yourself.
The industry even recommends specific wording. Labels like “created with AI assistance” or “this content was partially auto-generated.” Practical. Actionable. For others.
How many consulting firms label their own content that way? I don’t know. And that’s exactly the point. I don’t know because they don’t say. And they don’t say, even though they regularly explain why you should.
This isn’t hypocrisy in the malicious sense. I don’t think these people are deliberately lying. It’s something more subtle. It’s the blind spot that forms when you make rules for others and don’t see yourself as affected. You’re the consultant. You stand next to the system. You analyze it, describe it, make recommendations. But you’re not part of it. Or at least you believe you’re not.
The reality is that today every article, every whitepaper, every text is under suspicion of being wholly or partially machine-generated. That applies to blog posts, to news articles, to academic papers. It also applies to the AI consultants’ own writing. Especially that. Anyone who writes about AI and doesn’t disclose the role AI played in the writing process doesn’t just miss the opportunity to follow their own advice. They undermine it.
Transparency is not a feature you recommend to others. It’s a stance that starts with you. And if it doesn’t start with you, the recommendation is nothing more than an exercise in irony.
On the websites are the names of the experts. In their talks is the demand for clarity about the origin of content. Between those two things lies a silence that says more than any ethics section.