The Anonymous Successes

There is one thing that almost never appears in an AI case study: a failure.

I looked for one. A cancelled project, an implementation that cost more than planned, a team that refused. I found nothing.

In the guides and whitepapers of the AI industry there are dozens of case studies across industries and applications. They describe successes exclusively. The improvements come in at 25, 35, 40, 60 percent. Implementation takes three to six months. Everyone is excited, and the numbers speak for themselves. Is that reality?

In practice, projects fail. Not all of them, but some. McKinsey puts the failure rate for AI implementations at over 50 percent. Gartner says most AI projects never make it past the pilot phase. The consulting firms whose clients read these guides know this.

But in the published examples this reality does not exist. All you find is the success story where everything works out.

What interests me is not whether the people writing these reports are lying; I cannot imagine that they are. More likely, they are reproducing a deeply ingrained pattern that is standard in the tech consulting industry.

In practice it works like this: a consulting firm supports an AI implementation. If it works, it becomes a case study. If it does not, nothing gets published and the failure disappears: not actively suppressed, just quietly swept under the rug.

The result is an industry that knows only success stories. Not because there are only successes, but because failure is not a viable business narrative. No consultant wins a contract by telling the story of the last client who failed. So they select, and the selection creates a picture that is not necessarily wrong but incomplete. In consulting, incompleteness passes for neutrality.

But whoever sees only successes misjudges the risk and plans with numbers that are not representative. And whoever then fails has no reference material for understanding what went wrong, because it is documented nowhere.

The anonymity of the examples makes the problem worse. Where no name is given, there is no verification. You cannot ask the company whether the numbers still hold two years later or whether the costs paid off. The anonymity does not protect the companies; it protects the claims from scrutiny.

I know consultants who tell me privately that most AI projects fall significantly short of expectations. That the data quality was insufficient, that resistance in the teams was underestimated, that costs exploded after the pilot: these are the things that get told over beers in the evening.

What does it say about an industry that publishes only one half of its experience? That it sells with an illusion rather than with the honest assessment that real consulting is built on. Consulting, properly understood, means telling the whole truth. Because whoever lets themselves be convinced by only one side of the coin decides blindly.

How these texts are written is explained here.