The Anonymous Successes

There’s one thing that almost never appears in an AI case study: a failure.

I looked for one. A project that was cancelled. An implementation that cost more than planned. A team that refused. A result that was worse than before. I found nothing.

In the AI industry's guides, whitepapers, and presentations, there are dozens of case studies across industries and applications. They describe successes exclusively. And not just any successes. The improvements run at 25, 35, 40, 60 percent. Implementation takes three to six months. The team is excited. The numbers speak for themselves.

This isn’t practice documentation. It’s a catalog.

In real practice, projects fail. Not all of them. But enough. McKinsey puts the failure rate for AI implementations at over 50 percent. Gartner says most AI projects never make it past the pilot phase. These aren't obscure sources. These are the same consulting firms whose clients read these guides.

But in the published examples, this reality doesn’t exist. There’s only success. Only improvement. Only the story where everything works out.

What interests me about this isn’t whether the people writing these reports are lying. I don’t think they’re lying. I think they’re reproducing a pattern that’s standard in the AI consulting industry. A pattern so deeply embedded that it’s no longer recognized as distortion.

The pattern works like this: a consulting firm supports an AI implementation. If it works, it becomes a case study. If it doesn’t work, nothing comes of it. No paper, no talk, no report. The failure disappears. Not actively suppressed. Just not told.

The result is an industry that only knows success stories. Not because there are only successes. But because failures aren’t a business model. No consultant wins a contract by telling the story of the last client who failed. No whitepaper generates leads by saying: half the time it doesn’t work.

So they select. And the selection creates a picture that isn’t wrong, but incomplete. And incompleteness in consulting isn’t neutral. It’s a recommendation. People who only see successes misjudge the risk. People who only see numbers between 25 and 60 percent plan with numbers between 25 and 60 percent. And when they end up at minus 10 percent, they have no reference material to understand what went wrong. Because it’s documented nowhere.

The anonymity of the examples makes the problem worse. When no name is given, there’s no verification. There’s no way to check with the company: is that true? What does it look like two years later? Did the numbers hold up? Did the costs pay off? The anonymity doesn’t protect the companies. It protects the claim.

I know consultants who tell me privately that most AI projects fall significantly short of expectations. That data quality in most companies isn’t good enough. That resistance in teams is underestimated. That costs explode after the pilot. They tell these stories over dinner. Not on stage. Not in the report.

What does it say about an industry that only publishes one half of its experience? It says the industry sells products, not advice. Advice means telling the whole truth. Sales means telling the half that convinces.

The industry tells the convincing half. Good enough to justify an investment. Not good enough to base a decision on.