The Anonymous Case Studies
A mid-sized company implements AI in customer service. Within six months, customer satisfaction increases by 35 percent. A company in the consumer goods industry uses AI for workforce planning. Efficiency increases by 40 percent. A company deploys AI in marketing. The conversion rate doubles.
No name. No company. No industry beyond a generic category. No contact person. No source. No footnote. Nothing you could verify.
At some point I started collecting these examples. In guides, whitepapers, conference talks. They all follow the same pattern. A company has a problem. It deploys AI. Within three to six months, the numbers improve by 25 to 60 percent. No complications. No resistance. No phase where it didn’t work. No employee who had concerns. No IT department that was overwhelmed. No costs that spiraled out of control.
That doesn’t smell like practice. That smells like a template.
In my career, I’ve participated in many implementations. Technology, processes, strategies. Not a single one went the way these examples describe. In reality, there’s always a moment where something doesn’t work. Where the data isn’t clean. Where the employees don’t get on board. Where the costs are higher than planned. Where someone stands up in a meeting and says: this was a bad idea.
That moment is missing from every single one of these examples.
There are two explanations. Either only companies where everything went perfectly were consulted, or the examples were tailored to support the argument. The first explanation is unlikely. The second is a problem.
Anonymous examples aren’t inherently bad. Sometimes companies don’t want to be named. That’s understandable. But when every example is anonymous, an important check disappears. Nobody can follow up. Nobody can verify. Nobody can say: that’s not right, I know the company, it was different.
That check isn’t an academic luxury. It’s the difference between a case study and a claim. You can examine a case study. A claim you have to take on faith.
And the claims all follow the same arc. Before: slow, manual, error-prone. After: fast, automated, efficient. The numbers always land in the same corridor. Never below 20 percent improvement, because that would be unspectacular. Never above 70 percent, because that would be unbelievable. The sweet spot of plausibility.
I wonder who produced these numbers. A consultant who supervised the implementation? The software vendor? The company itself, which has to justify its investment? The source is missing, so the question stays open. And unanswered questions about sources, in texts that want to influence purchasing decisions, are not a minor detail.
There’s a simple test for case studies. Do they contain something that contradicts the argument? Something that wasn’t planned? Something that went wrong? If yes, they sound like practice. If no, they sound like sales.
These examples contain nothing that went wrong. Not once. In no company, in no industry, in no application. Either AI is the first product in the history of technology that always works. Or what didn’t fit the picture was left out.
I know where I’d put my money.