The End of Decision-Making

Data analysis is the epitome of logic, and the process of a data analysis should be logical as well: the first stage, Descriptive Analytics, examines the data situation, meaning what has happened so far. The second stage, Predictive Analytics, forecasts what will happen. The third, Prescriptive Analytics, tells you what to do. Each stage is a step forward, and the last stage is the goal: the recommendation for action.
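A minimal sketch of how the three stages build on each other, using an invented toy sales series and a deliberately naive forecast (all numbers and the capacity threshold are hypothetical):

```python
# Three stages of analytics on a toy sales series (hypothetical numbers).
sales = [100, 104, 110, 118]  # units sold per quarter

# Descriptive: summarize what has happened so far.
growth = [b - a for a, b in zip(sales, sales[1:])]
avg_growth = sum(growth) / len(growth)

# Predictive: a naive forecast of what will happen next quarter.
forecast = sales[-1] + avg_growth

# Prescriptive: a rule that turns the forecast into a recommendation.
capacity = 120
action = "expand capacity" if forecast > capacity else "hold steady"

print(avg_growth, forecast, action)  # 6.0 124.0 expand capacity
```

The point of the sketch is the shape of the pipeline, not the model: each stage consumes the previous one, and only the last stage outputs an instruction rather than information.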

So far, so normal. But Prescriptive Analytics isn’t simply the next technical upgrade. It’s something fundamentally different, and that gets overlooked in the tech frenzy.

From information to instruction

The first two stages, Descriptive and Predictive Analytics, deliver information. You get an analysis and a forecast, and then you decide. You can take the data, assess it based on your experience, discuss it with your team and reach a conclusion that may differ from what the numbers suggest. That’s entrepreneurial judgment and the core of what leadership means.

Prescriptive Analytics does something different. It decides. Formally, there’s still a person who signs off on the recommendation. But the recommendation comes from a model that considers more variables than any human ever could, that calculates faster and knows neither emotions nor politics nor ego. And when the machine’s recommendation is better than the human’s in most cases, what does the human’s final decision even mean?

It becomes a formality. The person only confirms what an algorithm recommends, not because they’re forced to, but because disagreement would be irrational. Who would risk their position to decide against the recommendation of an algorithm that demonstrably delivers better results?

That’s exactly what makes decisions questionable. I claim they’re quietly disappearing: not with a bang, not through a creeping change of habit, but quite rationally, because the numbers are simply better. A decision requires that you could also decide differently, that there’s a real alternative. If the alternative means deciding worse than an algorithm, it’s no longer an alternative. It’s more like a refusal than a decision.

Prescriptive Analytics, we’re told, helps companies decide faster and better. That’s a description that hides the actual point. It doesn’t help with deciding; it replaces it. And the decision migrates from you to the software without anyone questioning it.

I spoke with a CFO who told me exactly that, quite openly. He regularly receives new recommendations from an optimization system on fundamental topics like pricing strategy, assortment and workforce planning. He stopped questioning the recommendations long ago because they proved right every time. Not approximately right, but exactly right. And he told himself: “I just need to sign off on this.”

And then he told me something we initially laughed about, but that hasn’t left me since: “I don’t know anymore whether I’m still doing my job or just showing up.”

Essentially, this is an identity crisis born of optimization. That’s nothing new. But the underlying question here is different: what happens to people whose job was the decision when the decision is now made better by a computer? What does that do to the decision-makers who were initially promised immunity from the spread of AI? If software judges better, what’s left of executives?

The answer so far has been: the human retains the final authority, because they can always decide differently. But a different decision without a rational basis is no longer a decision; it’s stubbornness. And stubbornness has never been a sought-after qualification.
