Efficiency Without Judgment

I’d heard it several times in recent meetings, and I was about to join in with the general sentiment that we now want to redefine efficiency with a chatbot. But first I had to think for a while about what that actually means. And I think it means nothing. Efficiency is efficiency: more output per unit of time. That’s not new; it’s been the definition since the industrial revolution. I wasn’t around for that, but even before AI there were plenty of software innovations promising ever more efficiency. What’s new here is the speed. And that’s exactly where the problem lies.

When everything gets faster, the slowest part becomes the bottleneck. What’s the slowest part? The human. Not as labor, that’s replaceable. But as someone who has to judge whether what was produced is actually good.

A colleague showed me a report last week that he’d created with AI. Ten pages, cleanly formatted, with charts and recommended actions. He needed twenty minutes for something that used to take a day. I asked him if he’d read the report. Of course, he said. Skimmed it.

Skimmed it. Ten pages of recommendations that decisions will be based on, and he skimmed them before sending them out. Not because he’s lazy, but because the speed of production has changed the expectation. When something is created in twenty minutes, you treat it like a draft, not a result. But it gets passed along as a result.

This is what happens when efficiency increases without judgment keeping up. You produce more and more but check less and less. And the ratio shifts further with every acceleration.

AI can accomplish in minutes what takes teams weeks. OK, that’s exaggerated, but it feels true anyway, at least for some tasks. But the teams didn’t spend those weeks just producing. They spent them thinking: discussing, conceptualizing, discarding. The time wasn’t waste. It was the space in which judgment formed.

Judgment takes time. Not because humans are slow, but because a good assessment requires you to let something sink in. To sleep on it, because you see a draft differently in the morning than you did the evening before. To talk to someone about it and notice the gaps. This process can’t be sped up. It’s human. And it’s the only thing that remains when everything else is fully automated.

I know companies that have quintupled their content output since introducing AI. Newsletters, blog posts, social media, reports. Everything five times what it was before. If that’s even enough. But quality control has stayed the same: there’s still one person sitting there who now has to skim five times as much. The result is predictable: errors, shallow interchangeable content, and statements that nobody checked or even saw. But the consensus is clear: the output numbers are right.

Efficiency without judgment is the faster production of noise. That’s not progress. That’s just more volume.

In companies, this is called a “productivity gain.” And the numbers confirm it: they go up. After all, more was produced and less time invested, so costs have theoretically gone down. But who measures whether what was produced is actually any good? More quantity is not automatically more value.

I’ll probably wait a long time for someone to describe slowness as a competence. For someone to say: yes, AI can create a report in twenty minutes. But the real question is who sits down afterward and checks whether it’s right. And whether what it says should actually be implemented. Because what is efficiency? Does producing faster mean deciding faster? And what is the risk of deciding faster when the basis was only skimmed?
