How much automation is acceptable?

There’s a question that comes up often in the AI debate but mostly goes unanswered: how much automation is acceptable, and when does it lose credibility or even become dangerous?

It’s the right question. Not whether AI works or how to implement it. But how much of it we want.

The answer could take the whole discussion in a different direction. An honest direction. One that acknowledges that more technology isn’t automatically better. That the answer to the question could be: less than we think.

But the answer that always comes is a hybrid model. Humans and machines working together and sharing the tasks. AI handles the routine work, people the important decisions. The point being to find the right balance.

That sounds reasonable. It sounds like a compromise you can hardly disagree with.

But it doesn’t answer the question.

The direction of automation

A hybrid model doesn’t say how much automation is acceptable. It says: somewhere in between. It shifts the decision from the fundamental level to the implementation level. Not whether, but how. Not how much, but what mix. The question about the limit gets replaced by the question about configuration.

I’ve worked on projects where hybrid was always the beginning. First the machine replaces only the boring tasks. Then the repetitive ones. Then the ones nobody wants to do anyway. And at some point, hybrid is just another word for automated, with a human who occasionally presses a button.

The direction is always the same: more automation. Never less. No company implements a hybrid model and then decides on less machine and more human. The economic logic pulls in one direction. Hybrid is always a transition phase but never the actual goal.

Everyone who works with AI knows that, regardless of the industry. The idea that the process stops somewhere and stays at its sweet spot, where humans and machines collaborate as equals, contradicts everything the history of automation shows.

What bothers me about this answer is that it ignores this contradiction. History is more likely to repeat itself here than the principles of automation are to suddenly stop applying.

Reality only knows one direction: more automation. Machines cost less than people. Asking how much ignores this reality.

Take customer service. AI handles the simple inquiries. For complex cases, a human takes over. Hybrid sounds good here. But what happens when the AI can handle the complex cases too? Will the human be brought back in, on principle, or rationalized away because the machine is cheaper and performs at least as well?

The question answers itself.

So, how much automation is acceptable? The honest answer would be: We don’t know. We have no criteria for it. We have no framework that says: Here’s the line, beyond it a human should stand, not because the machine can’t do it, but because it wouldn’t be right.

As long as there’s no honest answer, we’ll keep hearing: hybrid. And hybrid is the word you use when you don’t want to seriously answer the question. Or when you ignore reality.
