Trusting technology
A sentence caught my eye because it sounds so self-evident: humans must decide how far they want to trust AI.
The word trust held me and wouldn’t let go.
Trust is something that develops between people. It has to do with intent and with the assumption that someone is well-meaning toward you or at least not deliberately working against you. Trust isn’t a technical term but a very human one. It presupposes that on the other side there’s someone who chooses how to treat you.
An algorithm doesn’t choose; it computes. That’s a fundamental difference, and the word trust makes it invisible.
If I suspect my calculator gave me a wrong result, I don’t say: I don’t trust it. I say: it’s broken. Or: I typed in something wrong. The tool has no intent, it works or it doesn’t. Trust is the wrong category here.
But with AI we use the word more and more. We trust in AI, AI is trustworthy. The entire regulatory debate revolves around it. And with the word we import a whole framework of expectations.
When I trust someone and they disappoint me, it feels like betrayal. When a tool fails, it feels like a malfunction. The emotional charge is completely different. But when we place trust in an AI chatbot and it delivers a wrong result, we don’t react the way we would to a broken tool. We react the way we would to a betrayal. We feel deceived. Even though there was nobody there who wanted to deceive us.
The common argument treats trust as a decidable criterion. Low trust, high trust. More trust after good experiences, less after bad ones. It sounds rational at first glance, but it misses what happens when you trust something that has no consciousness.
I see this in companies more and more often. Teams rely on AI results because they look good, and they stop questioning those results because the machine has been right so far. That’s not trust. That’s habituation. It feels the same, but the mechanism behind it is different. Habituation makes you sluggish, trust makes you vulnerable, and both can become dangerous in their own way.
What I’m missing is a language that draws the distinction. Instead of “trust in AI” we could ask: how reliable is this tool in this context? That sounds less elegant and less human, but it reduces the risk of confusing people with algorithms.
The personification of technology is nothing new: we give cars names and curse at computers. But with AI, personification has consequences because the outputs are linguistic and therefore sound like they come from a person. The form is human even when the content is mechanical.
I often hear that we build trust over time as the results keep getting better. Small tasks first, then bigger ones, and when the AI proves itself we give it even more responsibility. That’s the same pattern you use to onboard a new employee. The language no longer distinguishes between a person and a tool.
I think the real point isn’t whether we should trust AI but that we’ll soon stop distinguishing between a tool and a counterpart. And that a word like trust quietly erases that difference. So the question isn’t how far we want to trust AI but whether a different term might be a better fit for the technology: distrust.