Demanding Explainable AI, Recommending Black Boxes

There’s a demand that shows up in nearly every AI strategy and sounds reasonable: AI systems must be explainable. Companies should understand how their models reach decisions. Transparency is critical for trust. Without explainability, there can be no responsible use.

I agree. Explainable AI is a sensible demand. The problem is what happens at the same time.

The concrete tools everyone recommends are ChatGPT, GPT-4, DALL-E, Midjourney. Tools built on large generative models whose inner workings even their developers don’t fully understand. Nobody at OpenAI can explain to you in detail why GPT-4 gives a particular answer to a particular question. Roughly, perhaps. In detail, no. It’s not possible. The models are too complex. They have billions of parameters whose interactions are beyond any human comprehension.

These are black boxes. By definition. Not as an accusation, but as a technical fact. Input in, output out, and in between something that no one on earth can fully explain.

And the same people recommend these tools on one hand and demand explainability on the other. Sometimes in the same document. Sometimes in the same paragraph.

I’ve thought about whether this is ignorance or tactics. I think it’s neither. It’s habituation. We’ve gotten used to using tools we don’t understand. All of us. I don’t understand in detail how my phone works. I use it anyway. But my phone doesn’t make decisions about credit approvals, hiring, or medical diagnoses.

The difference between a tool you don’t understand and a tool that makes decisions you don’t understand is fundamental. With the first, you trust the result because you can check it. The text is good or bad. The image fits or it doesn’t. You are the corrective. With the second, you trust the result because you have no choice. The machine says: This applicant is qualified. Based on what? The model says so.

The whole industry moves between both worlds without marking the difference. It recommends ChatGPT for writing and calls that productivity. It recommends AI-based analytics for business decisions and calls that transformation. And then it demands explainability, as if that were a switch you could flip.

Explainable AI exists. There are models whose decision paths can be traced. Decision trees, linear models, rule-based systems. They’re less powerful than neural networks. They can do less. But you understand what they do and why. Nobody talks about them. Because they’re not impressive. Because they don’t deliver the headlines that ChatGPT delivers. Because a strategy built on decision trees doesn’t sound as exciting as one built on GPT-4.
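To make concrete what “traceable” means here, a minimal sketch of a rule-based system: every decision comes back with the exact chain of rules that produced it. The credit scenario, the rules, and the thresholds are all hypothetical, invented purely for illustration; the point is only that the trace exists.

```python
# A hypothetical rule-based credit check. Every threshold here is
# made up for illustration -- the point is that each decision carries
# the complete list of rules that led to it.

def assess_applicant(income, debt_ratio, years_employed):
    """Return a decision plus the exact rule path that produced it."""
    trace = []
    if income < 30_000:
        trace.append(f"income {income} < 30000 -> reject")
        return "rejected", trace
    trace.append(f"income {income} >= 30000 -> continue")
    if debt_ratio > 0.4:
        trace.append(f"debt_ratio {debt_ratio} > 0.4 -> reject")
        return "rejected", trace
    trace.append(f"debt_ratio {debt_ratio} <= 0.4 -> continue")
    if years_employed < 2:
        trace.append(f"years_employed {years_employed} < 2 -> reject")
        return "rejected", trace
    trace.append(f"years_employed {years_employed} >= 2 -> approve")
    return "approved", trace

decision, trace = assess_applicant(45_000, 0.3, 5)
print(decision)            # prints "approved"
for step in trace:
    print(" ", step)       # the full, human-readable decision path
```

“Based on what?” has an answer here, line by line. That is the capability a billion-parameter model cannot offer, however good its outputs are.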

Demanding explainability while recommending black boxes is like a nutritionist telling you to watch your ingredients and then recommending a product whose label has been blacked out.

What bothers me most about this is how casual it is. The contradiction isn’t discussed. It isn’t even noticed. The demand is in one section, the recommendation in another, and enough distance in between that nobody holds both in their head at the same time.

But holding both in your head at the same time is exactly what’s needed. Because the question isn’t whether AI is useful. The question is whether you’re prepared to make decisions on foundations you can’t trace. And whether you call that transparency just because you wrote the demand for it down.