What Is the Right Ethical Question?

In the AI industry, a question keeps coming up: What is the right ethical question? That question stopped me.

Not because of the answer. The answer, when it comes, is as vague as you’d expect. But because of the question itself. It doesn’t ask: How do we solve the ethical problem? It asks: What is the problem in the first place?

After years of development. After countless texts about AI in marketing, AI in communication, AI in customer retention. After all the examples and best practices. And then, at the very end, comes the question: What exactly is the ethical question we should be asking?

It’s the most honest moment in the entire discussion. And I’m not sure the people asking it realize that.

Honest, because it shows that all the technology was developed without a clarified ethical foundation. First they built. Then they sold. And at some point, at the end, someone asks: Was that okay? Not as an answer. As a question.

I’ve seen this in companies. The ethics discussion arrives when the product is finished. When the investors are on board, the roadmap is set, the first customers are in place. Then a workshop is called where smart people sit around a table and ask: What are the ethical implications, actually? The question is sincere. But it comes too late.

The industry does the same thing. For years it describes what’s possible, what’s profitable, what works. And then, almost at the end, comes the question: Should we?

The sequence matters. If you build first and then ask whether you should have built, the question is no longer a real question. It’s a reflection exercise. The product exists. The question changes nothing about that.

What stays with me about this question is its openness. Nobody claims to know the answer. Nobody even claims to know the question. They’re searching for the right framing. That’s unusual in an industry that otherwise brims with certainties. AI will transform marketing. AI will revolutionize customer retention. AI will increase efficiency. Everything will. Everything is certain. Only with ethics does it suddenly become uncertain.

That says something. It says the industry has expertise in the technology but not in the ethics. It says they know how to build AI but not whether they should. And instead of naming that as a problem, they frame it as an open question. As if it were normal to develop a technology without knowing whether the ethical questions have even been formulated.

Maybe it is normal. Maybe that’s exactly how technology comes into being. First the possibility. Then the application. Then the market. And at some point, when enough damage has been done, the regulation. Ethics doesn’t come at the beginning. It comes at the end, when it’s too late, and it’s framed as a question, not an answer.

The question is an admission. Not an intentional one. But an honest one. It says: We don’t know what the right question is. And if you don’t know the question, you can’t have an answer. And if you don’t have an answer, you’re building without an ethical foundation.

That’s what happened in the years before it. They built. Without a foundation. And at the end there’s a question asking what the foundation should have been.

The question doesn't get answered. Considerations are listed. Approaches are sketched. Regulation and transparency are pointed to. But a clear answer never comes.

That is, in a strange way, the most valuable thing in the entire discussion. Not the certainties. The one place where it’s admitted there aren’t any.