What Is the Right Ethical Question?
Ethics keeps surfacing as a topic in tech, and especially now around AI. The discussion is moving in a specific direction: What is actually the right ethical question to ask? But something about this bothers me beyond the question itself.
Not so much because of the answer, which will stay as vague as answers in ethics usually do. The issue isn’t answering this one question. It’s the framing: moving from “How do we solve the ethical problem?” to “What is the problem in the first place?”
After years of extremely fast development in tech, and after the changes in how companies market to and communicate with their customers, the question of ethical boundaries may have fallen so far behind that we’ve lost the thread. Today the real question has to be: What exactly is the ethical question we should be asking?
That would be the most honest approach in the entire discussion right now. But I’m not sure when the right moment is, given the leaps in AI development and ethical risks that probably only a few people can fully grasp. One thing is clear: we can’t afford to sleep on this. For many, the red light is already flashing. There are constant reports of massive violations, like those involving Grok, the AI chatbot that seems to have no sense of decency whatsoever.
Entire technologies were developed without a clarified ethical foundation. First they built, then they launched, then came the problems, then excuses but no solutions. Solutions are now being sought in courtrooms, where Mark Zuckerberg seems to appear more and more often. Only when it is already too late, when our children have already been served content that not even we adults want to see, do courts discuss with platform operators whether that was acceptable.
I’ve seen this in smaller companies too. The ethics discussion arrives when everything is finished and the investors are already rubbing their hands. Only when there’s a compelling reason does a workshop get called where smart people sit around a table and ask: What are the ethical implications, actually? The question is certainly sincere. But it comes too late, and usually only under external pressure.
Many industries do exactly this. For years they plan what’s possible, what’s profitable, what works. Then they implement, adjust, improve, and then, far too late, comes the comment: We haven’t forgotten about ethics; we’ll do that now.
What bothers me so much about this is the legal freedom most companies enjoy in doing it. This approach is standard by now. Nobody even claims to do it differently anymore. They point to market pressure and the simplest answer of all: everyone else does it too. Only when everything is finished does the search for the right question begin.

That’s not unusual in an industry that otherwise brims with certainties. AI will transform marketing. AI will revolutionize customer retention. AI will increase efficiency. Everything will. Everything is certain. Only with ethics does it suddenly become uncertain.

And it’s not just about children but also about many other groups in our society who either need protection or simply can’t handle certain information. I won’t go into specific examples here because that would distract too much from the principle.
What we learn here is that the industry has expertise in the technology but not in the ethics. Companies know how to build and implement AI, but not whether they should. Yet this isn’t named as a problem but as an open question. As if it were normal to develop a technology without knowing whether the ethical questions have even been formulated.
Maybe it is normal by now. Maybe that’s exactly how technology comes into being. First the possibility. Then the application. Then the market. And at some point, when enough damage has been done, the regulation. Ethics doesn’t come at the beginning. It comes at the very end, when it’s often too late, and then it’s still framed as a question. Not as an answer, because the question never existed.