Ethics Sections as Liability Disclaimers

It always follows the same pattern. An ethical problem is named: systematic bias in algorithms, discrimination through deepfakes, a lack of transparency. Then something is hastily stammered in between that’s supposed to suggest responsibility. Things like regular audits, ethical guidelines and responsible handling of data. But responsible handling lasts exactly until the next technical feature starts the cycle all over again. It’s not enough that you hand over all your personal data to the platform operator when you register. Data continues to be collected covertly and obscured as much as possible, and everyone accepts it helplessly.

Most ethics sections read like terms and conditions. You know they’re there and you know you should have read them. But you also know that nobody seriously expects you to. One click on confirm is enough.

I’ve written terms and conditions myself, in earlier professional projects, as a consultant and co-responsible party. I know how they’re set up. A lawyer drafts what protects the company, not what protects the customer. The language sounds neutral and often convoluted, because the goal is not understanding but clean documentation. If a dispute ever arises, it should be provable that everyone was adequately informed. And everyone did confirm, after all.

The ethics sections in AI projects work the same way. They inform, but they don’t address. That’s a crucial difference.

Addressing would mean: here is a specific problem, because we learn more than we’re allowed to store. And here are the questions we still can’t answer, because the chatbot doesn’t behave reliably. That would be honest, but uncomfortable, because it would have to get ahead of the technical innovation and slow it down first. And because it would show plainly that the problems are real and that there are no easy solutions.

Instead it says: companies should conduct regular audits to identify and fix bias in their AI systems. It’s often about serious discrimination against genders and minorities. Yet nothing gets specified. Which audits should be conducted? At what intervals? By what criteria? Who conducts them? Who pays for them? What happens when the audit finds a gap or a direct violation? Does the system get shut down? Adjusted? Kept running?

None of that gets answered concretely enough to mean protection or consequence. Because the answers wouldn’t just be inconvenient, they would restrict operations to the point where competitiveness is often at stake. Anyone who honestly writes about the limits of AI ethics and implements the corresponding technical parameters gets less attention than anyone who pretends ethics is a matter of configuration.

I started looking more closely at the ethics sections in these kinds of documents. They weren’t verifiable anyway, so I tried to trace whether concrete actions could be derived from the words. The answer tends to be no. Ethics changes nothing substantial, because that would conflict with the business model. Like a fire extinguisher hanging on the wall that has never been inspected.

What bothers me about this is not the superficiality. It’s the function. The ethics sections are absent or inadequate, not only because tech engineers tend to rank ethics low in their requirements. They’re mostly missing because an AI application with ethical safeguards is much more complicated and expensive to implement. The result is very vague formulations that are never legally robust enough to offer protection. Like a disclaimer at the end of an ad. May contain risks and side effects.

Real ethical engagement is hard work. It demands tolerating contradictions and saying openly: this tool can discriminate and we don’t yet know exactly how to prevent that. That would build trust. Not because the solution is there, but because the honesty is. What happens instead is a simulation of responsibility. The words are there. The substance is missing. And the reader who doesn’t look closely walks away feeling that ethics was considered. It wasn’t. It was only mentioned.