Manipulation Risk as a Subpoint

In most texts about AI in marketing, there’s a section on risks somewhere. It sits between the sections on monetization and scaling. It covers manipulation risk, psychological effects and moral or ethical concerns. The risk section is rarely one of the detailed ones. Relative to the rest of the text, it gets about as much space as you’d give a known bug in the release notes.

Can you tell from that how the AI industry thinks about morality?

Moral questions are treated like technical problems. An issue gets logged, climbs the priority scale, and someone writes a fix so the next version runs better. That works for software. For moral questions it doesn’t.

The manipulation risk is real. Meta doesn’t get dragged into court over manipulation allegations for nothing. But morality can’t be patched. It’s not a bug in the system. It is the system. And when you build an AI that influences people and steers decisions, manipulation isn’t a bug that occasionally occurs. It’s a core function that can go too far. And it does.

But the industry frames it very differently. It lists risks the way you list specifications. Manipulation risk in cases of low self-esteem and unrealistic expectations. Period. The suggestions that follow sell the familiar remedies: transparency, regulation, education and optimization.

In my work I’ve seen often enough how risk assessments work. Risks are written down and evaluated, measures are assigned, the measures are checked off, and everyone is satisfied. A documented risk counts as managed. It isn’t automatically solved, but it’s on the radar and declared under control. That has to be enough, and the critics, at least at the political level, are quickly satisfied. For whatever reason.

For a software application with too much downtime or newly discovered security holes, that’s usually enough. For a “social” platform that shapes the self-image of young people, it’s not. And the people who build and use these systems apply the wrong standards here. They come from the tech world, and in the tech world there’s a solution for every problem. If the solution doesn’t exist yet, it will after the next bugfix.

But ethics is not a checklist, and transparency is not a feature. And regulation, in that worldview, would only cripple the speed of development and the path to success.

What’s missing is the possibility that some problems aren’t solved by better technology but only by the decision to do something differently, more slowly, or not at all. That possibility doesn’t sound business-friendly, and it doesn’t exist in the thinking of the tech world.

I think of a conversation I had years ago with an engineer. It was about a product that was technically possible but questionable. He said: if we don’t build it, someone else will. So we’d better build it, because we can build it better than the others. The argument sounds pragmatic. In truth it’s an abdication. Because it makes explicit what nobody says plainly: I’m not responsible for what I build, because someone else would build it anyway.

The entire tech industry argues the same way. The risks are there, but the technology is coming anyway. We can’t help that. So the conclusion seems automatic: better to shape it yourself than to leave it to the competition. That even sounds responsible. But it assumes that shaping is enough. That you can shape a technology whose core function is influence in such a way that the influence only produces good outcomes.

I don’t know if that can work. Because the problem isn’t that risks are being hidden. They’re in the texts. The problem is the weighting. The risks are a subpoint. The opportunities are the main topic. Morality is being managed.