Selling Perfection and Warning Against It in the Same Breath

AI influencers are always perfect. They have no bad days, no hair loss, no dark circles. They show a life that never happens, at a quality that never wavers. That’s their selling point.

Three paragraphs later, in every other industry text: Perfect AI influencers can foster low self-esteem in followers.

I read that twice. Not because I don’t understand it. But because I want to see if the people who write it notice what they just did.

They described the advantage and the damage. In the same passage. As if they were two different topics. The advantage sits in the main text. The damage sits in a side remark. The advantage is explained, developed, backed with examples. The damage is mentioned and then things move on.

That’s the structure of the entire discussion about AI in marketing, but here it’s especially visible. There’s a main narrative: AI is good for business. And there are footnotes that say: There are problems. The footnotes don’t disturb the main narrative. They coexist, as though they had nothing to do with each other.

But they have everything to do with each other. The perfection that creates the advantage is the same perfection that causes the damage. You can’t have one without the other. You can’t say: Perfect AI influencers are great for marketing, and oh by the way, they destroy the self-esteem of their followers on the side.

Yet that’s exactly what happens.

I know the pattern from product brochures. The main text sells. The fine print warns. Side effects may occur. Not suitable for everyone. For risks and side effects, consult your doctor. The warning doesn’t exist so the reader is informed. It exists so the sender is protected.

That’s how the entire discussion reads. The warning about low self-esteem is there so nobody can say the problem was ignored. But the warning doesn’t change the recommendation. AI influencers are described as innovation, as opportunity, as the next step. The warning is a hedge, not a correction.

What stays with me isn’t the contradiction itself. Contradictions are everywhere. What stays with me is the ease with which it just stands there. The industry seems to have no problem promoting and warning in the same breath. As if that were normal. As if it came with the territory.

Maybe it does. In an industry that calls people target groups and calls influence engagement. In an industry that has spent decades selling products it knows cause harm, as long as the profit outweighs the damage. In that industry, a contradiction between benefit and side effect isn’t a mistake. It’s the default.

But this isn’t about a medication with side effects. It’s about the self-esteem of people who follow a generated image that never existed. That’s not a side effect. That’s the effect. If you show someone, day after day, an image that’s more perfect than anything they’ll ever be, and that image isn’t even real, then low self-esteem isn’t the side effect. It’s what happens.

The people who build this know it. They wrote it down. And then they kept writing as if they hadn’t.

That’s the most remarkable thing about this discussion. Not the information. It’s there. But the ability to have an insight and ignore it at the same time. The knowledge is there. The consequence is missing. All the facts are on the table. Nobody draws any conclusions from them.

Because the conclusions would be uncomfortable. Because the conclusion would be: Maybe you shouldn’t build perfect AI influencers. And that’s not in the business plan.