Real Was Already Fake, So Completely Fake Is Just Consistent

The argument for AI influencers always goes the same way. Human influencers stage their lives anyway. They show a life that doesn’t actually exist. So the step to fully generated influencers is just logical.

I’ve heard this argument many times by now. It sounds coherent. It has the shape of a good argument. But it normalizes something that is not normal by framing it as a continuation of something already accepted.

The pattern works like this: Step one is problematic but gets tolerated. So step two, which goes further, is presented as the logical consequence of step one. The tolerance that applies to step one gets transferred to step two. And suddenly something is normal that was recently still unthinkable.

The trick: step one gets tolerated, step two gets sold as consistent. The result is normalization.

Influencers stage their lives. That’s true. But a person who stages their life is still a person. They have a life outside the frame. They make decisions. They wake up in the morning and decide what to show and what not to. The staging is real, even if what’s shown isn’t. Someone is there.

With an AI influencer, nobody is there. There is an algorithm that generates a face that isn’t a face. That tells a story that has no story. That shows emotions that aren’t emotions. Not staged emotions, as with a human influencer, but calculated signals that simulate emotion.

The difference is not gradual. It is fundamental. But it gets treated as gradual. And that’s exactly where the shift happens.

If real was already fake, then completely fake is just consistent. That’s not analysis. That’s normalization. It says: The line was already crossed anyway. So it doesn’t matter if we go further.

I know this argument from other contexts. The news was already distorted, so fake news doesn’t matter. Politicians already lied, so a completely fabricated resume makes no difference. The air was already polluted, so a little more doesn’t count.

It’s always the same structure. An existing problem gets used to justify a bigger one. And whoever objects gets called naive. Come on. It was already like that before.

Yes. It was already problematic before. And the new thing doesn’t make it less problematic, it makes it more so. But if you accepted the first step without questioning it, you lack the basis to reject the second. That’s the trick. Not a deliberate trick. A structural one. It happens without anyone planning it.

AI influencers get described as a business model. As innovation. As the next level. The question of whether that level leads in a direction we want is never asked. Because in this logic, the only thing that counts is that it works. That it generates reach. That someone makes money from it.

But this isn’t just about influencers. The argument applies everywhere. If human customer support was already bad, automated support can’t be worse. If political speeches were already hollow, it makes no difference whether AI writes them. If job interviews were already theater, you might as well let algorithms run them.

Each time, something shifts. Not much. Just enough to sound logical. And at some point you’re standing at a place where nothing is real anymore, and someone says: It never was.

The question is not whether it was real before. The question is whether we’ve stopped trying.