Real Was Already Fake, So Completely Fake Is Just Consistent
AI influencers are actually a logical consequence. Because real, human influencers stage themselves artificially anyway. You can already check off the A for artificial. Influencers present a life that doesn’t exist in that form. So the step to influencers generated entirely by artificial intelligence isn’t a big one anymore.
I’ve heard this argument many times by now, and at first glance it’s coherent. It sounds like the right argument, but it’s fundamentally wrong. Because something is being normalized that is not normal at all. And it doesn’t become more normal just because it continues something that was already accepted everywhere.
The pattern works like this: step one is problematic but gets tolerated. So step two, which goes further, is presented as the logical consequence of step one. The tolerance that applies to step one gets transferred to step two. And suddenly something is normal that was recently still unthinkable.
Influencers stage their lives. That’s true enough. But a person who stages their life is still a person. They have a life outside the staged image. They wake up in the morning and make decisions about what to show and what not to. The staging is real, even if what’s shown isn’t. But someone is there. A real person.
With an AI influencer, nobody is there. There is an algorithm that generates a face that isn’t a face. That tells a story that has no story. That shows emotions that aren’t emotions. Not staged emotions, like with a human influencer, but precisely calculated signals that simulate emotions.
The difference is not gradual, it’s fundamental. Because the declared goal is that the staging has to look real. Authenticity is the buzzword that gets used constantly in this context. And this is where the mistake begins: the misunderstanding that fake becomes real when it’s presented authentically.
So we’re being told: if real was already fake, then completely fake is just consistent. But that’s not logic. That’s dangerous normalization. It says: the line was already crossed anyway. So it doesn’t matter if we go further.
I know this argument from other contexts. The news was already distorted, so fake news doesn’t matter. Politicians already lied, so a completely fabricated resume makes no difference. The air was already polluted, so a little more doesn’t count.
It’s always the same structure. An existing problem gets used to justify a bigger one. And whoever objects gets called naive. Come on. It was already like that before.
Yes. Before, it was already problematic. And that doesn’t make it less problematic, it makes it more so. But if you accepted the first step without questioning it, you lack the basis to reject the second. That’s the trick. Not a deliberate one, but a structural one that obscures and blurs the truth. It happens without anyone consciously planning it.
AI influencers get described as a business model. As innovation. As the next level. The question of whether that level leads in a direction we want is never asked. Because in this logic, the only thing that counts is that it works. That it generates reach. Reach that makes money. Counterfeit money, basically.
But this isn’t just about influencers. The argument applies everywhere. If human customer support was already bad, it doesn’t get worse by being automated. If political speeches were already hollow, it makes no difference whether AI writes them. If job interviews were already a staging of a better self, you might as well let algorithms run them.
Each time, something shifts. Not much. Just enough to sound logical. And at some point you’re standing at a place where nothing is real anymore, and someone says: it never was anyway.
The question is not whether it was real before. The question is whether we still believe ourselves.
How these texts are written is explained here.