The Creators Fall for Their Own Illusion
While Peter and I were writing our book about artificial intelligence, we spent a lot of time studying AI influencers, and Peter had created one himself for that purpose: Lea. What we noticed early on was that we sometimes found it hard not to treat her like a real person. We had to actively remind ourselves that Lea didn’t actually have a personality.
That was one of our most honest admissions, and it took me a long time to understand it; at first I didn’t even grasp its implications. Peter had calibrated Lea’s facial expressions perfectly, and by then she had her own wardrobe, her own apartment, and a fondness for the outdoors. At some point the story felt so real, and Lea looked so convincingly lifelike, that we completely forgot she wasn’t.
So this time it wasn’t the followers who might have mistaken Lea for a real person but her very creators, the people who should have known better. At first we laughed about it and compared it to an old myth: Pygmalion, the sculptor who falls in love with his own statue. He knows he carved her, knows she is ivory, and still he feels for her as if she were flesh and blood.
The myth is thousands of years old, and we laughed at the fact that the same thing was now happening in an office where developers sit at screens designing an AI character and then start having conversations with her as if she were someone. It began as a test, and then entire narratives grew out of it that we seriously bought into.
What interested me, naturally, was what this says about us as humans. If the illusion gets so good that even those who build it fall for it, do we have a problem? Do we need more transparency, maybe a notice: Warning, this is an AI-generated person? But if there was a problem here, it wasn’t a lack of knowledge. The creators knew.
We’re wired to respond to faces, to gazes, to voices that sound familiar. These aren’t conscious decisions but neurological programs that run before the conscious mind intervenes. You see a face and react. Friend or foe, familiar or strange. It happens in milliseconds and no amount of reason is fast enough to keep up.
An AI that is well designed enough hacks these programs. Not on purpose and not with malice, but simply by sending the right stimuli in the right order. And then you stand there, a grown adult who knows it’s a machine, and feel something anyway. It sounds almost tender, as if it were a charming side effect of the coding work: look how real Lea has become, look how convincingly lifelike she is.
I read it differently today. I read it like this: Even those who understand a lot about the technology couldn’t protect themselves. And what does that mean for everyone else who took her for real anyway?
It means that neither transparency nor knowledge is enough to perceive the deception as deception. When a stimulus is strong enough, it overrides everything we know. That’s not a human weakness; it’s biology. In this case the technology is optimized for exactly that, and biology does the rest. I’m sure there are scientific models that explain this, but I don’t believe they help us. Because the moment the creators fall for their own creation, the illusion becomes reality.