The Book That Refutes Itself
I wrote a book about AI in business. About possibilities, limitations, practice. Of course I used AI. For research, for drafts, for cross-checking.
The texts that came back sounded like me. Right words, plausible structure. But they weren’t true. Not untrue in the sense of factually wrong. Untrue in the sense of: that’s not what I meant. That’s not my experience. It sounds like me, but it doesn’t come from me.
Correct and true are not the same thing. An AI can produce correct sentences. But a true sentence needs someone who means it. Someone who has had an experience and puts it into words. Not because the words are optimized, but because they are right.
Writing is thinking. When you write, you notice what you haven’t understood. When you delegate the writing, you don’t.
I wrote the book myself. AI was a tool, not an author. And that became the most important insight of the whole project: the machine can produce the output, but not the thinking behind it. If that’s true for a book about AI, it’s true for every other application too.
A book about AI that refutes itself through its own creation process. Not because it’s wrong. But because it shows what the technology cannot do.