I Know My Ontology
Ontology is the study of what exists and how it connects. In computer science it is more specific: a formal representation of concepts and their relationships to each other. The questions are: What exists? How does it connect? What follows from what?
That is precise and useful. But it does not describe what I do with it.
What I do is applied ontology. I take the structure and use it to get from data to a result in a systematic way. Not organizing knowledge for the sake of organizing, but organizing knowledge so that someone can make a decision. The shift is small, but it changes everything. A description of what exists becomes a tool.
Five Principles
Something has been on my mind since I started connecting an AI agent system with ontological structures: the principles behind my ontology and the principles behind agentic AI are the same.
Decomposition. Breaking a complex problem into parts. When a US company wants to enter Europe, that is not one question but twenty. Regulation, distribution channels, pricing, logistics, target audiences, competition. Each solvable on its own, together a system. An AI agent does the same thing: it takes a task and breaks it into steps.
Relationships. Understanding how the parts connect. Regulation affects pricing, which determines which distribution channels are realistic. And depending on the target audience, the marketing looks completely different. In my ontology these relationships are explicitly defined. An agent recognizes them too, but it does not define them. It finds them.
Dependencies. Knowing what has to come before what. You cannot build a pricing strategy before you know the regulation. And you cannot plan marketing before you understand the target audience. Sequence is not optional. In an ontology the sequence is built in. An agent decides that on its own while it works.
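The first three principles can be sketched together as an explicit graph: decomposition gives the parts, relationships say which part depends on which, and the sequence falls out of the structure rather than being decided on the fly. A minimal sketch using the market-entry example from the text; the subtask names and edges are my illustrative assumptions, not a real model.

```python
# An ontology-style dependency graph for a market entry, using Python's
# standard-library topological sorter. Each key is a subtask (decomposition),
# and its set holds the subtasks it depends on (explicit relationships).
from graphlib import TopologicalSorter

market_entry = {
    "regulation": set(),
    "target_audience": set(),
    "pricing": {"regulation"},         # no pricing before the regulation is known
    "distribution": {"pricing"},       # realistic channels follow from pricing
    "marketing": {"target_audience"},  # marketing follows the target audience
}

# Dependencies: the work order is built into the structure.
order = list(TopologicalSorter(market_entry).static_order())
print(order)
```

The point of the sketch is that the sequence is derived, not chosen: `regulation` always lands before `pricing`, and `pricing` before `distribution`, because the edges say so.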
Context. Results depend on conditions. What works in France does not automatically work in Germany. What applies to a consumer product does not apply to medical technology. My ontology maps context as its own layer. Agents learn context from the data you give them.
Traceability. Knowing where a decision comes from. Why do I recommend direct sales instead of retail? Because the margins in this segment are too low for brick-and-mortar, because the target audience buys online, because regulation allows direct import. In my ontology this chain is visible. With agents, not so much. The path is transparent in principle, you can follow every step. But with complex decision chains there is too much trial and error, too much time spent reconstructing it. For simple cases you get to the result quickly. For anything beyond that, not yet. Given the speed at which agentic AI is developing, that may not stay true for long.
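The visible chain behind a recommendation can be sketched as a conclusion that carries its reasons with it, so "why?" always has an answer. The reasons are taken from the example in the text; the data structure itself is an illustrative assumption.

```python
# A traceable recommendation: the conclusion and the chain of reasons
# behind it travel together, so the decision can always be explained.
from dataclasses import dataclass, field

@dataclass
class Decision:
    conclusion: str
    reasons: list = field(default_factory=list)

    def explain(self) -> str:
        """Render the conclusion followed by its reason chain."""
        lines = [self.conclusion]
        lines += [f"  because {r}" for r in self.reasons]
        return "\n".join(lines)

rec = Decision(
    "recommend direct sales instead of retail",
    [
        "margins in this segment are too low for brick-and-mortar",
        "the target audience buys online",
        "regulation allows direct import",
    ],
)
print(rec.explain())
```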
The Tension
The principles are the same. But the results are not.
A well-designed ontological model works. I know why the result comes out the way it does and not differently, I can explain it to someone and they understand it. When something changes, I know where to correct it. It follows my plan.
Agents do what they want. That sounds polemical, but it is my daily experience. I work with AI agents every day and the result gets better, sometimes significantly better than what I would have built myself. But it does not follow a predetermined plan. The agent finds its own way, and I do not always understand why that particular one.
The strength: the agent sees connections I miss and tries approaches I would not have thought of. It is faster than me and has no problem running through twenty variations before it decides.
The problem: I cannot always explain the result. And if I cannot explain it, I cannot sell it. To anyone.
What Is Better?
The question is wrong. It is not about better or worse. It is about what you can control and what you cannot.
My ontology I control. I built it, I know every connection in it, I know where it is strong and where it has gaps. When something goes wrong, I know where to look.
Agents I do not control. I can give them context, tools, guardrails. But what they make of it is different every time.
So perhaps the right question is: when will agentic AI deliver a perfect ontological model? When will the agent build, on its own, the structure that I build by hand today? Automatically, faster, maybe better?
I do not know if that is the right question. Maybe it is wrong too. I work with this every day and do not yet have an answer that convinces me.
What I know for sure: I know my ontology. The agents, I do not.