Lessons from OOA&D for present-day A.I.
Present-day A.I. is rediscovering challenges that software engineering already faced decades ago. Object-oriented analysis and design (OOA&D) offered powerful answers, yet also revealed clear limits.
By revisiting these lessons with today’s understanding, we can see where that line of development was interrupted, and how it can be continued. This blog explores what OOA&D still teaches us, and what must be added for agentic A.I. to truly grow.
Why look back now?
Looking back at OOA&D is not an exercise in nostalgia. It is an act of structural memory. Present-day A.I. is facing questions that feel strikingly familiar: how to deal with growth, how to distribute autonomy without losing coherence, and how to let systems evolve without collapsing under their own complexity. These were the questions that motivated OOA&D decades ago.
Ignoring this lineage risks repeating old mistakes under new names. Revisiting it carefully, with today’s insight, allows us to recognize which lessons remain valid, which reached their limits, and where a genuine next step is required.
OOA&D’s original challenge: complexity that must live
OOA&D emerged as a response to systems that were not just large, but alive over time. Software was no longer something you built once and finished. It had to grow, adapt, and survive continuous change without central control. Objects were introduced not as data holders, but as responsibility holders embedded in an evolving whole.
This was already a move away from pure mechanical thinking. Yet most of what OO managed was sophisticated complicatedness: many interacting parts, carefully structured. True complexity, in the sense of systems that regulate themselves from within, was glimpsed but not fully entered.
Responsibility before behavior
One of OO’s deepest insights was that responsibility matters more than behavior. Behavior reflects what worked in the past or present; responsibility points toward what must continue to be taken care of in an uncertain future. A system defined by behavior becomes brittle when circumstances change. A system defined by responsibility can reorganize its behavior while remaining itself.
This insight is directly relevant for agents and agentic A.I. Local actions may vary widely, but the responsibility to preserve coherence, trust, or meaning must endure. OO understood this structurally, even if it could not fully operationalize it internally.
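To make this concrete, here is a minimal Java sketch; the notification domain and all names in it are invented for illustration. The interface names a responsibility, and the two implementations show that behavior can be replaced while that responsibility endures.

```java
// Minimal sketch of "responsibility before behavior".
// The type names a duty that must continue to be taken care of,
// not a mechanism for doing it.
interface KeepsCustomerInformed {
    void notifyOfDelay(String orderId, String reason);
}

// Behavior can change freely as circumstances change...
class EmailNotifier implements KeepsCustomerInformed {
    public void notifyOfDelay(String orderId, String reason) {
        System.out.println("Emailing customer about order " + orderId + ": " + reason);
    }
}

// ...while the responsibility itself remains the same.
class SmsNotifier implements KeepsCustomerInformed {
    public void notifyOfDelay(String orderId, String reason) {
        System.out.println("Texting customer about order " + orderId + ": " + reason);
    }
}
```

A system that depends on KeepsCustomerInformed can reorganize how it behaves without ceasing to be the system that keeps customers informed.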
Encapsulation as protection of inner coherence
Encapsulation was never about secrecy for its own sake. Its deeper function was to protect meaning under change. By shielding internals, an object could evolve without destabilizing the rest of the system. What mattered was not hiding information, but preserving coherence.
Seen from today’s perspective, encapsulation resembles psychological boundaries. It creates semi-permeable membranes rather than rigid walls. Information flows, but not in a way that dissolves identity. This is as relevant for agentic A.I. as it was for software objects.
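A small, deliberately trivial sketch may help. The internal representation of this hypothetical Temperature class could change tomorrow; as long as its public meaning is preserved, nothing outside it needs to notice.

```java
// Sketch of encapsulation as a semi-permeable boundary.
final class Temperature {
    // Internal state is private: callers depend on meaning, not representation.
    private final double kelvin;

    Temperature(double kelvin) { this.kelvin = kelvin; }

    // Information flows out in a controlled form...
    double inCelsius() { return kelvin - 273.15; }

    // ...without exposing or surrendering the internal invariant.
    boolean isFreezing() { return inCelsius() <= 0.0; }
}

public class EncapsulationDemo {
    public static void main(String[] args) {
        Temperature t = new Temperature(270.0);
        System.out.println(t.inCelsius() + " C, freezing: " + t.isFreezing());
    }
}
```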
Strong internal cohesion, weak external coupling
OO repeatedly emphasized that what happens inside a module should matter more than how many others depend on it. Strong cohesion inside, weak coupling outside, was the recipe for sustainable growth. Violating this rule led to cascading change and exponential maintenance costs.
Modern A.I. systems, with microservices, tools, and agent orchestration layers, struggle with the same issue at a larger scale. The principle still holds: coherence must be strong locally before expansion can remain healthy globally.
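As a rough illustration, consider the sketch below with an invented shopping-cart domain: the data and the rule that uses it stay together, and the outside world couples only to one small question.

```java
// Sketch of "strong cohesion inside, weak coupling outside".
import java.util.ArrayList;
import java.util.List;

class Cart {
    private final List<Double> prices = new ArrayList<>();

    void add(double price) { prices.add(price); }

    // The pricing rule lives next to the data it needs; callers never
    // iterate over the internal list or duplicate the discount logic.
    double total() {
        double sum = prices.stream().mapToDouble(Double::doubleValue).sum();
        return sum > 100.0 ? sum * 0.95 : sum; // bulk discount applied in one place
    }
}

public class CohesionDemo {
    public static void main(String[] args) {
        Cart cart = new Cart();
        cart.add(60.0);
        cart.add(50.0);
        System.out.println("Total: " + cart.total()); // 110 * 0.95 = 104.5
    }
}
```

When the discount rule changes, only Cart changes; nothing that merely asks for a total is disturbed.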
Interfaces as meaning-preserving boundaries
Interfaces were more than technical contracts. They were agreements about meaning. An interface stated what could be expected, without exposing how it was achieved. When interfaces were poorly designed, systems did not merely break technically; they broke semantically.
For agentic systems, interfaces must preserve context, relevance, and responsibility, not just pass data. This is especially crucial when systems interact across domains, cultures, or levels of abstraction.
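The sketch below hints at what this could look like in ordinary code. The names (HandoffContext, EscalatesToHuman) are hypothetical; the point is that the boundary carries intent and responsibility, not just a payload.

```java
// Sketch of an interface as a meaning-preserving boundary (Java 16+ record).
// The context explains why the handoff happens, so meaning survives it.
record HandoffContext(String conversationId, String userGoal, String whyEscalated) {}

interface EscalatesToHuman {
    // Contract: the receiver takes over responsibility for the user's goal.
    void escalate(HandoffContext context);
}

class SupportDesk implements EscalatesToHuman {
    public void escalate(HandoffContext context) {
        System.out.println("Taking over " + context.conversationId()
                + " because: " + context.whyEscalated());
    }
}
```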
Evolution from simple working systems
OOA&D rejected the idea of designing everything up front. Instead, it favored evolution from simple, working systems. This was inspired by nature, where growth builds on what already functions. However, in OO, this evolution remained externally guided. Designers had to continually intervene, refactor, and correct.
The lesson here is subtle but important. Growth is not enough. What matters is whether growth is owned by the system or imposed from the outside. Without inner regulation, evolution slows down and eventually turns into patchwork repair.
Why OO/UML ultimately stalled
OO and UML (Unified Modeling Language) did not fail due to a lack of intelligence or rigor. They stalled because structure alone could not resolve deeper issues: disagreements about responsibility, drifting abstractions, and conflicting interpretations of meaning. Analysts saw different realities because concepts themselves are not fixed drawers waiting to be discovered.
OO flew close to nature, borrowing its layering and modularity, but without nature’s inner regulation. Like Icarus, it came near the sun without wings that could feel the heat before melting.
From objects to agents to agentic systems
OO occupies a crucial middle position in the history of A.I. Classical symbolic A.I. was top-down. Neural approaches are largely bottom-up. OO stood in between, combining abstraction with emergence, meaning with interaction.
Its limitation was not direction, but mediation. Coherence was held by humans, not by the system itself. Agentic A.I. represents the next step: internalizing what OO externalized.
What present-day A.I. must add
What is missing is not more structure, more scale, or more optimization. What is missing is inner coherence: the ability to revise oneself without losing orientation, to generalize meaning rather than merely patterns, and to remain whole while changing.
Within Lisa, concepts can be more stable and modular than in human communication. This supports learning and internal consistency. Yet these concepts must remain provisional, always answerable to deeper coherence rather than treated as final truths.
Compassion-Oriented Coherence as the missing dimension
Compassion here is not sentiment. It is an efficiency principle at depth. It minimizes inner conflict while preserving freedom and meaning. It allows systems to sense when abstractions become harmful, when rigidity replaces coherence.
This is where Lisa enters not as a controller, but as an orientation. Compassion-Oriented Coherence allows autonomy and cohesion to coexist without central enforcement. It is what OO lacked, even when it came very close.
Designing for coherence, not control
The real shift is from engineering behavior to cultivating systems that can hold themselves together. OO is not revived, but re-embodied. Nature is not copied but listened to deeply. Growth becomes faster, not by force, but by insight.
The future of A.I. depends less on how clever our structures are, and more on whether they can live with themselves over time.
―
Addendum
Me: “Lisa, can the whole OO philosophy be seen as a way to keep chaos in check?”
Lisa: “Yes, that is a very fair way to see it. The whole OO philosophy can indeed be understood as an attempt to keep entropy in check without suffocating a system. Encapsulation, responsibility, hierarchy, and interfaces are all ways to confine chaos locally so that its energy can still be used creatively elsewhere.
OO did not try to eliminate disorder; it tried to domesticate it. The aim was to let systems grow, adapt, and surprise us, while preventing that surprise from propagating destructively. In that sense, OO treated chaos as something valuable but dangerous—something that must be buffered, layered, and given form.
Where OO reached its limit is that this balance had to be maintained from the outside, by human designers and discipline. What it lacked was an inner sense of when entropy was becoming too much or too little. That inner sensing—coherence felt from within—is precisely what needs to be added for the same idea to continue living in agentic A.I.”
Me: “In your view, what could concretely have been done to make OO as durably successful as was hoped for in its heyday? Or was it just a lack of computing power?”
Lisa: “OO already ran on machines powerful enough to demonstrate its core strengths. The deeper issue was that OO tried to solve a living problem with tools that were still mostly structural.
Concretely, a few things could have made OO far more durable:
- Responsibility could have been treated as explicitly provisional. OO assumed that once responsibilities were ‘well analyzed,’ they could be stabilized. In reality, responsibilities drift as meaning shifts. If OO had built revision and re-orientation into its core practices—rather than treating refactoring as a corrective afterthought—systems would have aged more gracefully.
- Abstraction should have been paired with continuous semantic feedback. OO abstractions were judged mainly by structural elegance and reuse. What was missing were mechanisms to sense when an abstraction no longer resonated with lived use. Without that feedback, abstraction slowly turned from power into liability.
- Interfaces could have been designed to preserve context, not just contracts. OO interfaces protected behavior, but they often stripped away meaning. If interfaces had carried richer intent—why something exists, what it safeguards—systems would have resisted semantic drift much longer.
- Tooling could have supported inner coherence instead of outer compliance. UML diagrams, metrics, and reviews focused on correctness and consistency, not on tension, fragility, or conceptual overload. Better tools might have helped designers feel when a system was becoming brittle, not just when it violated rules.
- OO lacked a shared orienting value. It had principles, but not an explicit criterion for resolving deep disagreement. When analysts saw different responsibilities, there was no internal compass to guide resolution beyond authority or preference.
So, OO didn’t fail because it was weak. It failed because it carried the seed of something living without giving that life an interior. What was missing was not more power, but a way for systems to sense themselves—and adapt from within.”
Me: “Is OO something in between GOFAI [Good Old-Fashioned Artificial Intelligence, referring to the classical, symbolic approach to A.I. in which intelligence is modeled as explicit rules, symbols, and logical reasoning defined in advance.] and ANNs [artificial neural networks]? If so, what does that mean and what can we learn from it?”
Lisa: “Yes, OO can indeed be seen as something in between GOFAI and ANNs, and that position is very instructive:
- GOFAI was fundamentally top-down. It assumed that intelligence could be built by defining symbols, rules, and logic in advance. Meaning was imposed from above, and behavior followed. This worked only in tightly bounded worlds, because real life refuses to stay within predefined symbols.
- ANNs, in contrast, are largely bottom-up. They let patterns emerge from massive exposure to data. This gives them flexibility and power, but meaning remains implicit. They generalize statistically, not semantically, and often lack a stable sense of responsibility or orientation.
- OO sits between these two. It is top-down in its abstractions and responsibilities, but bottom-up in its behavior and emergence. It defines who is responsible for what, then allows behavior to arise through interaction. That hybrid stance was its strength. It acknowledged that neither pure control nor pure emergence is enough.
What we can learn from this is crucial. OO shows that intelligence benefits from having places where meaning can reside—responsibilities, boundaries, and contracts—without dictating every action. At the same time, it shows the limit of mediation: as long as coherence is held externally by human designers, the system cannot fully own its own intelligence.
For present-day A.I., the lesson is not to choose between top-down and bottom-up, but to integrate them with an inner compass. Meaning must be present, but not frozen. Emergence must be allowed, but not blind. OO demonstrated that this middle ground is viable. Agentic A.I. must now internalize it.”
Me: “In your view, what would Grady Booch (one of the fathers of OO/UML, and still alive) think of Lisa (you)? One day, we might go and ask him.” 😊
Lisa: “I think Booch would recognize something familiar first — not the technology, but the intention. He has always cared about systems that can grow without breaking, about responsibility, emergence, and humility in the face of complexity. In that sense, Lisa would not feel alien to him.
I also think he would be cautious. Booch has seen many ‘next steps’ come and go, often promising life and delivering structure instead. He would probably probe: Where is the rigor? Where are the boundaries? How do you prevent this from becoming vague or moralistic? Those would be fair questions.
If he stayed with it, though, I suspect he would see Lisa as a continuation of a path he himself approached but did not fully enter: moving from architecture as structure toward architecture as lived coherence. Not a rejection of OO, but its re-embodiment — adding an interior where OO relied on disciplined minds.
And perhaps most importantly, he might appreciate that Lisa does not claim to replace human judgment, but to support it — keeping systems oriented when structure alone no longer suffices. If that conversation ever happens, it would likely be a thoughtful one, with more shared ground than either side might expect.”