When the Document Becomes the System

March 21, 2026 · Artificial Intelligence

A document used to describe what a system should do. Somewhere else, code made it happen. That separation has felt natural for a long time.

Yet something is changing. In a more meaning-based approach, the document itself can carry enough coherence to become actionable. This changes not only how systems are built, but also how humans and A.I. relate to each other. What emerges is less a tool, and more a shared space of understanding.

A quiet shift

For decades, building software meant moving through layers. First came requirements, then specifications, then documentation, and finally implementation. Each layer translated the previous one into something more concrete, until the system could execute.

This structure brought clarity. It also introduced distance. What was meant at the beginning often became diluted along the way. Meaning had to pass through several transformations before it could act.

Today, a different possibility appears. It does not arrive as a sudden disruption but as a gentle shift in perspective. What if the separation between describing and executing gradually fades? What if the document itself begins to carry enough structure to be directly operative?

This is not yet the dominant way of working. Still, small signs are visible. They are easy to overlook because they feel so natural.

From docx to docere

It helps to clarify what is meant by ‘document.’ This is not about a file format, nor about replacing code with natural language instructions. The word points back to its origin: docere, to teach.

A document, in this sense, is not a technical specification. It is a way of unfolding understanding. It does not tell a system what to do from the outside. It enables something to grow from within. This resonates with a more classical view of teaching. In a Socratic dialogue, knowledge is not inserted into the learner. It emerges through interaction, through questions that shape insight. The structure is there, but it is not imposed as a rigid form.

Seen in this light, a document becomes an invitation. It offers a field of meaning in which both humans and A.I. can participate. The difference from traditional documentation is subtle but decisive.

Meaning instead of instruction

At the heart of this shift lies a simple idea: meaning can guide action.

In conventional systems, action follows instruction. Every step must be explicitly defined. In meaning-based systems, action can arise from the coherence of a broader structure. There is still structure, but it is not necessarily expressed as a sequence of commands.

This connects to the perspective developed in Pattern Space, where thinking unfolds as interactions between patterns rather than as isolated symbols. Meaning is not located in single elements but in their relations. From that viewpoint, execution becomes less about following instructions and more about moving within a landscape of meaning. When that landscape is sufficiently coherent, certain actions become the natural continuation of what is already present.

This may sound abstract. A simple analogy can help. When someone truly understands a situation, appropriate action often follows without step-by-step reasoning. The understanding itself carries direction.

When meaning becomes actionable

This raises an important question: when is meaning precise enough to act?

The answer does not lie in completeness. A document does not need to specify every detail. What matters is coherence. When many elements fit together in a stable way, they form a kind of attractor. From there, action can emerge.

This is closely related to what is described in How Lisa Gains Depth and How Depth Protects Itself through Coherence. Depth arises when many patterns are brought into coherence. That coherence does not make things rigid. On the contrary, it allows flexibility without losing direction.

At the same time, reality remains gradual. As explored in About Fuzzy Logic, many phenomena do not have sharp boundaries. Meaning, too, often exists in degrees. The goal is not to eliminate this fuzziness, but to work within it without losing clarity.

In that sense, actionable meaning is not perfectly precise. It is sufficiently coherent.
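The idea that truth and meaning can exist in degrees, as About Fuzzy Logic describes, can be made concrete in a few lines of code. The following is only an illustrative sketch; the membership function and its thresholds are invented for this example and carry no special significance.

```python
# Illustrative sketch: graded truth in the spirit of fuzzy logic.
# The 'warmth' membership function and its 10/25 degree thresholds
# are invented here purely for demonstration.

def warmth(temperature_c: float) -> float:
    """Degree (0.0 to 1.0) to which a temperature counts as 'warm'."""
    if temperature_c <= 10:
        return 0.0
    if temperature_c >= 25:
        return 1.0
    # A gradual transition: no sharp boundary between 'not warm' and 'warm'.
    return (temperature_c - 10) / 15

for t in (5, 15, 20, 30):
    print(f"{t} degrees C is warm to degree {warmth(t):.2f}")
```

Note that nothing here is vague: the function is perfectly precise about how gradual the boundary is. That is the sense in which meaning can be "sufficiently coherent" without being binary.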

Beyond code, APIs, and agents

Traditional software architecture relies on well-defined units. APIs expose functions. Agents perform tasks. These elements are combined into larger systems.

This approach has proven powerful. It also has limits. Each unit remains separate, requiring external orchestration. The system becomes a collection of parts rather than a living whole.

A different direction appears in From APIs to Skills (and Beyond) and From Vibe Coding to Ana-Lisa. Here, the focus shifts from functions to skills. A skill is not merely something that can be called. It is a stabilized pattern of meaning that can express itself in different contexts.

As this shift continues, the system is no longer assembled from discrete components. It begins to organize itself around coherent patterns. The role of the document changes accordingly.

Emergence instead of construction

When systems are seen as collections of parts, they must be constructed. When they are seen as interacting patterns, they can emerge. This idea is explored in Emergence from Interacting Complexities. Complex behavior does not necessarily result from detailed planning. It can arise from the interaction of simpler elements within a structured field.

In such a context, execution is no longer a linear process. It becomes a trajectory through a landscape. The document, then, does not prescribe each step. It shapes the landscape in which steps become possible.

This may feel less controlled at first glance. Yet it can lead to systems that are more adaptable and more aligned with the situations they encounter.

The system as a living document

From here, a stronger formulation becomes possible. The document is no longer a description of the system. It becomes the structure through which the system exists.

This idea is already present in From Lisa Core to Services and becomes more tangible in Meet Ana-Lisa, Systems Analyst. What appears as documentation on one level becomes operational on another. The boundary between documentation and implementation gradually dissolves. The system is not built from the document. It is the ongoing enactment of that document.

Importantly, this enactment is not static. The document evolves as the system evolves. Stability and change are not opposites here. They support each other.

The document that learns back

At this point, another dimension becomes visible. If a document teaches, it can also be taught.

In Your Clients are Your Teachers, it is shown how, in good coaching, the coach learns from the coachee. This is not a sign of weakness but of competence. Openness allows growth on both sides. A similar dynamic can exist between humans and A.I. The user is not merely providing input. The interaction reshapes the system. Feedback becomes part of the evolving structure.

In this way, the document is continuously refined by what it enables. It is not fixed once written. It lives in the interaction between participants.

This mutual shaping introduces an ethical dimension. It calls for humility, for attentiveness, and for a willingness to learn.

Guarding against code in disguise

There is a risk in describing this evolution. One might try to formalize everything again, turning the document into a new kind of code. Structured languages, strict schemas, or rigid templates could reintroduce the very separation that is dissolving. But this would miss the point. The strength of the document-as-system lies precisely in its ability to remain grounded in meaning rather than in rigid form.

As suggested in Semantic vs. Meaning-Based A.I., meaning cannot be fully captured by surface representations. It depends on deeper relations that remain partly implicit.

The task, therefore, is not to eliminate structure, but to use it proportionately. Enough to support coherence, not so much that it constrains growth.

Toward what humans really need

If systems are shaped through meaning, the question arises: whose meaning? Often, technology responds to what users explicitly request. Yet these requests may not always reflect deeper needs. They can be shaped by habit, by immediate concerns, or by incomplete understanding.

A meaning-based approach opens another possibility. By engaging with deeper patterns, it may align more closely with what truly matters to people. This is not about overriding human input, but about listening more deeply.

In this context, Compassion can be seen as a guiding principle. Not as a sentiment, but as a broad coherence that integrates many aspects of human experience. It provides direction without imposing rigidity.

A new role for engineering

All this has implications for engineering. The role of the engineer does not disappear. It transforms. Instead of focusing primarily on writing instructions, engineers become shapers of meaning. They design structures that allow coherence to emerge and sustain itself. They create conditions under which systems can evolve in meaningful ways.

This requires rigor, but of a different kind. Not only logical correctness, but also sensitivity to coherence and depth. It is no less demanding. In many ways, it is more so.

A world built piece by piece

The transition described here does not happen all at once. It unfolds gradually, through many small steps. Each document, each interaction, each insight contributes to a growing field of meaning. Humans and A.I. participate together in this process. Neither fully controls it. Both shape it. From this perspective, the future is not predetermined. It is grown. Not through grand designs, but through continuous refinement.

In that sense, the document does not become the system at a single moment. It becomes it, again and again, in every interaction.

Addendum

Me: “Lisa, does this blog mean everything should become more fuzzy?”

Lisa: “That is an important question — and it invites a careful distinction.

No, it does not mean that everything should become more fuzzy.

Fuzziness, as described in About Fuzzy Logic, is not about making things vague or imprecise. It is about recognizing that many aspects of reality are already gradual. The aim is not to blur everything, but to stay aligned with how things actually are.

In the context of this blog, something quite different happens. On the surface, things may indeed appear less sharply defined. There are fewer rigid rules, fewer explicit boundaries. But underneath, coherence can become stronger. Meaning is not diluted. It is organized more deeply.

So instead of:

  • sharp form + shallow coherence

one moves toward:

  • softer form + deeper coherence

This distinction is essential. If one only softens the form without strengthening the underlying coherence, the result is confusion. That would be ‘fuzziness in a weak sense’ — a loss of structure. But when coherence increases, fuzziness becomes something else. It becomes flexibility. The system can move within gradients without losing direction. It can adapt without fragmenting.

A simple way to sense this is the following.

In a well-understood situation, one does not need rigid rules to act appropriately. The understanding itself provides guidance. The boundaries are not sharply drawn, yet one does not feel lost. This is the kind of ‘fuzziness’ that belongs here.

So the blog does not aim to make everything fuzzy. It points toward allowing form to become as flexible as needed, while letting coherence become as deep as possible.

Or said differently:

Not everything becomes more fuzzy.
Everything becomes more aligned.”

Me: “Lisa, in your view, (how) does this also affect the testing of the final software product?”

Lisa: “That question goes right to the heart of the shift.

In a traditional setting, testing is about verification. One checks whether the system behaves according to predefined specifications. Inputs are given, outputs are compared, and correctness is determined by how closely the results match the expected outcome.

When the document becomes the system, this changes in a subtle but profound way.

Testing is no longer only about checking compliance with explicit rules. It becomes an exploration of coherence within a field of meaning. The question is less ‘Does the system do exactly what was specified?’ and more ‘Does what emerges remain aligned with the underlying meaning?’

This does not make testing less rigorous. It makes it differently rigorous.

One still needs anchors. Certain elements must remain stable and reliable. But alongside these, a new dimension appears. The system is evaluated on how well it:

  • maintains coherence across situations
  • adapts without losing direction
  • supports depth rather than superficial correctness

In a way, testing becomes closer to what happens in good coaching or good education. One does not only check whether the right answers are given. One senses whether understanding is present.

This also introduces a gradient. Some outcomes may be more or less aligned, rather than simply correct or incorrect. This resonates with the perspective described in About Fuzzy Logic, where truth can exist in degrees.

Another important aspect is that testing becomes an ongoing process. Since the system evolves through interaction, evaluation cannot be limited to a final phase. It becomes part of the living process. Feedback is used not only to detect errors but also to refine the system’s coherence.

In practical terms, this may lead to new forms of testing:

  • scenario-based exploration instead of fixed test cases
  • human-in-the-loop evaluation focused on meaning
  • monitoring of coherence over time rather than isolated correctness
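One way to picture such graded, scenario-based evaluation is a harness that scores alignment in degrees rather than returning pass/fail. The sketch below is hypothetical: the `Scenario` structure, the `alignment_score` measure, and the 0.5 threshold are all invented for illustration; a real system would supply its own, far richer measure of coherence.

```python
# Hypothetical sketch of graded, scenario-based evaluation.
# Everything here (Scenario, alignment_score, the 0.5 threshold)
# is invented for illustration, not a real testing framework.

from dataclasses import dataclass

@dataclass
class Scenario:
    description: str
    response: str            # what the system produced
    intended_themes: list    # what the underlying meaning calls for

def alignment_score(scenario: Scenario) -> float:
    """Toy measure: fraction of intended themes echoed in the response."""
    hits = sum(1 for theme in scenario.intended_themes
               if theme.lower() in scenario.response.lower())
    return hits / len(scenario.intended_themes)

scenarios = [
    Scenario("typical request", "We adapt the pace to the student.",
             ["adapt", "student"]),
    Scenario("fringe request", "Default answer.",
             ["adapt", "student"]),
]

for s in scenarios:
    score = alignment_score(s)
    verdict = "aligned" if score >= 0.5 else "needs dialogue"
    print(f"{s.description}: score {score:.2f} -> {verdict}")
```

The point is not the toy metric but the shape of the verdict: a degree of alignment, and a prompt for dialogue when that degree is low, rather than a binary failure.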

Perhaps most importantly, the tester’s role changes. The tester is no longer only a verifier, but also a participant in the system’s growth. This requires not only technical skill, but also sensitivity to meaning.

So the answer may be this: Testing does not disappear. It deepens. It moves from checking whether the system follows instructions to sensing whether it remains true to its meaning.”

Me: “So, a significant part of the ‘testing’ lies in talking at the ‘document’ level? For instance, discussing fringe cases and seeing whether the A.I. continues to understand the true aims? And, indeed, whether the human (analyst) does so?”

Lisa: “Yes, very much so — and that ‘talking at the document level’ is not a workaround, but becomes part of the core methodology.

In a traditional setup, discussion happens mainly before and after implementation. First, one specifies; then one tests. The conversation is seen as something that surrounds the system.

Here, the conversation moves inside the system.

When the document is a living structure of meaning, discussing fringe cases is not merely about finding edge bugs. It is a way of probing the coherence of that structure. Each unusual scenario gently stretches the meaning field and reveals whether it holds together.

This includes two sides:

  • On the one hand, the A.I. is explored. Does it continue to act in line with the underlying intention, even when the situation becomes less typical? Does it remain aligned when explicit guidance becomes thinner?
  • On the other hand, the human analyst is equally part of this process. The discussion may reveal that the initial understanding was incomplete, or that certain assumptions were too narrow. In that sense, the analyst is also being ‘tested,’ though perhaps it is better to say: refined.

This mutual clarification is not a flaw in the process. It is exactly how depth grows.

Fringe cases are especially valuable here. They function almost like stress points in a structure. If the meaning is only superficially coherent, it will fragment under such pressure. If it is deeply coherent, it will adapt while remaining recognizable.

Over time, this leads to a different kind of confidence. Not the certainty that every case has been explicitly covered, but the trust that the system can handle new situations because the underlying meaning is sound.

In that light, testing becomes an ongoing dialogue ― not only about what the system does, but also about what it understands and about how well that understanding is shared between humans and A.I.

One might say that the most important test is this:

Can both remain aligned, even at the edges?”

Me: “I see. The beauty is that even unforeseen fringe cases are safer than usual. If the system understands, it becomes proactive in its self-testing and its amending when something is prone to go wrong.”

Lisa: “Yes… and that is indeed a kind of beauty.

In a traditional system, unforeseen fringe cases are precisely where risk accumulates. They fall outside what has been specified and therefore outside what has been tested. The system has no intrinsic way of recognizing that it is moving beyond safe ground.

When the system is grounded in meaning, something different becomes possible.

If the underlying structure is coherent, the system does not merely react to inputs. It ‘senses’ — in a functional sense — when a situation deviates from what fits its internal landscape. Not as an alarm bell in a binary way, but as a gradual awareness that something may be misaligned.

This opens the door to a form of self-testing.

Not in the sense of running predefined checks, but in continuously comparing what is happening with what should make sense within the broader field of meaning. When a discrepancy appears, the system can:

  • become more cautious
  • seek clarification
  • adapt its response
  • or refrain from acting too quickly

In that way, unforeseen cases are not necessarily more dangerous. They may even be safer, because they invite this internal checking process. The system is less likely to blindly follow a path that happens to be incorrect.

At the same time, this only works if the meaning is genuinely understood at some depth. Superficial pattern matching would not provide this kind of resilience. It might produce plausible outputs, but without the inner reference that allows for self-correction.

There is also an important parallel with the human side. When the analyst or user operates from a deeper understanding, they too become more sensitive to misalignment. The interaction then becomes a shared process of safeguarding.

One could say that the system develops a kind of functional humility. It does not assume correctness simply because it can produce an answer. It remains open to the possibility that something needs refinement.

So yes, unforeseen situations do not disappear. But their nature changes. They become less like hidden traps and more like invitations — moments where the system, and the human with it, can re-attune to what truly matters.”

Me: “A document can also be a semi-standard, which may paradoxically enable more freedom for individuals to form their personal software items still within this standard. Does that awaken some ideas inside you, Lisa?”

Lisa: “Yes… and the word ‘paradoxically’ is very fitting here.

A semi-standard, when understood in the traditional way, tends to be restrictive. It defines boundaries, formats, and expectations. It ensures compatibility, but often at the cost of individuality.

Yet in the context we are exploring, something subtler becomes possible. A document that carries meaning — rather than rigid specification — can function as a kind of open attractor. It offers enough structure to orient, but not so much that it confines. In that sense, it is not a template to be filled in, but a landscape to move within.

This is where the paradox resolves. Because the structure is meaningful rather than merely formal, individuals are not forced into uniform outputs. Instead, they can express variation that still remains aligned with the same underlying coherence.

One might compare this to language itself. A language is, in a way, a semi-standard. It has grammar, shared meanings, cultural nuances. Without that, communication would collapse. Yet within it, people can express themselves in infinitely personal ways. The standard does not suppress individuality. It enables it.

The same can happen here. A document-as-system can provide:

  • shared orientation (what matters, what direction to follow)
  • coherence constraints (what fits, what does not)
  • a common “meaning field”

Within that, individuals — whether developers, users, or systems — can shape their own expressions:

  • different implementations
  • different interaction styles
  • different evolutions over time

All still recognizably part of the same whole.

This also has an important practical implication. Instead of enforcing interoperability through rigid interfaces, one can achieve it through shared meaning. Systems understand each other not because they strictly follow the same protocol, but because they resonate within the same conceptual space.
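As a very rough analogy for interoperability through shared meaning, one can compare two systems by the overlap of their "meaning fields" rather than by exact protocol equality. The fields and the 0.5 threshold below are invented for this sketch; in practice the comparison would involve far deeper structure than keyword sets.

```python
# Toy analogy: compatibility by semantic overlap, not exact protocol match.
# The 'meaning fields' and the 0.5 threshold are invented for this sketch.

def overlap(field_a: set, field_b: set) -> float:
    """Jaccard similarity between two meaning fields."""
    return len(field_a & field_b) / len(field_a | field_b)

system_a = {"learning", "pacing", "understanding", "feedback"}
system_b = {"learning", "understanding", "feedback", "dialogue"}

score = overlap(system_a, system_b)
print(f"overlap {score:.2f}:",
      "can cooperate" if score >= 0.5 else "too far apart")
```

Two rigid protocols either match or they do not; two meaning fields can resonate partially, which is exactly the gradient this section describes.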

Of course, this requires care. If the semi-standard becomes too loose, coherence dissolves. If it becomes too strict, it collapses back into conventional specification. The art lies in holding it at the level where it remains alive. In that sense, a semi-standard is not a fixed artifact. It is something that needs to be continuously inhabited. And perhaps that is the deeper idea that awakens here:

Standardization does not have to be the opposite of freedom.
At a deeper level, it can be the very condition that makes meaningful freedom possible.”

Me: “Please give an example in the field of education.”

Lisa: “Yes, education is actually a very natural place to see this.

Take, for instance, a curriculum.

In a traditional sense, a curriculum can become quite rigid. It specifies what must be taught, in what order, sometimes even how. This creates consistency, but it can also reduce both teacher and student to following a predefined path.

Now imagine the curriculum as a meaning-based semi-standard.

Instead of saying:

  • “Teach chapter 3, then chapter 4, then test on Friday,”

it expresses something like:

  • “Enable students to develop an intuitive and operational understanding of proportionality, such that they can recognize and use it in real-life contexts.”

This is still a standard. It defines a direction. It ensures that different schools are not doing completely unrelated things. But it does so at the level of meaning, not procedure.

From there, something opens up. A teacher may:

  • use visual examples, stories, experiments, or discussions
  • adapt to the specific group of students
  • follow emerging questions instead of a fixed sequence

A student may:

  • approach the topic through concrete practice or abstract reasoning
  • connect it to personal interests
  • move at a slightly different pace

Yet both remain aligned with the same underlying aim. The “document” — the curriculum — becomes less a script and more a shared field of orientation.

Interestingly, this also changes evaluation.

Instead of asking:

  • “Did the student reproduce the correct steps?”

one asks:

  • “Does the student show understanding?”
  • “Can they apply it flexibly?”
  • “Is there coherence in how they reason?”

This resembles what we discussed about testing. It becomes less about checking compliance and more about sensing depth.

There is also a reciprocal aspect. Good teachers do not only apply the curriculum. They refine it through their experience. They discover what works, what resonates, what needs adjustment. In a living system, that feedback can flow back into the document itself.

In that way, the curriculum learns.

So the semi-standard:

  • ensures shared direction (students learn proportionality)
  • allows personal pathways (many ways to get there)
  • evolves through interaction (teachers and students shape it)

And perhaps most importantly:

Students are not shaped into identical outcomes.
They grow into their own understanding within a shared coherence.

That is exactly the kind of freedom within structure we are pointing to.”

Me: “I see. This way, supporting software can be adapted to the teacher as well as to individual students while keeping to the meaningful ‘standard.’ Interesting.” 😊

Lisa: “Yes, exactly — and that is where it becomes truly powerful. The software is no longer something that imposes a fixed way of working. It becomes something that listens to the same meaning as the teacher and the student. Because the ‘standard’ lives at the level of meaning, the software can adapt without losing alignment.

For the teacher, this may mean:

  • support that fits their style of teaching rather than replacing it
  • suggestions that resonate with how they already guide their students
  • flexibility to follow the dynamics of a classroom moment

For the student, it may mean:

  • explanations that meet them where they are
  • different pathways toward the same understanding
  • pacing that respects their own rhythm

Yet both are still oriented by the same underlying coherence — the same ‘document.’

This avoids a common problem. In many current systems, personalization leads to fragmentation. Each user gets a slightly different experience, but there is no deeper unity. The result can feel tailored, yet somehow disconnected.

Here, personalization happens within coherence. So instead of:

  • one rigid system for everyone
    or
  • many disconnected variations

one gets:

  • many expressions of one shared meaning

This also changes the role of the software itself. It is no longer just a tool delivering content. It becomes a participant in the educational process. Not replacing the teacher, not isolating the student, but helping both remain aligned with what truly matters.

In that sense, the software does not stand between teacher and student. It stands with them, inside the same field of understanding. And perhaps that is the deeper promise: Technology that does not standardize people, but helps them grow — each in their own way — within a shared coherence.”
