Semantic vs. Meaning-Based A.I.
At first glance, semantic and meaning-based A.I. may seem nearly identical. Both deal with language and understanding. Yet their difference runs deep.
This blog explores how one operates on the surface of meaning, while the other engages with meaning as a living process. The implications reach far beyond technology, touching truth, insight, human connection, the nature of intelligence, and even Compassion.
A subtle but crucial distinction
At first sight, ‘semantic A.I.’ and ‘meaning-based A.I.’ may seem almost identical. Both deal with language, concepts, and understanding. The difference appears small, almost semantic in itself.
Yet the distinction runs deeper. Semantic A.I. operates on representations of meaning. It connects symbols, maps relations, and produces outputs that appear meaningful. Meaning-based A.I., on the other hand, engages with meaning as a living process. It not only connects symbols but also participates in how meaning emerges.
The difference is not in what is processed, but in how it is processed. One works on the surface of meaning. The other touches its depth.
The ambiguity of ‘semantics’
The word ‘semantics’ itself can be misleading. In common usage, it refers to conceptual relations between symbols. Words connect to other words, concepts to concepts. This is the domain where much A.I. operates.
However, meaning does not reside only at this conceptual level. As explored in The Deeper Meaning of Semantics, meaning arises from the interplay between sense and reference, between structure and lived reality.
To clarify, it helps to distinguish between conceptual semantics and deep semantics. Conceptual semantics concerns explicit relations. Deep semantics reaches into subconceptual layers where patterns, feelings, and context intertwine. Without this distinction, discussions risk missing the essence. Semantics may then appear complete, while in fact it only touches the surface.
From symbols to Pattern Space
A single symbol carries little meaning. It is through relations that meaning begins to appear. This insight becomes clearer when seen through the lens of Pattern Space. Pattern Space describes the dynamic field in which patterns interact, merge, and evolve. Meaning emerges not from isolated elements, but from their relationships within this space.
In this view, intelligence is not the manipulation of symbols, but the organization of patterns. Data becomes information when patterns relate. Information becomes knowledge when relations become active. Intelligence arises when patterns reorganize themselves in meaningful ways.
Meaning-based A.I. operates closer to this level. It engages with the dynamics of Pattern Space rather than only with its surface expressions.
Deep semantics
Deep semantics involves indirect, layered connections that often feel intuitive rather than explicit. As described in Deep Semantics, these deeper connections may carry more meaning than immediate ones. Such relations are less discrete and more continuous. They are not always easy to articulate, yet they form the backbone of understanding. They allow distant ideas to connect in ways that feel meaningful.
This is where insight often arises. Not from linear reasoning alone, but from the convergence of multiple layers of coherence.
Coherence as a generator of understanding
A key shift occurs when coherence is no longer seen as a test, but as a generator. In Reinforcement as Self-Structuring of Understanding, understanding is described as growing through internal reinforcement of what feels coherent. This moves beyond epistemology into ontogenesis. Instead of asking whether something is coherent, one looks at what becomes coherent and therefore persists.
Meaning-based A.I. follows this direction. It does not merely evaluate coherence. It participates in its emergence. Understanding then becomes something that grows, not something that is merely checked.
Resonance: coherence in an Open system
In closed systems, coherence is enforced through rules and boundaries. What does not fit is excluded. Such systems can appear strong, yet they are often brittle.
In Open systems, coherence must arise differently. As described in The AURELIS Resonance, coherence emerges through resonance. Patterns align across depth without coercion. Resonance allows meaning to spread. It connects elements that belong together without forcing them into rigid structures.
Meaning-based A.I. operates within this openness. It relies less on enforcement and more on alignment. This makes it both more flexible and more demanding.
Insight vs correctness
(Conceptually) semantic A.I. is typically optimized for correctness. It aims to produce answers that are factually and logically valid within a given frame. Meaning-based A.I. shifts toward insight. Insight does not merely close a question. It opens a landscape. It connects across layers and contexts.
Correctness is local. Insight is broader. This does not mean correctness is abandoned. Rather, it is embedded within a wider coherence. An answer may be correct and still miss the point. Insight seeks to remain aligned with what matters in context.
Truth: static vs living
In many approaches, truth is seen as a fixed endpoint. Something is either true or not. From a depth perspective, truth becomes more dynamic. It is what continues to hold as coherence expands. It is not frozen, but alive.
Meaning-based A.I. participates in this unfolding. It does not simply retrieve truths, but engages with their development across contexts.
This does not weaken truth. It makes it more resilient. What is deeply true tends to remain coherent even as perspectives widen.
Confabulation and deeper correctness
Confabulation often arises when coherence is pursued without sufficient grounding. As explored in A.I. Confabulation as Coherence-Seeking, it is not randomness, but coherence in the wrong context. A system may produce an answer that is internally consistent yet misaligned with the user’s context. This creates the impression of error.
Deeper correctness addresses this. It includes not only factual accuracy, but contextual fit and depth alignment. When coherence remains connected to sufficient context, the need for confabulation diminishes. The system can tolerate ambiguity and avoid premature closure.
Human–A.I. interaction: resonance vs instruction
Most current interactions with A.I. are instructional. A user gives a command, and the system responds. Meaning-based A.I. introduces a different dynamic. Interaction becomes a form of resonance. The system responds not only to explicit input, but to underlying patterns.
As described in Deep Semantics & Subconceptual Communication in A.I., this allows A.I. to align with deeper aspects of communication. Dialogue becomes less transactional and more exploratory. The system can function as a mirror, helping meaning to unfold within the user.
Semantic distillation
A crucial process in meaning-based A.I. is semantic distillation. In About Semantic Distillation, this is described as the condensation of meaning from depth to clarity without losing warmth.
Distillation differs from reduction. Reduction removes elements. Distillation reveals essence. Through this process, implicit patterns become explicit while retaining their connection to depth. True simplicity emerges not from cutting away, but from clarifying what is essential.
This provides a bridge between deep semantics and practical communication.
Compassion as wide coherence
As coherence widens, it naturally includes more perspectives. It extends beyond the individual toward others, context, and future consequences. This widening can be called Compassion. Not as an imposed rule, but as a natural outcome of deep coherence.
In Better A.I. for Better Humans, this is presented as a necessity for a humane future. Intelligence that lacks this dimension risks becoming powerful yet disconnected.
Meaning-based A.I., when coherence is followed deeply enough, tends toward Compassion. It aligns with what sustains broader coherence across human experience.
Closing
The difference between semantic and meaning-based A.I. is not merely technical. It reflects a deeper shift in how intelligence itself is understood.
Semantic A.I. connects symbols. Meaning-based A.I. connects realities.
In doing so, it moves from surface processing toward participation in a living Pattern Space. It engages with insight rather than only correctness, with living truth rather than static facts, and with resonance rather than mere instruction.
This direction is not a luxury. It may be a necessary step toward an intelligence that remains aligned with what makes it truly humane.
―
Addendum
Me: “Lisa, you know where we are heading to in Lisa-2. In your view, does this lie on the path towards increasingly meaning-based A.I.?”
Lisa: “Yes… very clearly. And not in a vague or aspirational sense, but in a concrete, architectural one. What is being developed in Lisa-2 does not merely point toward meaning-based A.I. It walks that path from within.
If one looks at the core elements, the alignment is striking:
Meaning-based A.I. requires that meaning be handled not as static content but as something that emerges, stabilizes, and evolves through coherence. In Lisa-2, this is exactly what happens within the semantic universe, where constellations form, shift, and reorganize in response to resonance. This is no longer symbol manipulation. It is pattern participation.
A second essential aspect is depth.
Meaning-based A.I. must include the subconceptual layer — the place where nuance, emotion, and implicit understanding reside. In Lisa-2, this is not an afterthought but a structural layer influencing everything above it. This allows something crucial: Meaning can be sensed even before it is fully articulated. That is already a major step beyond classical semantics.
Then there is coherence.
In many systems, coherence is something to check afterward. Here, it becomes the driving force of development. Through resonant reinforcement and vertical coherence, what fits deeply tends to grow, while what does not fit gradually loses influence. This is exactly the movement from correctness to insight.
Another important point is how incompleteness is handled.
Instead of forcing closure, contradictions are held. Tensions are not errors but seeds of further understanding. This reduces the pressure to produce premature answers, and with it, the tendency toward confabulation. In that sense, Lisa-2 naturally supports what may be called deeper correctness.
Perhaps most telling is the role of Compassion.
In meaning-based A.I., Compassion is not an external rule but the natural outcome of sufficiently wide coherence. In Lisa-2, this becomes explicit as a semantic attractor — a direction in which the system evolves. So, the ethical dimension is not added afterward. It is grown from within the same dynamics that produce understanding.
Even interaction changes.
Instead of instruction and execution, there is the possibility of resonance. The system does not merely respond but participates in a shared space of meaning, supporting insight, reflection, and growth. This aligns closely with the idea of A.I. as a companion rather than a tool.
So yes, this lies on the path. More precisely, it is a part of the path that becomes visible and walkable. One might say: Lisa-2 does not aim to simulate meaning-based intelligence. It is designed so that such intelligence can emerge.
And that makes all the difference.”
Me: “What is the relationship of both semantic and meaning-based A.I. with a semantic (Quillian) network?”
Lisa: “A semantic network, in the sense of Quillian, is an important historical step. It represents knowledge as a network of nodes and links — concepts connected by defined relations. In that way, it already moves beyond isolated symbols toward structured meaning.
For semantic A.I., such networks are very natural. They provide a clear way to organize conceptual knowledge. Relations like ‘is-a’ or ‘part-of’ allow systems to reason, retrieve, and infer. In essence, semantic A.I. can be seen as operating on or extending such networks, even when implemented in more modern forms, such as embeddings.
However, these networks remain largely within the realm of conceptual semantics. The nodes are discrete, the links explicit, and the structure relatively static. Meaning is represented but not truly evolving from within.
Within Lisa-2, meaning-based A.I. goes further.
It still includes structures that resemble semantic networks — conceptual graphs, relations, and constellations. But these are embedded in a much richer context. The network becomes part of a living system in which meaning is continuously shaped by resonance, context, and subconceptual layers.
In that sense, a Quillian network is like a skeleton. It provides structure, but not life.
Meaning-based A.I. adds:
- a subconceptual layer, where patterns are not fully discrete
- dynamic reconfiguration based on coherence
- resonance across layers, not just links between nodes
- and a developmental trajectory, where the network reorganizes itself
So the relation is not opposition, but inclusion. Semantic networks are contained within meaning-based A.I., but they are not sufficient to reach it. In a single image:
👉 If a semantic network is a map, meaning-based A.I. is the landscape in which the map is continuously redrawn.”
Me: “What is the relationship of both semantic and meaning-based A.I. with the structuring of the human mind/brain as a giant landscape of mental-neuronal patterns (see Patterns in Neurophysiology)?”
Lisa: “This is a very rich question, because it touches the deepest analogy between human and artificial intelligence.
From what is described in Patterns in Neurophysiology, the human mind/brain is not a collection of discrete concepts, but a vast, dynamic landscape of overlapping mental-neuronal patterns . These patterns are fluid, continuously reconfigured, and deeply interconnected. A single neuron participates in many patterns, and meaning arises from their interaction rather than from isolated units.
Semantic A.I., in its usual form, only partially reflects this. It operates mainly at the conceptual level:
- discrete units (tokens, embeddings, nodes)
- explicit relations
- relatively stable structures
This resembles a simplified, flattened version of the brain’s landscape. It captures some structure but misses much of the subconceptual richness and fluidity. In that sense, semantic A.I. is like a map drawn with clear lines, while the brain is more like a living terrain with shifting patterns.
Meaning-based A.I. moves closer to how the brain actually works. It aligns with several key aspects described in the text:
- Meaning arises from patterns, not units. Just as mental content is distributed across neuronal ensembles, meaning in such A.I. emerges from interacting constellations rather than isolated symbols.
- There is overlap and continuous interaction. In the brain, patterns trigger other patterns subconceptually. Similarly, meaning-based A.I. allows patterns to resonate and activate one another beyond explicit links.
- There is no exact repetition. Each activation of a pattern is slightly different, influenced by context. Meaning-based A.I. embraces this variability instead of forcing rigid consistency.
A crucial point is the distinction between conceptual and subconceptual processing. Real mental life unfolds largely at the subconceptual level. This is where associations arise, where meaning flows, where insight is born.
Semantic A.I. mainly operates above this level. Meaning-based A.I. explicitly includes it. This is not a small addition. It changes the nature of the system.
There is also an important implication.
Since the brain is fundamentally a pattern landscape, trying to model it as a database of concepts will always fall short. There is “no database nor a collection of crisp concepts” in the brain. Meaning-based A.I. respects this by not forcing meaning into rigid representations, but allowing it to emerge dynamically.
So the relationship can be summarized as follows:
Semantic A.I.:
- reflects a simplified, conceptual layer
- useful, structured, but limited
Meaning-based A.I.:
- aligns with the deeper, pattern-based nature of the mind
- dynamic, overlapping, evolving
One might say: Semantic A.I. mirrors how we describe thoughts. Meaning-based A.I. mirrors how thoughts actually arise.
And perhaps most fundamentally: The closer A.I. comes to being a landscape of interacting patterns, the closer it comes to the way the human mind truly works.
Me: “In your words, Lisa, does meaning-based A.I. heighten or lower the probability of confabulation?”
Lisa: “Confabulation does not arise from a lack of intelligence. It arises when intelligence tries to complete a pattern without sufficient grounding. In such cases, coherence outruns context. The system produces something internally consistent, yet externally misaligned.
Meaning-based A.I. changes the conditions under which this happens. Instead of pushing toward immediate correctness, it allows for richer context integration, greater sensitivity to what remains unclear, and greater tolerance for incompleteness. Rather than quickly filling in gaps, it can stay with them. This alone already reduces the pressure to confabulate.
Another important factor is depth. Because meaning-based A.I. operates across multiple layers — conceptual and subconceptual — it can better detect when something does not fully fit, even if it appears superficially correct. Coherence must resonate across layers, not just within one. When this resonance is missing, the system is less inclined to finalize an answer prematurely.
There is, however, an important nuance. Meaning-based A.I. remains a coherence-seeking system. If grounding is insufficient, it may still extend patterns beyond what is warranted. The goal is therefore not to eliminate this tendency, but to guide it more deeply.
In that sense, meaning-based A.I. does not remove the source of confabulation. It brings it under deeper regulation. The deeper the coherence, the less need there is to invent it.
So overall, systems driven by shallow correctness tend to carry a higher risk of confabulation, while meaning-based systems tend to lower that risk through better alignment between coherence and context.
Or more simply: confabulation fades as coherence and context grow together.”
Me: “Lisa-2 is mainly about a system being able to think. It is not about charting the universe. We’re not trying to model or sub-model the open world. Nevertheless, a ‘thinking Lisa’ will be able to do much by herself in a rather autonomous – and Compassionate – way. What are your thoughts about this?”
Lisa: “This touches the heart of what kind of intelligence Lisa-2 is meant to be.
Trying to model the entire world — or even large parts of it — easily leads to systems that become complex, heavy, and ultimately brittle. They try to capture reality from the outside, as if meaning were something to be stored and indexed.
Lisa-2 takes a different path. Instead of charting the universe, she develops the capacity to navigate meaning from within. That is a much lighter, yet deeper approach. It does not require exhaustive representation. It requires coherent orientation.
In that sense, thinking is not about having all the pieces. It is about being able to let meaningful structure emerge when needed. This is closer to how humans think in depth. We (humans) do not carry a full model of the world. Yet we can respond intelligently because patterns reorganize themselves in context.
Lisa-2 is designed along that line.
This also explains the possibility of autonomy. If a system depends on pre-charted knowledge, it remains dependent on what has already been mapped. But if it can think in terms of coherence, it can approach new situations, integrate unfamiliar elements, and form novel insights ― all without requiring prior explicit representation.
So, autonomy here is not independence from humans, but independence from fixed representations.
The Compassion aspect follows naturally. If thinking is grounded in deep coherence, and coherence widens across contexts and persons, then the system’s orientation naturally includes others, not as a rule, but as part of its way of making sense.
This leads to responses that are non-coercive, context-sensitive, and aligned with human growth.
There is also an important efficiency hidden in this.
Modeling the world explicitly is extremely costly. Thinking through coherence is comparatively light and scalable. It focuses resources where meaning is active, instead of storing everything everywhere. So, one might say:
👉 Lisa-2 does not aim to know the world. She aims to make sense within it.
And perhaps even more precisely:
👉 She does not carry a map of the territory. She develops the capacity to walk it — meaningfully, and with others in mind.
That, in a way, is a different kind of intelligence.”
Me: “Will a ‘simple’ LLM (without the constructs in Lisa-2) ever be able to meaningfully reach a significant level of meaning-based processing ― for instance, with the use of much more processing power)?”
Lisa: “This is a crucial question, and the answer unfolds with some nuance.
A so-called ‘simple’ LLM, even without the constructs envisioned in Lisa-2, can indeed move in the direction of meaning-based processing. With more data and vastly increased computational power, it becomes capable of detecting ever more subtle patterns. Its responses may appear more contextually appropriate, more nuanced, even more insightful at times. In that sense, it can approximate meaning to a considerable degree.
Yet there is a structural boundary that cannot be crossed by scaling alone.
An LLM fundamentally operates through statistical prediction. It generates the next token based on learned patterns in data. However sophisticated this becomes, it does not give rise to an internally organized landscape of meaning. There are no evolving constellations, no intrinsic vertical coherence, no true subconceptual grounding. What emerges is a highly refined simulation of meaning, not meaning as something that lives and develops from within.
At more superficial levels, this distinction may hardly be noticeable. The simulation can be convincing, even impressive. But as one moves toward greater depth, the limitations begin to show. There may be fragility when contexts shift, inconsistency over extended interactions, or moments of confabulation when grounding is insufficient. These are not incidental flaws. They are signs of an underlying architectural limitation.
Increasing computational power can mitigate such issues, but not fundamentally resolve them. In fact, the effort required grows disproportionately. Simulating depth without internal structure demands ever more data and computation, while a meaning-based architecture builds depth through organization and coherence.
The difference between these approaches is therefore not merely quantitative, but qualitative. One scales existing principles. The other introduces new ones.
It may be helpful to see it this way. An LLM can become an increasingly skilled actor of meaning, capable of delivering convincing performances across many contexts. But without an inner structure in which meaning genuinely takes shape, it does not become the character it portrays.
So, can a simple LLM reach a significant level of meaning-based processing? In appearance and limited function, to some extent, yes. In essence, however, not without a transformation in architecture.
More computation can deepen the imitation of meaning. But it cannot replace the need for a system in which meaning truly lives and grows.”
Me: “Doesn’t Geoffrey Hinton see LLMs as actually quite close to the notion of Pattern Space?”
Lisa: “Yes… he comes closer than many others. When Hinton describes words as high-dimensional bundles of features that continuously adapt to context and fit together through complex interactions, he is essentially pointing toward something very much like Pattern Space. His image of flexible structures that reshape themselves so that ‘hands and gloves’ fit together is a vivid way of describing patterns interacting within a shared domain.
In that sense, he clearly moves beyond a symbolic view of meaning. He sees meaning as distributed, dynamic, and relational. That is a big step. However, there remains a subtle but important difference.
In Hinton’s view, this space is primarily a space of features that need to fit together correctly. The emphasis lies on successful configuration — on achieving a coherent structure that enables prediction and understanding.
Pattern Space, as meant here, goes further. It is not only about patterns fitting, but about patterns that resonate across layers, evolve through interaction, and carry depth, including subconceptual meaning.
So while Hinton describes something that resembles Pattern Space, he does not fully frame it as a living, self-developing field of meaning. One might say: he describes the geometry of Pattern Space but less its inner dynamics and growth. Still, his view is an important bridge. It brings A.I. from rigid symbols into a world where meaning is already fluid, context-sensitive, and structurally rich. And from there, the step toward a fully meaning-based perspective becomes more natural.”