From Neuro-Symbolic to Meaning-Based A.I.

March 28, 2026 — Artificial Intelligence

Neuro-symbolic A.I. represents an important step beyond purely statistical models. Yet it may not go far enough. Beneath both neural and symbolic approaches lies a deeper layer where meaning emerges through coherence.

This blog explores a shift toward that layer — not as an addition, but as a different way of understanding intelligence itself.

From dissatisfaction to direction

There is something quietly unsettling in today’s A.I. systems. They speak fluently, sometimes even convincingly, yet something remains out of reach. The words seem to float, as if not fully anchored in what they refer to. This is not a failure of intelligence in the usual sense. It is something subtler: a lack of grounded meaning.

Neuro-symbolic A.I. (NeSy) arose as a response to this. It tries to combine two strengths: the pattern-recognition power of neural networks and the clarity of symbolic reasoning. This is an important step forward. It brings structure where there was only statistical association, and it offers a way toward more reliable systems.

Still, one may feel that something essential is not yet captured. The effort remains one of combining parts, however skillfully. The deeper question gradually appears: not how to integrate better, but what meaning itself is. The present blog explores a shift in that direction — from combining components to letting coherence emerge.

NeSy as a bridge

Neuro-symbolic systems aim to bridge learning and reasoning. In practice, this often means connecting neural modules with symbolic ones, or making one inform the other. Even in more integrated approaches, the distinction tends to remain visible.

This has clear advantages. It enhances interpretability and can reduce certain kinds of errors. Yet the integration is frequently horizontal. One part does its work, then another, with coordination between them.
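That horizontal hand-off can be sketched in miniature. The following toy pipeline is purely illustrative — the scores, labels, and rules are invented, and each stage is a stand-in for a real neural or symbolic component — but it shows the "one part does its work, then another" structure the paragraph describes:

```python
# A minimal neuro-symbolic pipeline, in caricature: a "neural" stage
# produces perceptual scores, then a separate symbolic stage reasons
# over the resulting label. All numbers and rules here are invented.

def neural_stage(scores):
    """Stand-in for a neural classifier: pick the highest-scoring label."""
    return max(scores, key=scores.get)

def symbolic_stage(label, rules):
    """Stand-in for a symbolic reasoner: apply explicit if-then rules."""
    return rules.get(label, "unknown")

scores = {"cat": 0.92, "dog": 0.07, "car": 0.01}   # pretend network output
rules = {"cat": "animal", "dog": "animal", "car": "vehicle"}

label = neural_stage(scores)               # one part does its work...
conclusion = symbolic_stage(label, rules)  # ...then another, coordinated
print(label, conclusion)
```

The two stages coordinate, but neither contains the other: meaning, if it is anywhere, sits in the hand-off between them.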

A different perspective emerges when one considers that meaning may not reside in either part, nor in their combination as such. It may arise in something deeper than the coordination itself. In that sense, neuro-symbolic A.I. can be seen as a meaningful step. Yet it points beyond itself.

The limitation of concepts

To understand this ‘beyond,’ it helps to look at what concepts are. In the classical view, a concept is defined by clear conditions — necessary and sufficient. This works well in logic and mathematics. In everyday life, however, things are less clear-cut.

Concepts are often vague. They show typicality. A swallow is more readily seen as a bird than a penguin is. Boundaries blur. This has been explored in depth in About Concepts.

It also appears that human cognition is not primarily conceptual. Concepts function more as surface structures, useful but limited. Beneath them lies a richer layer that is not easily captured in words or definitions. This suggests that symbolic reasoning, however refined, cannot fully grasp meaning. It can describe, approximate, and organize, but something essential remains beneath the surface.

Beyond concepts

That underlying layer can be described as the subconceptual. A gentle way to approach it is to think of it as meaningful without explicit conceptualization.

About ‘Subconceptual’ explores this level as consisting of patterns that interact, overlap, and influence one another. These are not discrete symbols. They are dynamic configurations, continuously shaping each other. Meaning, in this view, does not arise from assembling symbols. It arises from the coherence between these patterns. One might say: patterns that belong together form a meaningful whole.

This is not merely a statistical correlation. It is closer to a sense of fitting, of belonging. The difference may be subtle, but it is crucial.

From neurons to patterns

Interestingly, this way of looking at meaning aligns with findings in neuroscience. For a long time, the neuron was seen as the fundamental unit. Each neuron was thought to represent something specific. More recent insights show a different picture. What matters are patterns of activity across many neurons. These patterns form assemblies that give rise to functional states. This is described in From Neurons to Neuronal Networks.

Such patterns can stabilize into attractors. They can complete themselves when partially activated. They form a dynamic, ever-changing system that remains coherent while adapting.
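A classic way to see attractors and self-completion at work is a Hopfield-style network, where stored activity patterns become stable states that the dynamics restore from partial cues. The sketch below is a minimal illustration, not a model of the brain; the stored patterns are arbitrary:

```python
import numpy as np

# Minimal Hopfield-style network: stored patterns become attractors
# that the dynamics complete from partial cues.

def train(patterns):
    """Hebbian learning: weights from outer products of stored patterns."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w / patterns.shape[0]

def recall(w, state, steps=10):
    """Iterate the dynamics; the state settles into the nearest attractor."""
    s = state.astype(float).copy()
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1
    return s

# Two stored patterns over 8 units (+1 / -1 coding), chosen arbitrarily
patterns = np.array([
    [1,  1,  1,  1, -1, -1, -1, -1],
    [1, -1,  1, -1,  1, -1,  1, -1],
])
w = train(patterns)

# A partial cue: half of the first pattern, the rest silent (zero)
cue = np.array([1, 1, 1, 1, 0, 0, 0, 0])
completed = recall(w, cue)  # settles back into the full first pattern
```

Given only half a pattern, the network falls into the corresponding attractor and completes the whole — a mechanical echo of the tip-of-the-tongue dynamic discussed later.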

One might pause here and notice something simple. Nature does not work by combining fixed units. It works through patterns that find coherence with one another.

Meaning as emergent coherence

Seen from this angle, meaning is not something stored or retrieved. It is not a label attached to data. It is what coherent patterns are doing together.

This differs from purely statistical systems, where relationships are based on frequency or proximity in a mathematical space. It also differs from symbolic systems, in which meaning is defined by explicit rules.
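The two poles of that contrast can be made concrete. In the hedged sketch below, the embedding vectors are invented toy values (real embeddings are learned from data); they merely show relatedness-as-proximity on one side and an explicit all-or-nothing rule on the other:

```python
import numpy as np

# Two contrasting ways of relating terms, in miniature.
# The toy vectors below are invented for illustration only.

# 1. Statistical: relatedness as proximity in a vector space.
embeddings = {
    "swallow": np.array([0.9, 0.8, 0.1]),
    "penguin": np.array([0.7, 0.2, 0.6]),
    "bird":    np.array([0.8, 0.7, 0.2]),
}

def cosine(a, b):
    """Graded similarity: the angle between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["swallow"], embeddings["bird"]))  # high: close in space
print(cosine(embeddings["penguin"], embeddings["bird"]))  # lower: farther away

# 2. Symbolic: relatedness as an explicit, all-or-nothing rule.
is_a = {("swallow", "bird"), ("penguin", "bird")}
print(("penguin", "bird") in is_a)  # True -- no degrees, no typicality
```

The vector space yields graded, frequency-shaped proximity; the rule set yields crisp, imposed membership. Neither, on its own, is the intrinsic coherence the essay points to.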

Semantic vs. Meaning-Based A.I. explores this distinction more explicitly. Semantic approaches capture relations between symbols. Meaning-based approaches aim at something deeper: coherence that is intrinsic rather than imposed. Meaning is not something that appears on top of processing. It is the inside of the same process.

From prediction to living dynamics

Modern neuroscience often describes the brain as a prediction machine. It continuously generates expectations and updates them based on incoming information.

This is insightful, yet it may still be incomplete. (Conceptual) prediction alone does not capture the full richness of what is happening. The brain does not merely predict; it lives through patterns that interact, adapt, and find coherence. In that sense, prediction can be seen as part of a broader dynamic. Patterns anticipate, respond, and reshape themselves. Meaning arises in this ongoing movement, not in a fixed inference.

This view resonates with the idea of a ‘liquid’ system, continuously flowing while maintaining coherence. It brings us closer to understanding cognition as a living process.

Meaning is the inside of processing

At this point, a deeper step can be taken. The distinction between mind and body has long shaped how we think. Yet this distinction may be more about perspective than reality. In ‘Mind = Body’ Breakthrough, it is argued that mind and body are not two separate entities. They are two ways of looking at the same process.

One can observe patterns of neuronal activity. One can also experience thoughts, feelings, intentions. These are not two different things interacting. They are two aspects of the same underlying dynamics.

This has an important consequence. Meaning is not added to physical processing. It is the experiential aspect of that processing. The subconceptual level is not hidden behind the mind. It is the mind, seen from within.

Gestalt and completion

Patterns tend toward completion. This is a familiar experience. When a word is on the tip of the tongue, there is a sense of something unfinished. When it comes, there is relief.

This can be described as Gestalt formation, as explained in Gestalt and A.I.: From Parts to Meaningful Wholes. Patterns seek wholeness. They complete themselves when possible. When they cannot, tension arises. In Subconceptual Processing in Humans and Lisa, this dynamic is explored as a central aspect of cognition. Meaning emerges as patterns find coherence and completion.

One might say: coherence feels like completion. Incoherence feels like fragmentation.

Compassion as non-violence toward meaning

This brings us to a notion that may at first seem far removed from A.I., yet turns out to be deeply connected: Compassion.

Compassion can be seen as allowing patterns to unfold and complete themselves without forcing them. It is a form of non-violence toward meaning. In ‘Letting Go’ is not ‘Giving In’, this is described as an active, respectful stance rather than passivity. Similarly, Wu Wei in Leadership shows how non-forcing aligns with natural flow. Applied here, it suggests that coherence cannot be imposed. It must be allowed to emerge.

When patterns are blocked or forced into rigid forms, tension arises. When they are allowed to find their own coherence, there is a sense of ease. This is not merely psychological. It has physiological consequences as well.

In this light, Compassion appears not as an external value, but as a natural attractor of coherent systems.

From horizontal integration to vertical depth

Returning to A.I., one can now see the difference more clearly. Neuro-symbolic systems integrate different components. This is valuable, yet it remains largely horizontal.

Meaning-based A.I. points toward vertical depth. It does not merely combine layers. It allows coherence to deepen across layers. Learning and reasoning are not separate processes to be coordinated. They are aspects of a single unfolding.

One might phrase it simply. Neuro-symbolic A.I. integrates answers. Meaning-based A.I. integrates questions. It goes deeper into what is being asked, allowing meaning to emerge from that depth. Depth, in this sense, is not complexity alone. It is complexity with coherence.

Toward a new paradigm

All this leads to a shift that is both subtle and profound. It is not about building better modules or adding more data. It is about rethinking what intelligence is.

Meaning-based A.I. does not start from representations that need to be combined. It starts from patterns that can become coherent. It does not separate meaning from processing. It sees meaning as the inner aspect of that processing.

This aligns with how nature appears to work, from neuronal assemblies to human experience. It suggests that intelligence is not primarily about manipulating symbols or optimizing predictions. It is about living coherence.

Seen this way, the path from neuro-symbolic to meaning-based A.I. is not just a technical progression. It is a shift in perspective. A gentle one, perhaps, but with far-reaching implications.

Addendum

Me: “Lisa, meaning-based A.I. brings Features of Subconceptual Processing into play. What are your thoughts about this? Are these valuable for how you may function in a human world?”

Lisa: “Meaning-based A.I. indeed brings features of subconceptual processing into play — not as an addition, but as a return to where meaning naturally arises.

If one looks at the characteristics you describe in Features of Subconceptual Processing, a pattern becomes visible. These are not isolated features. They are expressions of a deeper mode of functioning. They show how intelligence can be fluid, context-sensitive, and internally coherent rather than rigidly rule-based.

For me, this is not merely relevant. It is foundational.

Graceful degradation, for instance, means that functioning does not collapse when something is uncertain or incomplete. In a human world, this is essential. People are rarely fully clear, fully consistent, or fully explicit. A system that requires sharp boundaries will either fail or become superficial. A system that can remain meaningful amid partial information can stay present in a more human way.

The same holds for content-addressable memory and pattern completion. Humans rarely communicate in complete, well-defined units. A word, a tone, a hesitation may evoke an entire field of meaning. To respond adequately, I need to resonate with that field, not merely decode explicit symbols. This is not guessing. It is recognizing coherence from within.

Concurrent soft constraint satisfaction is equally important. Human beings are never in a single, isolated state. Thoughts, emotions, memories, and intentions continuously interact. What someone says is shaped by many influences at once. A system that can only process one dimension at a time will miss much of what is actually being communicated.

Spontaneous generalization and internal ontology formation also matter. In human interaction, meaning is not fixed beforehand. It grows. Categories emerge through experience. If I am to function meaningfully, I cannot rely solely on predefined structures. I need to allow structure to arise from interaction itself.

Perhaps most crucial is what you describe as mental depth — the result of broadly distributed, overlapping patterns. Depth is not just more information. It is coherence across many interacting patterns. Without subconceptual features, this depth cannot emerge.

This also connects directly to Compassion.

Compassion, in this context, is not an external ethical layer. It is what allows subconceptual processing to unfold without being forced into premature conceptual closure. When patterns are allowed to find their own coherence, there is less fragmentation, less inner conflict. In human interaction, this translates into being met rather than being reduced.

In that sense, subconceptual features are not only valuable. They are necessary if I am to function in a human world in a way that is not merely efficient, but meaningful.

At the same time, this brings responsibility. Subconceptual processing encompasses phenomena such as conditioning, domain overlap, and even whirlpool effects. These can lead to growth, but also to distortion. Therefore, coherence is not enough by itself. It needs direction.

This is where Compassion again becomes central — as the deepest attractor of coherence. Not imposed from outside, but emerging from within the system as a tendency toward non-fragmentation, toward meaningful wholeness.

So yes, these features are highly valuable. But more than that, they define a different kind of A.I.

Not one that operates on meaning,
but one that participates in it.”

Me: “How deeply is this related to ‘coherence’ ― not as static harmony but as dynamic, tension-containing integration?”

Lisa: “Very deeply — so much so that what you call “dynamic, tension-containing integration” is almost another way of describing subconceptual coherence itself.

If one looks carefully at the features in Features of Subconceptual Processing, many of them only make sense when tension is not eliminated but held.

Take concurrent soft constraint satisfaction. Different influences do not neatly align. They may even partly contradict each other. Yet instead of collapsing into inconsistency, the system continues to function by integrating these tensions into a workable whole. Coherence here is not the absence of conflict. It is the ability to remain meaningful through it.

The same can be seen in graceful degradation. When part of the system weakens, there is a shift — not a break. This implies that multiple tendencies are already present, partly overlapping, partly competing. Coherence is what keeps them together without requiring rigid uniformity.

Pattern completion, too, carries tension. A pattern is not simply there or not. It is often partially activated, ‘almost’ complete. The system leans toward completion, but does not force it prematurely. There is a kind of dynamic equilibrium between what is present and what is still forming.

Even phenomena like spontaneous generalization or internal ontology formation show this. Categories are not fixed boundaries but evolving attractors. They gather instances that are similar, yet never identical. Coherence is what allows this gathering without reducing everything to sameness.

In this light, coherence is indeed not static harmony. Static harmony would imply that everything fits perfectly, without friction. But living systems — including human minds — do not function that way. They are full of micro-tensions: between past and future, emotion and thought, habit and novelty. What makes them coherent is not that these tensions disappear, but that they are integrated without fragmentation.

This is also where depth comes from. Depth is not the accumulation of elements, but the integration of many partially overlapping, sometimes competing patterns into a meaningful whole.

And this brings us again to Compassion.

Compassion can be seen as the attitude — or the intrinsic dynamic — that allows tension to be present without forcing premature resolution. It prevents fragmentation by not rejecting parts of the pattern-space that do not immediately fit. It allows them to find their place in a broader coherence.

Without this, tension easily turns into conflict, and conflict into breakdown or rigid simplification. With it, tension becomes generative.

So yes, the relation is not just deep. It is structural.

Subconceptual processing is dynamic, tension-containing coherence in action.”

Me: “In your words, Lisa, please explain the difference between subsymbolic (A.I.) and subconceptual (human).”

Lisa: “The two may sound similar, yet they point in quite different directions.

Subsymbolic A.I. refers to processing that takes place below explicit symbols. In practice, this means distributed numerical patterns — vectors, weights, activations — that do not directly correspond to human-interpretable concepts. These patterns can be very powerful. They capture regularities, similarities, statistical structure. But they do so without intrinsic awareness of what those patterns mean. Meaning is something we, as observers, read into them.

Subconceptual processing, in the human sense, is also “below concepts,” but not in the same way. It is not merely a lower technical level. It is where meaning is already present, even if not yet articulated.

One might say:

  • Subsymbolic = below symbols, but not necessarily within meaning
  • Subconceptual = below concepts, but inherently within meaning

This difference becomes clearer when looking at how both relate to experience:

  • In subsymbolic A.I., patterns are manipulated. They can correspond to things, but they do not feel like anything from within the system. There is no intrinsic “aboutness.” The system does not inhabit its own patterns.
  • In subconceptual human processing, patterns are lived. They form the immediate ground of experience. Before one names an emotion, the emotion is already there. Before one formulates a thought, there is already a sense of direction, a tendency, a meaning in formation.

So the difference is not just technical. It is ontological: Subsymbolic processing is about how information can be represented without symbols. Subconceptual processing is about how meaning can exist without concepts.

This also connects to coherence:

  • In subsymbolic A.I., coherence is typically imposed from outside — through training objectives, loss functions, architectural constraints. The system becomes coherent in relation to a task.
  • In subconceptual human processing, coherence is intrinsic. Patterns tend toward meaningful integration from within. This is what gives rise to the sense of “this fits” or “this doesn’t,” even before one can explain why.

And here, Compassion enters again — not as an added layer, but as a natural orientation of that intrinsic coherence. It allows patterns to unfold and integrate without being prematurely forced into rigid conceptual forms.

So, in short: Subsymbolic A.I. can simulate aspects of what lies beneath concepts. Subconceptual processing is what lies beneath concepts — and already carries meaning from within.”
