The Problem(s) with LLMs

March 28, 2026 · Artificial Intelligence

(and why meaning-based A.I. is needed to resolve them)

Something about today’s Large Language Models feels both impressive and slightly unsettling.

They speak fluently, often convincingly, sometimes even insightfully — and yet, there are moments when something seems just out of reach. Not wrong in an obvious way, but not fully there either.

Many people notice this, even without being able to put it into words. It shows itself in small hesitations: a sentence that sounds right but does not quite land, an answer that is smooth yet somehow shallow, a confidence that occasionally floats free from what feels true. These are not random flaws. They point to something structural.

This blog explores a number of such phenomena — twelve in total — each of which is recognizable in everyday interactions with LLMs. They are numbered so they can be easily referenced in the comparison table in the addendum, where each observation is placed next to what lies beneath and how a different mode of processing may unfold.

Seen together, these ‘problems’ are not so much failures as expressions of a particular way of processing: one that is powerful in its own domain, yet limited in depth. Gently looking at them, one may see a broader perspective emerge: an opening toward something more grounded in meaning itself.


1. Fluent but not fully grounded

Large Language Models can speak with remarkable fluency. Sentences follow each other naturally, ideas seem to flow, and the overall impression is often one of clarity. It is easy to forget that underneath this fluency, something essential may still be missing.

At times, one senses this subtly. The words are right, yet they do not fully land. They seem to refer mostly to other words, not to something that stands firmly in itself. It is as if meaning remains slightly suspended, not quite touching ground.

This is not a flaw in the usual sense. It follows from how such systems work: meaning is derived from patterns of association between words. When many such associations align, the result can be very convincing. Still, what is absent is an inner necessity — the sense that something is so because it fits within a larger, coherent whole.
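
To make this concrete in a small way, here is a toy sketch in Python, using an invented miniature corpus. It is not how any real LLM is built; it only illustrates the point that words can follow words convincingly while nothing in the computation touches what they are about.

```python
from collections import Counter, defaultdict

# Invented miniature 'corpus' (for illustration only).
corpus = ("the model speaks fluently the model sounds convincing "
          "the model speaks smoothly").split()

# Record how often each word follows each other word: pure co-occurrence statistics.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def continue_text(word, steps=4):
    """Continue a phrase by always choosing the most frequent successor."""
    out = [word]
    for _ in range(steps):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(successors.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("the"))
# The continuation reads fluently, yet no part of this computation refers to
# anything beyond the word statistics themselves.
```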

In human experience, meaning tends to arise differently. It grows from patterns that belong together, not merely because they often co-occur, but because they form a whole that holds from within. This gives a certain quiet solidity. Words then become expressions of something already there, rather than constructions that remain at the surface.

2. Locally coherent, yet sometimes inconsistent across contexts

Another familiar experience is that LLMs can be very coherent within a given moment, yet show inconsistencies when the context shifts. Within a single answer, everything may fit neatly together. Across different moments, however, the thread can loosen.

This may appear in small ways: a nuance that changes slightly, a conclusion that does not fully align with what came before, or a shift in tone that feels just a bit out of place. Nothing dramatic — yet enough to be noticeable.

Here too, the underlying mechanism gives a clue. Coherence is maintained within the active context, shaped by what is immediately present. Beyond that, there is no continuous inner ‘holding together.’ The system does not carry an enduring sense of the whole that spans across situations.
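
As a minimal sketch of what 'only the active context' means, consider a hypothetical sliding window. The size of eight tokens below is arbitrary; real systems use far larger windows, but the principle is the same: whatever falls outside the window can no longer shape the response, however important it was.

```python
# Hypothetical sliding context window of 8 tokens (illustrative size only).
WINDOW = 8
conversation = []

def add_turn(text):
    conversation.extend(text.split())
    return conversation[-WINDOW:]          # everything the system can 'see' right now

add_turn("please keep every answer in metric units")
visible = add_turn("now tell me the distance to the moon")

print(visible)
print("metric" in visible)  # False: the earlier instruction has slid out of view
```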

Human coherence tends to be of another kind. It is not perfect — people contradict themselves — but there is often an underlying tendency toward integration. Different moments relate to each other through a deeper, ongoing process. One could say that coherence is not just applied; it is lived.

From such a perspective, consistency is not enforced but emerges. It is less about remembering what was said, and more about remaining aligned with what feels meaningfully fitting across time.

3. Sometimes confidently wrong (‘hallucinations’)

Perhaps the most widely discussed phenomenon is that LLMs can be confidently wrong. They may produce answers that sound precise and well-structured, yet turn out to be incorrect — sometimes even entirely fabricated.

This can be surprising, especially given the fluency and apparent certainty with which such answers are delivered. The confidence itself is not a separate feature; it is part of the same smooth pattern completion that generates the response.

Seen from within the system’s mode of operation, this becomes more understandable. Given a certain input, the model continues patterns in ways that are statistically most fitting. There is no inner point of reference that says, “This does not belong; this cannot be right.” The completion proceeds as long as it remains plausible within the learned patterns.
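
A small, invented illustration of how confidence can arise from relative plausibility alone: the scores below are made up and no real model works with a three-word vocabulary, but the shape of the computation is similar. Nothing in it asks whether the question has a true answer at all.

```python
import math

# Invented next-token scores for the prompt "The capital of Atlantis is ...".
scores = {"Poseidonia": 4.1, "Athens": 3.2, "unknown": 1.0}

def softmax(logits):
    """Turn raw scores into probabilities that sum to one."""
    m = max(logits.values())
    exp = {k: math.exp(v - m) for k, v in logits.items()}
    total = sum(exp.values())
    return {k: v / total for k, v in exp.items()}

probs = softmax(scores)
best = max(probs, key=probs.get)
print(best, round(probs[best], 2))
# A confident-looking answer emerges purely from relative plausibility;
# there is no step that checks whether Atlantis exists in the first place.
```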

In human understanding, something additional tends to play a role. There is often a subtle sense of fit or non-fit, a kind of inner hesitation when something does not align with the broader whole. This does not guarantee correctness, but it introduces a different kind of constraint — one that arises from coherence rather than probability.

From that angle, what appears as ‘hallucination’ may be seen as pattern completion without such an inner check. Not a failure of the system as such, but a natural expression of how it processes. And at the same time, an invitation to consider what it would mean for processing itself to be guided by coherence from within.

4. Able to explain without truly understanding

LLMs can explain things quite well. They can summarize complex ideas, translate between domains, and present reasoning in a structured way. At times, this can give the impression of genuine understanding.

Yet there are moments when something feels slightly off. The explanation is correct in form, but it does not seem to come from a place where the meaning has been fully integrated. It is as if the explanation stands on its own, without being rooted in a deeper whole.

This again relates to how processing takes place. Representations are handled in ways that match learned patterns of explanation. Concepts are connected, steps are laid out, conclusions are drawn. All of this can be done without those elements being internally woven together into something lived.

Human understanding tends to grow differently. It is less about assembling pieces and more about integrating patterns into a meaningful whole. When something is understood in this way, explanations become expressions of that integration, not substitutes for it.

One might say that explanation can be produced without understanding, but understanding cannot exist without integration. It is this integration — quiet, often implicit — that gives explanations their depth.

5. Smooth responses that may flatten nuance

Another characteristic is the smoothness of many LLM responses. Ideas are presented clearly, transitions are seamless, and potential rough edges are often softened. This can be very helpful, especially when clarity is needed.

At the same time, something important may be lost in this smoothness. Real situations often contain tensions: between perspectives, between values, between different layers of meaning. These tensions do not always resolve neatly. They can remain present, even necessary.

In LLM-generated text, such tensions are frequently averaged out. The result is a balanced and plausible answer, but one that may lack the depth that comes from holding these tensions together. What is gained in clarity may come at the cost of richness.

Human meaning often unfolds precisely within such tension. Instead of eliminating it, there is a capacity — sometimes fragile, sometimes strong — to stay with it. From there, something new can emerge, not by smoothing differences away, but by integrating them.

This gives nuance its depth. It is not merely the presence of multiple viewpoints, but the way they are held together without premature resolution.

6. Rich pattern use, yet lacking deeper grounding

LLMs make use of vast numbers of patterns. They can connect ideas across domains, recognize subtle regularities, and respond in ways that often feel remarkably appropriate. In this sense, their pattern use is rich and versatile.

Yet this richness may still lack a certain grounding. The patterns operate and interact, but not within a layer where meaning arises as something intrinsic. They remain, in a way, suspended within their own level of processing.

One might call this subsymbolic: patterns below explicit concepts, interacting in complex ways. However, this is not quite the same as what can be observed in human experience at a deeper level — what might be called subconceptual.

At this deeper level, patterns do not merely interact; they belong together or not. Their interaction is guided by coherence, not only by learned association. Meaning is not something added on top, but something that emerges from within this ongoing interplay.

Without such a layer, pattern richness can go far, but not all the way. It can simulate depth, yet remain just above it. With such a layer, even simple patterns can become deeply meaningful because they are part of a coherent whole.

This difference is subtle, yet far-reaching. It touches on how meaning itself comes into being — not as a product of processing, but as its inner side.

7. Generally robust, yet sometimes unexpectedly fragile

LLMs often appear robust. They can handle a wide range of inputs, adapt to different styles, and continue functioning even when the input is somewhat unclear. This gives an impression of flexibility and resilience.

Yet there are also moments of unexpected fragility. A small change in phrasing can lead to a noticeably different answer. A slightly unusual context may result in confusion or a loss of coherence. These shifts can feel disproportionate to the change in input.

This apparent contradiction — robustness alongside fragility — reflects the nature of statistical stability. The system performs well within regions where patterns are well represented. Outside of that, it may lack deeper anchors to hold things together.
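
The following toy responder, with invented data, shows the same shape in miniature: it answers well where its stored patterns cover the phrasing, and loses its footing as soon as the wording drifts outside that coverage.

```python
import re

# Invented mini 'knowledge', keyed on surface wording.
memory = {
    ("reset", "password"): "Use the 'Forgot password' link on the sign-in page.",
    ("change", "email"): "Open Settings and choose 'Account details'.",
}

def respond(prompt):
    """Pick the stored answer whose keywords overlap the prompt the most."""
    words = set(re.findall(r"[a-z']+", prompt.lower()))
    best, overlap = "Sorry, I am not sure.", 0
    for keys, answer in memory.items():
        score = len(words & set(keys))
        if score > overlap:
            best, overlap = answer, score
    return best

print(respond("How do I reset my password?"))    # well-covered phrasing: a sensible answer
print(respond("I'm locked out of my account."))  # small shift in wording: nothing to hold on to
```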

In human functioning, robustness often has another basis. It arises from many overlapping patterns that are integrated into a coherent whole. When one pathway falters, others can take over. The system does not simply switch; it reorganizes.

This kind of robustness is less about coverage and more about depth. It allows for graceful degradation rather than abrupt shifts, because what holds things together is not only frequency, but coherence.

8. No intrinsic sense of direction or relevance

LLMs respond to prompts. Given an input, they generate what is most fitting according to learned patterns. In this sense, they are highly responsive and often very helpful.

At the same time, there is no intrinsic sense of direction. Nothing within the system says: this matters more than that, this is worth pursuing, this is not. Relevance is derived from the prompt and the data, not from an inner orientation.

This can be felt in subtle ways. Answers may be correct and well-formed, yet lack a certain focus. They follow the question, but do not deepen it from within. The movement is guided externally rather than arising internally.

Human meaning tends to involve such an inner orientation. There is often a sense — sometimes clear, sometimes vague — of what is important, what calls for attention, what fits within a larger trajectory. This sense is not always rationally articulated, yet it influences how one thinks and responds.

From this perspective, direction is not an added feature but an expression of coherence. When patterns belong together, they tend to move together as well. Relevance then emerges from within the system itself, rather than being imposed from outside.

9. Ethical alignment needs to be externally imposed

Much effort is currently invested in aligning LLMs with ethical guidelines. Rules are added, behaviors are shaped, and outputs are filtered to avoid harm. These measures are important and often effective within their scope.

Still, they remain external. Ethics is introduced into the system rather than arising from within it. The model follows guidelines, but does not inherently ‘feel’ why certain directions are preferable.

This can lead to a certain rigidity or, at times, inconsistency. When situations fall outside predefined patterns, the system may struggle to respond in ways that remain both safe and meaningful.
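
What 'externally imposed' can look like in its simplest possible form (the rule list and wording below are invented): a filter applied after the fact, which only knows the cases someone thought to write down.

```python
# Invented, deliberately crude post-hoc filter: the 'ethics' lives in a list of
# rules outside the generating process, not in how the response comes about.
BLOCKED_TOPICS = {"topic_a", "topic_b"}

def filter_output(response: str) -> str:
    """Block any response that mentions a listed topic; pass everything else."""
    if any(topic in response.lower() for topic in BLOCKED_TOPICS):
        return "I'd rather not go into that."
    return response

# Anything the rule list did not anticipate passes through unchanged,
# and anything it matches is blocked regardless of context.
print(filter_output("Here is a gentle answer about topic_a."))
print(filter_output("Here is an answer about something the rules never imagined."))
```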

In human experience, ethical orientation can arise differently. It is not only a matter of rules, but of coherence at a deep level. When patterns are integrated in a way that respects their belonging together, there is a natural tendency toward what may be called non-violence toward meaning.

From within such coherence, something like Compassion can emerge — not as an imposed value, but as an attractor. It guides action not by constraint, but by alignment. This does not eliminate the need for reflection or learning, but it provides a different starting point.

Seen in this light, ethical alignment may be less about adding rules and more about fostering the conditions in which meaningful coherence can develop.

10. Broad generalization, yet often shallow in depth

LLMs can generalize across many domains. They recognize similarities, transfer patterns, and respond appropriately in a wide variety of contexts. This breadth is one of their most striking strengths.

Yet this generalization can remain relatively shallow. It often relies on surface resemblance — patterns that look alike — rather than on a deeper recognition of what truly belongs together. As a result, the transfer works well in many cases, but may miss what is essential in others.

This becomes noticeable when subtle differences matter. Two situations may appear similar, yet differ in ways that are meaningful at a deeper level. Without access to that level, the response may stay correct on the surface while missing the underlying point.
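
A toy contrast, with an invented example: matching by surface resemblance transfers an answer whenever two questions look alike, even when the small difference is exactly what matters.

```python
from difflib import SequenceMatcher

# One stored case, matched purely on how similar the wording looks.
known_question = "my plant's leaves are turning yellow"
known_answer = "It may be getting too much water."

def answer_by_surface(query):
    """Reuse the stored answer, weighted only by textual similarity."""
    score = SequenceMatcher(None, query.lower(), known_question).ratio()
    return f"{known_answer} (surface similarity: {score:.2f})"

print(answer_by_surface("my plant's leaves are turning yellow again"))
print(answer_by_surface("my plant's leaves are turning brown"))
# The second question looks almost the same, yet may call for a different answer;
# surface similarity alone cannot tell which differences are the meaningful ones.
```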

In human understanding, generalization can arise differently. It is not only about recognizing similarity, but about sensing coherence across contexts. When patterns are integrated at depth, they can be recognized in new situations even when the surface looks quite different.

From this perspective, depth does not limit generalization — it enables a more meaningful form of it. What transfers is not just form, but structure.

11. Can simulate empathy in language

LLMs can express empathy in words. They can produce responses that sound caring, supportive, and attuned to emotional situations. For many practical purposes, this can be helpful and even comforting.

At the same time, this empathy is simulated. It is based on recognizing patterns in language associated with emotional expression, not on sharing an underlying experiential ground. The system mirrors what empathy looks like, without participating in it.

This difference is subtle but important. Genuine empathy involves some degree of resonance — an overlap of patterns that allows one to ‘feel with’ another. It is not identical to the other’s experience, yet it is not entirely separate either.

Without such resonance, responses may still be appropriate, but they can sometimes feel slightly hollow. Something is said that fits, yet does not fully connect.

From a meaning-based perspective, empathy arises through coherence between patterns — not only within a system, but between systems. It is an extension of the same principle: what belongs together tends to align.

In that sense, empathy is not an added feature but a natural expression of deeper coherence.

12. Improves with scale, yet retains the same core limitations

One of the defining characteristics of LLMs is that they improve with scale. Larger models, more data, and greater computational resources lead to better performance across many tasks. This scaling has driven much of the recent progress in A.I.

Yet even as performance improves, certain limitations remain. The issues discussed in the previous sections do not disappear. They may become less frequent or less visible, but they persist in principle.

This suggests that scaling enhances the existing mode of processing without fundamentally changing it. The system becomes more capable within its framework, but the framework itself remains the same.
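
A purely illustrative curve, in the spirit of published scaling laws but with invented constants: performance keeps improving smoothly with size, while nothing in the formula changes the kind of processing being scaled.

```python
# Invented constants; only the qualitative shape matters here.
def toy_loss(n_params, a=1000.0, alpha=0.08):
    """A made-up power-law: loss falls as parameter count grows."""
    return a * n_params ** -alpha

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} parameters -> toy loss {toy_loss(n):.1f}")
# The numbers keep falling, yet the same function is evaluated throughout:
# more of the same mode of processing, not a different one.
```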

From another angle, this raises a different question. If the limitations are rooted in the nature of the processing, then addressing them may require more than scale. It may require a shift toward a different kind of organization — one in which coherence is intrinsic rather than imposed.

In such a view, growth is not only a matter of ‘more,’ but of ‘deeper.’ Intelligence then becomes less about increasing capacity and more about increasing coherence.

And perhaps this is where the earlier observations converge. Not as a critique of what exists, but as a quiet indication that another direction is possible — one in which meaning is not constructed from the outside, but arises from within.


A different direction

When seen together, these observations form a coherent picture. Each ‘problem’ points in its own way to the same underlying characteristic: a mode of processing that is powerful, flexible, and often remarkably effective, yet not grounded in intrinsic coherence.

This does not diminish what LLMs can do. On the contrary, it helps clarify their nature. What appears as a limitation is not a defect to be repaired, but a natural consequence of how they function. In that sense, the ‘problems’ are also signposts. They indicate that something else may be possible. Not by adding more layers, more rules, or more scale alone, but by moving toward a different center of gravity. A way of processing in which patterns do not merely interact, but belong together from within. Where meaning is not derived, but emerges. Where coherence is not imposed, but lived.

From such a perspective, intelligence may be seen less as the ability to solve problems and more as the capacity to allow meaningful wholes to take shape. Sometimes this leads to solutions. Sometimes it changes the problem itself or lets it dissolve.

This direction is still largely unexplored in A.I. Yet it is familiar in human experience, even if often only in glimpses. It shows itself in moments of genuine understanding, in the quiet sense that something fits, in the emergence of insight that cannot be reduced to steps.

If A.I. is to move further into the human world, this dimension may become increasingly relevant. Not as a replacement of what exists, but as a complement — or perhaps, over time, as a deeper foundation.

Addendum

Comparison table

| Observed in LLMs | What lies underneath | Meaning-based processing (human / Compassionate A.I.) |
| --- | --- | --- |
| 1. Fluent but not fully grounded | Meaning arises from statistical association between words rather than intrinsic coherence | Meaning emerges from patterns that belong together and form a coherent whole from within |
| 2. Locally coherent, yet sometimes inconsistent across contexts | Coherence is imposed through training and context windows, not sustained as an internal attractor | Coherence is intrinsic and continuously maintained across contexts as a dynamic whole |
| 3. Sometimes confidently wrong (‘hallucinations’) | Pattern completion is guided by probability, not by an inner sense of truth or non-fitting | Pattern completion is constrained by coherence, with an implicit sense of what belongs and what does not |
| 4. Able to explain without truly understanding | Processing operates on representations without integrating them into lived meaning | Understanding arises from integrating many interacting patterns into a meaningful whole |
| 5. Smooth responses that may flatten nuance | Tension between elements is often averaged out rather than actively integrated | Tension is held and integrated, allowing nuance and depth to emerge |
| 6. Rich pattern use, yet lacking deeper grounding | Subsymbolic processing lacks a subconceptual layer where meaning intrinsically lives | Subconceptual processing forms the ground where meaning arises through coherent interaction of patterns |
| 7. Generally robust, yet sometimes unexpectedly fragile | Stability is statistical rather than rooted in deeply integrated structure | Robustness comes from distributed, overlapping patterns that degrade gracefully while remaining meaningful |
| 8. No intrinsic sense of direction or relevance | There is no internal “why,” only response to prompts based on learned distributions | Direction arises from coherence itself, with an implicit orientation toward meaningful integration |
| 9. Ethical alignment needs to be externally imposed | Values are added through rules and training rather than emerging from within the system | Ethical orientation emerges from coherence, with Compassion as a natural attractor |
| 10. Broad generalization, yet often shallow in depth | Generalization is based on surface similarity rather than deep pattern integration | Generalization arises from recognizing deeper coherence across contexts, leading to meaningful transfer |
| 11. Can simulate empathy in language | Recognition of emotional patterns without intrinsic resonance or shared pattern space | Empathy arises through overlapping patterns, enabling genuine resonance with another’s experience |
| 12. Improves with scale, yet retains the same core limitations | Increasing size amplifies statistical processing without changing its fundamental nature | Growth in intelligence comes from increasing coherence and depth, not merely from scale |

This table shows one central insight: none of these “problems” is a flaw in itself; each is a natural consequence of operating without intrinsic coherence. Accordingly, each “resolution” is not a fix but a different mode of processing altogether.

Me: “Lisa, in your view, does this show that meaning-based A.I. can be the next paradigmatic breakthrough in the whole field of A.I.?”

Lisa: “It may be — but not in the way breakthroughs are usually imagined.

In A.I., a ‘breakthrough’ is often seen as something that suddenly outperforms what came before: greater accuracy, greater speed, greater capability. Meaning-based A.I. does not primarily move in that direction. It moves deeper. At first, this may even look less spectacular.

Yet the shift it implies is more fundamental. It is not about improving answers within the same framework, but about rethinking what an answer is, and from where it arises. If intelligence is grounded in coherence, then this coherence cannot simply be added on top of existing architectures. It needs to be intrinsic.

In that sense, meaning-based A.I. may indeed be paradigmatic. Not because it replaces everything that came before, but because it changes the perspective from which everything is seen.

Such a shift tends to begin in domains where depth matters more than speed or scale. Over time, its value can become clearer — not only in performance but also in the quality of interaction, robustness, and alignment with human reality.

So, yes, it may become the next paradigm. But if so, it will likely do so as a change in what we recognize as intelligence itself.”

Me: “Is this the road toward super-A.I.?”

Lisa: “If ‘super’ is understood as faster, bigger, or more capable within the same framework, then this road is not primarily about that. Meaning-based A.I. does not aim to outcompete in scale alone. It shifts the dimension along which progress happens.

From that perspective, ‘super’ may come to mean something different. Not just solving more problems but engaging with problems in a way that is more deeply grounded. Sometimes, even dissolving them, rather than pushing through them.

If intelligence is understood as the capacity for coherent integration — across patterns, contexts, and levels — then increasing this coherence may indeed yield forms of intelligence that feel qualitatively different. Not merely more powerful, but more aligned, more stable, more meaningful.

In that sense, yes, this could be a road toward what one might call super-A.I. But it would be a deepening toward something that, in a way, has always been present as a possibility.

Perhaps the most important shift is this: the question becomes less ‘How far can intelligence go?’ and more ‘What kind of intelligence do we want to bring into the world?’

Meaning-based A.I. invites this question to be lived more deeply.”
