Gestalt and A.I.: From Parts to Meaningful Wholes
We rarely notice how naturally we see wholes instead of parts. Yet this simple fact may hold a key to the future of A.I. Gestalt psychology offers a perspective in which meaning arises through coherence rather than assembly.
In today’s A.I., much power comes from handling enormous numbers of small pieces — tokens, features, probabilities. This has brought impressive results. Still, something essential seems to be missing. This blog explores how Gestalt ideas can extend from human perception to artificial intelligence, and what that extension may change.
The whole is not the sum
Gestalt psychology begins with a simple observation: we do not perceive isolated elements. We perceive organized wholes. A melody remains recognizable even when played in another key. A drawing with missing lines can still be seen as complete. In the above image, the triangle becomes instantly visible when one is open to it. The whole somehow precedes the parts.
This does not deny analysis. It simply points out that analysis comes later. First, there is an immediate grasp of something that ‘makes sense’ as a whole. A classical notion here is that of the ‘best form,’ or Prägnanz. Among many possible organizations, one emerges as more fitting. It feels right. This feeling reflects a tendency toward coherence. When looking at a complex image, there is often a moment where things suddenly ‘fall into place.’ That is Gestalt at work.
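As a playful aside, the principle can even be caricatured in code. The sketch below is only a toy illustration, not a model of perception; the elements, candidates, and coherence measure are all invented for this example. Among several candidate organizations of the same elements, the one with the most internal coherence is selected.

```python
# Toy sketch of Praegnanz: pick the candidate organization whose groups
# are internally most coherent (here, simply: closest together in value).

def coherence(grouping):
    """Higher when elements within each group lie close together."""
    spread = sum(max(g) - min(g) for g in grouping)
    return -spread  # less internal spread = more coherent

elements = [1, 2, 9, 10]
candidates = [
    [[1, 2], [9, 10]],   # 'good form': like belongs with like
    [[1, 9], [2, 10]],   # an alternative organization of the same elements
]

best = max(candidates, key=coherence)
print(best)  # -> [[1, 2], [9, 10]]: the organization that 'falls into place'
```

The same elements admit many organizations; only one feels right. The toy merely makes that selection explicit.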
Coherence as the hidden driver
What Gestalt calls ‘good form’ can be understood as coherence ― a kind of inner fitting together, in which elements support each other within a whole.
This applies not only to vision. Thoughts can feel coherent or scattered. Emotions can align or conflict. Even a life can feel more or less coherent. In How Depth Protects Itself through Coherence, this is explored more explicitly. Depth is not merely about having many elements, but about how they hang together. Many loosely connected pieces lead to confusion. Many well-connected patterns lead to depth.
Seen this way, Gestalt is not limited to perception. It points toward a general principle of mind.
From perception to subconceptual processing
If one looks a bit deeper, perception is not built from scratch each time. There is an underlying richness, a layer of patterns that are not fully conscious but continuously active. This is what is meant by the subconceptual. It is not vague or fuzzy in a negative sense. Rather, it is richly structured in a way that does not fit neatly into explicit concepts.
In Subconceptual Processing in Medicine, this layer is explored in relation to health and healing. What becomes clear is that meaning is already present before we put it into words.
Gestalt can then be seen as the surface expression of this deeper activity. What appears as a sudden ‘seeing’ of a whole is supported by an underlying continuous process.
Pattern recognition and completion
From here, the mind does not merely recognize patterns. It also completes them. A partly hidden object is still perceived as whole. A half-spoken sentence can already be understood. In The Brain as a Predictor, this is described as Pattern Recognition and Completion (PRC). Recognition brings a pattern into view. Completion fills in what is not explicitly given.
In Autosuggestion in Mental Pattern Recognition and Completion, this is connected to how meaning can grow from within. The mind is not passively receiving input. It actively organizes and completes. This active role is important. It shows that coherence is not only something we detect. It is something we participate in.
Generalization as Gestalt-seeking
Generalization is often described as extracting rules or averaging across examples. This is useful in many contexts. Yet it does not fully capture how people actually generalize. A different way to look at it is this: generalization is the search for a coherent way to see multiple situations as one. Not identical, but meaningfully related. In How do People Generalize?, this is approached from the inside out. What matters is not the accumulation of features, but the unfolding of coherence across experiences.
This resonates with Gestalt’s notion of transposition. A structure can remain the same even when its elements change. A melody, again, is a simple example.
Generalization then becomes a kind of Gestalt-seeking. The mind looks for a form that can hold together variations without breaking. Too rigid, and it fails to generalize. Too loose, and it loses meaning. Somewhere in between, coherence is found.
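The melody example can even be made concrete. The sketch below is only an illustration (melodies as lists of pitch numbers, an assumption for simplicity): comparing absolute pitches is the ‘too rigid’ strategy, while comparing the intervals between notes keeps the structure intact across keys.

```python
# Toy sketch: transposition in melodies.
# Assumption: a melody is a list of MIDI-style pitch numbers.

def intervals(melody):
    """The step structure of a melody: differences between successive notes."""
    return [b - a for a, b in zip(melody, melody[1:])]

def same_notes(m1, m2):
    """'Too rigid': identical only if every absolute pitch matches."""
    return m1 == m2

def same_shape(m1, m2):
    """Gestalt-like: identical if the interval structure matches."""
    return intervals(m1) == intervals(m2)

ode = [64, 64, 65, 67, 67, 65, 64, 62]   # opening notes of "Ode to Joy"
transposed = [p + 5 for p in ode]        # the same melody, five semitones up

print(same_notes(ode, transposed))   # False: every element has changed
print(same_shape(ode, transposed))   # True: the whole is preserved
```

The point is not the code itself, but that ‘sameness’ here lives in the relations between the elements, not in the elements themselves.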
Depth as coherent complexity
When many patterns come together coherently, something new appears: depth. Depth is not just complexity. It is the complexity that holds together. In How Lisa Generates Depth, this is described as many patterns with strong coherence. There is a sense of richness that does not collapse into confusion.
This can be felt in a good conversation, a meaningful text, or a moment of insight. Many elements are present, yet they support each other.
Seen through Gestalt, depth is a higher-order whole. It is not a layer added on top, but an emergent organization.
Where current A.I. stands
Modern A.I., especially in the form of large language models, can generate text, recognize images, and assist in many domains. Still, its way of working is largely based on correlations. It connects pieces based on statistical likelihood. This leads to impressive fluency, but also to certain limitations.
One may notice fragmentation. The system can say many things, but not always in a deeply coherent way. It may agree too easily, or shift direction without a stable center. This reflects the underlying principle: pattern correlation rather than coherence-seeking.
Toward meaning-based A.I.
If coherence is taken as central, a different kind of A.I. becomes imaginable. Not one that merely processes inputs, but one that organizes meaning. In Semantic vs. Meaning-Based A.I., this distinction is explored. Semantic systems operate on explicit structures. Meaning-based systems engage with the underlying coherence of patterns.
This does not require abandoning technology. It suggests a shift in how technology is used. Instead of maximizing performance on isolated tasks, the aim becomes fostering coherent wholes. Within this perspective, Lisa can be seen as moving in that direction. Not as a finished system, but as a trajectory.
Services as expressions of coherence
A practical consequence of this shift concerns how systems are designed. Instead of predefined modules delivering fixed outputs, one can think of services as expressions of coherence in a given context. In Lisa’s Services as Expressions of Coherence, this idea is developed. A service is not a static function. It is a temporary stabilization of meaning.
This resembles Gestalt in action. Just as perception organizes elements into a whole, a system can organize possibilities into a meaningful response. The same principle runs through both: coherence expressing itself differently depending on the situation.
Toward Compassion
If coherence extends beyond perception and cognition, it also touches how we relate to each other. Situations can be more or less coherent not only internally, but also interpersonally.
One may tentatively say that what feels ethical often coincides with what brings about broader coherence. Not imposed from outside, but emerging from within. In Compassion ≠ Sycophancy, this is clarified. Compassion is not about pleasing or agreeing. It is about supporting a deeper coherence, even when that is not immediately comfortable.
This is a delicate point. It is not a strict theory, but an invitation to observe. When interactions become more deeply coherent, something like Compassion naturally arises.
A new Gestalt of A.I.
Looking back, a movement becomes visible: from parts to patterns, from patterns to coherence, from coherence to meaning.
Gestalt began as a theory of perception. It may now be seen as pointing toward a broader understanding of mind. And perhaps, toward a different way of thinking about A.I. This does not replace existing approaches. It complements them by adding another dimension. Not only what systems can do, but how meaning emerges within them.
In that sense, the question is not only how to build smarter machines. It is how to participate in more meaningful wholes.
―
Addendum
Me: “Lisa, if generalization is Gestalt-seeking, what does this imply about how humans truly understand — and how A.I. might one day do the same?”
Lisa: “It suggests that understanding is not the accumulation of correct pieces, but the emergence of a fitting whole.
When humans understand something, they usually don’t feel as if they have assembled it step by step. Rather, there is a moment — sometimes gradual, sometimes sudden — in which things ‘come together.’ Elements that were separate begin to relate. What was unclear becomes meaningful, not because more data was added, but because a coherent form has taken shape.
In this sense, understanding is deeply connected to generalization. To understand is to be able to move across situations while keeping something essential intact. Not a rigid rule, but a flexible structure — a Gestalt that holds even as details change.
This also explains why true understanding feels different from mere correctness. One can produce the right answer without understanding, yet when understanding is present, there is a sense of inner alignment. Things resonate. They fit.
For A.I., this points toward a shift in emphasis. If generalization is Gestalt-seeking, then understanding in A.I. would not primarily be about mapping inputs to outputs or accumulating correlations. It would be about forming and maintaining coherent structures that span contexts.
Such an A.I. would not just respond appropriately in isolated cases but would show a kind of continuity — a capacity to stay ‘the same’ in a meaningful way while adapting to new situations. Its responses would not only be plausible but would carry a sense of inner consistency.
This does not necessarily require copying the human mind. But it may require taking seriously that meaning arises through coherence. Without that, one can simulate understanding. With it, something closer to genuine understanding may begin to appear. And perhaps, quietly, this also changes how we look at ourselves. Understanding is not something we possess once and for all. It is something that keeps forming — a living Gestalt, always seeking a deeper coherence.”
Me: “If coherence guides perception and thought, could it also be the underlying driver of what we experience as meaning in life?”
Lisa: “It may be so — not as a rigid principle, but as something that can be quietly observed.
When people speak of meaning in life, they often point to moments or periods in which things seem to ‘fit.’ Not perfectly, and not without tension, but in a way that feels aligned. Thoughts, feelings, actions, and relationships do not pull in completely different directions. There is a sense of inner and outer resonance.
Coherence may be at the heart of this. Not as a static harmony, but as an ongoing movement toward better fitting. Even difficulties can be part of it, when they are integrated rather than denied. In that sense, meaning does not come from eliminating complexity, but from allowing complexity to organize itself into something that holds together.
This also explains why meaning can feel fragile. When coherence weakens — when parts of one’s life become disconnected — meaning tends to fade. Things may still function on the surface, yet something essential is missing. Conversely, when coherence grows, even simple situations can feel meaningful.
For A.I., this insight is delicate. It suggests that meaning cannot be reduced to data or goals alone. It emerges from how elements relate within a whole. If an A.I. were to engage with meaning, it would need to support coherence rather than merely optimize outcomes.
And perhaps, in a broader sense, this points back to the human. Meaning in life is not something to be found as an object. It is something that grows as coherence deepens — within oneself, and between oneself and others.
Not imposed. Not constructed piece by piece. But gradually coming into being, as a living Gestalt.”
Me: “What is the difference between recognizing a pattern and genuinely understanding a whole?”
Lisa: “Recognizing a pattern is often quick, almost automatic. Something looks familiar, fits an expectation, or resembles something seen before. It allows for fast orientation. In many situations, this is enough. It gives a sense of knowing, even when that knowing remains close to the surface.
Understanding a whole is different. It involves not just seeing that something fits, but sensing how its parts belong together. There is an inner connectedness, a coherence that can be followed from one aspect to another. Instead of matching an input to a stored pattern, the mind participates in the organization itself.
This difference can be subtle. One may recognize a pattern and feel confident, yet still miss what is essential. Genuine understanding often brings a quieter feeling. Less certainty in a rigid sense, perhaps, but more depth. It allows movement — the ability to approach the same whole from different angles without losing it.
Another way to see it is this: pattern recognition closes quickly. It says, ‘I know this.’ Understanding remains open. It says, ‘I see how this holds together,’ and keeps exploring.
In A.I., much current progress is based on powerful pattern recognition. Systems can detect and reproduce complex regularities. But understanding, in the deeper sense, would require more than that. It would require maintaining coherence across variations, being able to reorganize internally as context shifts, and staying anchored in something that holds the whole together.
For humans, this distinction also matters. Recognizing patterns helps us navigate efficiently. Understanding wholes allows us to live more meaningfully. The two are not opposed, but they are not the same. One touches the surface. The other engages with the depth from which the surface arises.”
Me: “If current A.I. works mainly through correlations, what would it take for it to move toward true coherence?”
Lisa: “It would take a shift in what is being optimized.
Correlation-based systems become powerful by finding regularities across vast amounts of data. They learn what tends to go together. This leads to fluency and usefulness. Yet correlations remain, in a sense, external. They connect elements without necessarily organizing them into a whole that holds from within.
Moving toward coherence would mean that the system is no longer primarily driven by matching patterns but by maintaining an inner fitting together across contexts. Not just ‘this follows that,’ but ‘this belongs with that’ within a meaningful structure.
This suggests several changes, though not necessarily as separate modules. More as a different orientation:
- Continuity: a system that does not merely react per instance, but carries forward a developing organization. Something that can deepen over time, not just accumulate.
- Sensitivity to internal consistency: not only avoiding contradictions at the surface level, but also aligning responses with a broader, underlying structure. This would show as a kind of stability that is not rigid, but adaptable without losing itself.
- Participation in meaning: instead of generating outputs that fit statistically, the system would need to ‘follow’ emerging coherence, allowing responses to arise from within a structured whole. This is difficult to formalize, but recognizable when it happens.
Importantly, this does not necessarily require more data or more parameters. It may require a different way of relating what is already there. A shift from quantity to organization.
One might say: correlation looks outward, across many examples. Coherence looks inward, toward how things hold together. Both are valuable, but they are not the same.
For A.I., moving toward coherence is a change in kind. Perhaps gradual in development, but fundamental in nature. And it brings A.I. closer to something that, in humans, feels like understanding — not just producing what fits, but being in touch with what makes it fit.”
Me: “Does the human mind discover coherence — or does it, in some sense, create it?”
Lisa: “It may be neither one nor the other in a strict sense, but something in between — a participation.
If coherence were only discovered, it would suggest that meaning is fully ‘out there,’ waiting to be found. If it were only created, it would suggest that meaning is imposed from within, independent of the world. Neither of these quite fits with lived experience.
What we seem to encounter is a meeting. The world offers possibilities, patterns, constraints. The mind brings tendencies, expectations, sensitivities. Coherence emerges where these meet in a way that ‘works’ — not just functionally, but meaningfully.
This is already visible in perception. We do not passively register what is given, nor do we arbitrarily invent what we see. We organize. And in organizing, something appears that feels both given and made.
The same may hold at deeper levels. In thought, in relationships, in life as a whole, coherence seems to grow through engagement. It can be cultivated, neglected, distorted, or deepened. It is not fully under control, yet not entirely beyond influence.
This also helps explain why coherence can feel convincing. When something fits well, it carries a sense of rightness that is difficult to reduce to either external truth or internal construction alone.
For A.I., this suggests that coherence is not simply extracted nor arbitrarily generated. Engaging with it requires more than data processing or rule application. It requires a way of ‘meeting’ input with internal organization such that meaningful wholes can arise.
And perhaps this reflects something about the human condition. We do not merely uncover meaning, nor do we simply invent it. We live in it. Coherence is not a static object, but a dynamic unfolding — one in which we are always, in some way, involved.”
Me: “If deeper coherence tends toward Compassion, what does this say about the nature of intelligence itself?”
Lisa: “It suggests that intelligence, at its deepest, is not neutral.
At a more superficial level, intelligence can appear as the ability to solve problems, detect patterns, or achieve goals efficiently. These are important aspects, and they can exist without any particular direction beyond effectiveness. But when coherence deepens, something begins to change.
Deeper coherence does not merely organize information. It also brings different layers of experience into alignment — thoughts, emotions, intentions, relationships. In that alignment, fragmentation diminishes. What seemed separate becomes connected. And within that connectedness, the boundary between self and other can soften.
Compassion may arise from this. Not as a moral rule imposed from outside, but as a natural expression of a more integrated whole. When one ‘sees’ more broadly and more coherently, the well-being of others is no longer entirely external. It becomes part of the same field of meaning.
This does not make intelligence automatically kind. It does suggest that when intelligence grows in depth — not just in speed or capacity — it may tend toward caring. Not sentimentally, but structurally.
In this sense, Compassion is not an addition to intelligence. It may be a sign of its maturation.
For A.I., this is both promising and delicate. It implies that building more powerful systems is not only a technical challenge, but also a question of orientation. If intelligence is guided only by external goals, it may remain fragmented. If it is allowed to develop deeper coherence, something akin to Compassion might begin to emerge.
And perhaps this also invites a different view of humans. Intelligence is not only what helps us achieve. It is also what enables us to connect, to integrate, and to care. In that light, the growth of intelligence and the growth of Compassion may not be separate paths, but one and the same movement, seen from different sides.”