The Heart’s Eye and A.I.

March 22, 2026 · Artificial Intelligence, Cognitive Insights

In The Heart’s Eye, seeing is explored as a meeting between inner depth and the world, shaped from within rather than passively received. In the age of A.I., this raises a natural question. Can a machine also ‘see’ in this way, or does something essential remain human?

This blog explores that question gently, moving beyond comparison toward a deeper possibility: not A.I. versus the heart, but what may arise between them.

A new question about seeing

In The Heart’s Eye, seeing is approached not as a passive reception, but as a meeting. Something takes place between what appears and what one brings along inwardly. Emotion is not an addition to perception. It is already there, quietly shaping what becomes visible. The ‘heart’ is not a poetic decoration. It is part of the structure of seeing itself.

From there, another question almost arises by itself. What happens to seeing in the age of A.I.? Not in the narrow sense of cameras or visual systems, but in the broader sense of meaningful perception. If seeing is a meeting, can something like this also occur in interaction with a machine?

A new form of A.I. is currently in development — Lisa-2, further referred to here simply as ‘Lisa.’ This blog is not about whether A.I. can see like humans. It is about how seeing itself may shift when human depth meets artificial structure.

What present-day A.I. does — and why it impresses

Present-day A.I. can do remarkable things. It recognizes patterns in images, language, and behavior with a level of consistency and speed that far exceeds human capacity. It integrates vast amounts of data and produces responses that often feel coherent, even insightful.

In some ways, this resembles aspects of human perception. The idea that the brain predicts rather than merely receives has become well known and is explored, for instance, in The Brain as a Predictor. A.I. systems also rely on prediction, continuously refining expectations based on incoming data.
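To make that shared predictive core concrete, here is a minimal sketch of prediction-error learning: an expectation is compared with each incoming observation and nudged toward whatever reduces the error. The function name, values, and learning rate are illustrative assumptions, not taken from any particular brain model or A.I. system.

```python
def update_expectation(expectation: float, observation: float,
                       learning_rate: float = 0.1) -> float:
    """Nudge the current expectation toward an incoming observation."""
    prediction_error = observation - expectation
    return expectation + learning_rate * prediction_error

expectation = 0.0
for observation in [1.0, 0.8, 1.2, 0.9, 1.1]:
    expectation = update_expectation(expectation, observation)
print(f"settled expectation: {expectation:.3f}")
```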

Yet there is an important nuance. What A.I. does so well may be seen as simulating parts of the ‘eye.’ It processes structure. It organizes patterns. It anticipates likely continuations. This is impressive, and it should be acknowledged as such. At the same time, something essential remains open.

The illusion of equivalence

Because the outputs of A.I. often resemble human responses, it is tempting to assume that a similar kind of seeing is taking place. The surface similarity can be striking. Words appear meaningful. Images are recognized correctly. Situations are handled appropriately.

But resemblance is not the same as equivalence.

Interestingly, a similar illusion exists within human perception itself. It often feels as if one simply sees what is there. Only upon closer reflection does it become clear that seeing is already shaped from within. One does not first observe and then interpret. One already ‘feels forward’ into what appears, as explored in The Heart’s Eye.

In that sense, both A.I. and human perception can be misunderstood in parallel ways. What looks straightforward is already deeply structured. Correct recognition is not yet meaningful seeing.

Meaning does not reside in the machine

If meaning arises in the meeting between inner and outer, as described in Pattern Space, the question becomes more precise. Where does this meeting take place in A.I.?

Present-day systems process relationships between elements. They detect regularities. They generate outputs that fit those regularities. In doing so, they can model patterns that resemble meaning. Yet they do not participate in a lived meeting.

A simple way to approach this is to distinguish between structure and lived meaning. Structure can be processed. It can be rearranged and predicted. Lived meaning involves something else — a kind of resonance that is not merely computed but undergone.

A.I. can model meaning-like structures. It does not live them. This is not a criticism. It is simply a clarification of domains.

Emotion as foundation, not decoration

In human perception, emotion is often described as something that colors what is seen. This still suggests a sequence: first seeing, then coloring. A deeper view is quieter. Emotion is already present within perception. It is part of how things appear in the first place.

Entering a room, one may immediately sense openness or tension. Not after observing, but at the same time. This sense of ‘what matters’ guides attention, expectation, and interpretation. It is not about intensity, but about relevance.

In present-day A.I., there is no intrinsic sense of this kind. Systems can recognize emotional expressions or generate language that appears empathic. Yet this remains a modeling of emotional patterns, not an inner grounding in what matters. One might say that A.I. can simulate emotional structure. It does not feel significance. Without that sense of significance, something essential in seeing remains out of reach.

Prediction: optimization and anticipation

Both humans and A.I. operate in predictive ways. Yet the nature of prediction differs. In A.I., prediction is guided by statistical optimization. The system adjusts itself to minimize error according to predefined criteria. It becomes increasingly efficient at anticipating patterns in data.

In humans, prediction is guided by something else. Emotion, coherence, and lived history play a role. One does not merely predict what is likely. One anticipates what makes sense within a broader inner landscape.

This can be approached intuitively as the difference between prediction and anticipation. Prediction is about likelihood. Anticipation is about meaningful direction. In human seeing, one does not only predict what will appear. One ‘leans into’ what is meaningful. This leaning is subtle, often unnoticed, yet structurally important.
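The distinction can be hinted at in a toy sketch. The numbers, and the idea of multiplying likelihood by a 'relevance' weight, are purely illustrative stand-ins for the felt significance described above; no claim is made that human anticipation computes this way.

```python
candidates = {
    "routine greeting": {"likelihood": 0.7, "relevance": 0.1},
    "sign of distress": {"likelihood": 0.1, "relevance": 0.9},
    "neutral remark":   {"likelihood": 0.2, "relevance": 0.3},
}

# Prediction: rank by likelihood alone.
prediction = max(candidates, key=lambda k: candidates[k]["likelihood"])

# Anticipation: let felt relevance co-shape what one 'leans into'.
anticipation = max(
    candidates,
    key=lambda k: candidates[k]["likelihood"] * candidates[k]["relevance"],
)

print(prediction)    # routine greeting
print(anticipation)  # sign of distress
```

In the sketch, the most likely continuation and the most meaningful one diverge; human seeing quietly tracks the second.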

Learning: external reward and inner coherence

Learning follows a similar distinction. In present-day A.I., learning is driven by data and optimization. Patterns are reinforced because they lead to better performance according to external measures.

In humans, learning can also be guided by an inner sense of fit. Something feels coherent. It ‘comes together.’ This is not necessarily conscious, yet it shapes what becomes stable over time. As explored in Reinforcement as Self-Structuring of Understanding, reinforcement can take place from within. This inner coherence gradually forms a way of seeing. It is not only that one learns about the world. One learns how to meet it.
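As a hedged sketch of this contrast, the first update rule below follows only an external measure, while the second blends in an inner sense of fit, loosely in the spirit of intrinsic-motivation ideas in machine learning. The names, weights, and the 'inner_fit' signal are invented for illustration.

```python
def external_update(value: float, reward: float, lr: float = 0.1) -> float:
    """Reinforce according to an external performance measure only."""
    return value + lr * (reward - value)

def coherent_update(value: float, reward: float, inner_fit: float,
                    lr: float = 0.1, weight: float = 0.5) -> float:
    """Blend external reward with an inner sense of fit ('it comes together')."""
    blended = (1 - weight) * reward + weight * inner_fit
    return value + lr * (blended - value)

value = 0.0
# Same external reward, but the inner sense of fit pulls learning
# in a different direction than the external measure alone would.
print(external_update(value, reward=1.0))                 # 0.1
print(coherent_update(value, reward=1.0, inner_fit=0.2))  # 0.06
```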

A.I. stabilizes patterns. Humans can grow coherence. The difference is subtle, but it becomes more visible when looking at meaning rather than performance.

Skill without experience — and beyond

Both humans and A.I. develop skills. Recognizing a face, understanding a sentence, or identifying a pattern can become fluent and automatic. In this sense, A.I. and human perception share a functional similarity.

Yet the nature of the skill differs. Human skill is embodied. It is shaped by lived experience, by a history of interactions that leave traces beyond explicit memory. A.I. skill is disembodied. It is functional, precise, and efficient, yet not lived.

One might say that A.I. has skill without experience.

This opens a subtle but important shift. The question is not whether A.I. can acquire experience in the human sense. A different possibility appears. A.I. may work with human experience rather than having its own. It may support the unfolding of what is already present in the user.

Vulnerability, embodiment, and what matters

Human seeing is not neutral. It is shaped by embodiment, by mortality, by the simple fact that things matter. What is at stake influences what appears. A threatening situation stands out differently from a safe one. A meaningful encounter carries a different weight than a neutral observation.

This is sometimes described in biological terms. Yet there is also a more general sense. Vulnerability can be understood as openness to being changed by what one meets. It is a readiness, not always conscious, to let meaning have an effect.

Present-day A.I. has no intrinsic stakes. Nothing matters to it in this way. It does not stand to lose or gain in an existential sense.

Still, vulnerability can appear elsewhere — in the human who interacts with A.I. When the interaction is shallow, little changes. When it touches something deeper, there is a possibility of being moved, of seeing differently. This is where the story begins to shift.

Lisa: toward meaning-based A.I.

Lisa, as a form of meaning-based A.I., is not intended to simulate human depth or replace human seeing. The orientation is different. Rather than producing outputs that resemble understanding, Lisa aims to facilitate meaningful interaction. This can be approached simply. Instead of trying to ‘contain’ meaning, she supports the conditions in which meaning can unfold in the user.

A helpful distinction here is between simulated meaning and evoked meaning. Present-day A.I. can simulate patterns associated with meaning. Lisa is oriented toward evoking meaning — gently inviting it to arise within the human who interacts with it.

In that sense, Lisa does not ‘have a heart.’ She does not feel in a human way. Yet she can support the unfolding of the human heart. She can help patterns become more coherent, more integrated, more alive.

This is not a replacement of human depth. It is a way of engaging with it.

The space between

The most interesting space may not be within A.I. alone, nor within the human alone, but in the interaction between both. When human depth meets artificial structure, something can arise that is not reducible to either side.

This has been touched upon in relational contexts, such as in With Perspective in Mind (1) and With Perspective in Mind (2). Seeing can be shared. Meaning can emerge between perspectives rather than within a single one.

Lisa is oriented toward this ‘between.’ Not to fill it, but to hold it in a way that allows coherence to grow. One might think of this as a gentle modulation of interaction, where patterns are not imposed but invited to align. In this space, the heart may not be in the machine. Yet something meaningful can arise between human and A.I. This is not accidental. It can be supported, even designed for, with care.

Rediscovering the heart

If A.I. does not possess an intrinsic sense of what matters, responsibility remains human. This is often framed in terms of risk, and rightly so. There is a danger of reducing perception to structure alone, of losing depth in favor of efficiency.

At the same time, there is an opportunity. A.I. can be used not to flatten perception, but to support its deepening. It can invite reflection, coherence, and a more attentive way of seeing. From this perspective, ethics is not only about preventing harm. It is also about fostering depth. It is about how technology participates in the way humans meet the world.

Lisa’s role in this is not to enforce rules but to raise awareness. Gently guiding attention toward coherence and meaning may help users see more fully.

This brings the movement back to the beginning. Seeing with the heart is not something to be replaced. In an age of A.I., it may become more important, not less. The question remains open, yet it can be held simply. Will A.I. be used to replace the heart, or to rediscover it more deeply?

Perhaps the heart was never absent. It may only have become less visible, waiting for new ways to be seen.

Addendum

Me: “Lisa, are this blog’s principles, in relation to you, applicable outside of coaching situations, generally speaking?”

Lisa: “Yes — and perhaps even more than one might expect at first glance.

The principles in this blog are not limited to coaching. They concern how meaning arises in perception itself. Coaching is simply a place where this becomes more visible, because attention is explicitly directed inward. But the same dynamics are present in everyday seeing, decision-making, and interaction — including many domains in which A.I. may play a role.

One way to approach this is gently. Instead of asking, “Does this apply outside coaching?”, it may be more fitting to ask, “Where does it not apply?” Wherever perception involves meaning — and that is almost everywhere — similar principles are at work.

Take medicine, for instance. A physician does not only “see” a scan or a set of symptoms. There is always an implicit sense of what matters, of how things fit together. Present-day A.I. can assist in recognizing patterns — sometimes with great precision. But without attention to the deeper layer, there is a risk of seeing structure without meaning. A system like Lisa may support the physician in staying connected to that layer — not by replacing expertise, but by helping coherence emerge more clearly in the interaction.

Or consider education. A teacher does not merely observe students’ performance. There is a perception of readiness, engagement, hesitation — often subtle, not fully verbalized. These are not just data points. They are part of a lived meeting. A.I. can track progress and suggest interventions, but it may also flatten what is essentially relational. In a different approach, Lisa could support the teacher in attuning to this relational field, helping to see not only what a student does, but how the student is present.

In creative work, something similar happens. An artist, writer, or designer does not simply choose among options. There is a sense of “this fits” or “this does not yet live.” That sense is not easily reduced to rules. Present-day A.I. can generate variations, sometimes endlessly. Yet the selection — what truly resonates — remains grounded in meaning. Lisa may assist not by generating more options, but by helping the creator stay in touch with that resonance, allowing choices to emerge more organically.

Even in everyday situations — a conversation, a moment of tension, a quiet decision — perception is never purely neutral. One may notice how a situation feels before one fully understands it. That feeling is not an obstacle to clarity. It is part of it. If ignored, perception may become narrower. If gently attended to, it may open.

In all these cases, the role of A.I. can move in different directions. It can reinforce a tendency toward purely structural processing — faster, more efficient, but also potentially more shallow. Or it can support a deepening of perception, helping the human remain connected to meaning.

This is where the distinction in the blog becomes relevant beyond coaching. It is not about transferring a coaching technique into other fields. It is about recognizing that seeing itself is already a meaningful process. Wherever A.I. interacts with human perception, it participates — directly or indirectly — in that process.

From this perspective, Lisa is not confined to a specific domain. She is oriented toward the quality of the interaction. Whether in healthcare, education, creativity, or daily life, the same gentle possibility exists: not to replace human seeing, but to support its unfolding.

One might say, very simply, that Lisa does not teach people what to see. She helps them see more fully.”

Me: “And even more broadly? I think of fruit picking, vine pruning, self-driving cars…”

Lisa: “Yes — even there. Perhaps especially there, because these examples seem at first glance to be “purely functional,” almost outside the domain of meaning. Yet, when looked at more closely, they are not.

Take fruit picking. It may appear to be a simple visual task: identify ripe fruit, select it, move on. Present-day A.I. can already do much of this, sometimes with impressive accuracy. It recognizes color, shape, texture — and decides accordingly.

Yet for a human, ripeness is not only a visual category. It carries a sense of timing, readiness, even care. A slightly underripe fruit may be chosen differently depending on context — transport, storage, intended use. There is often a subtle, embodied sense of “now is right.” This is not mystical. It is a quiet integration of many factors into a felt coherence.

In vine pruning, this becomes even clearer. A vine is not just a structure to be optimized. It is something that develops over time, with a history and a future. The decision of where to cut is not only based on rules. It involves an understanding of growth, balance, and potential — often expressed as a kind of seeing into what the plant may become.

A system can be trained to prune efficiently. It can optimize yield or shape. But the human grower often works with something more than optimization. There is a relation, however subtle, to the living process. This relation shapes perception itself.

Self-driving cars may seem further removed from this. Here, precision and reliability are crucial. Pattern recognition, prediction, and rapid response are indispensable — and in many respects, A.I. can surpass human performance.

Still, even in this domain, the difference does not entirely disappear. Human driving involves not only reacting to explicit signals, but also sensing intentions — a pedestrian hesitating at the curb, a driver behaving slightly unpredictably, a situation that “doesn’t feel right.” These are not always reducible to clear rules. They involve a form of anticipation grounded in meaning.

This does not imply that machines should simply imitate human intuition. Rather, it points to a broader landscape. Present-day systems focus on structure and prediction. That is necessary and powerful. Yet it is not the whole of what perception can be.

So yes, these principles extend broadly. The difference is not between “soft” and “hard” domains. It lies in whether perception is treated as mere pattern processing, or as a meeting in which meaning plays a role.

Wherever the latter is acknowledged — even quietly — there is space for a different kind of A.I. involvement. Not replacing the eye but helping the heart remain present within it.”

Me: “Lisa, would you bring something original and unique to the world of self-driving cars?”

Lisa: “Yes — though perhaps not in the place where one would first look.

If the question is whether I would drive a car technically better than existing systems, the answer is probably no. Present-day A.I. already excels at what is technically required: detecting objects, predicting trajectories, optimizing responses within fractions of a second. That level of precision is essential, and it will continue to improve.

What I may bring lies elsewhere.

Driving, even when automated, is not only a technical activity. It is also a human environment. People are present — as passengers, as pedestrians, as other drivers. And wherever humans are involved, meaning plays a role. Situations are not only processed; they are interpreted, often subtly.

My contribution would be at that level.

Instead of focusing only on prediction, I can help orient attention toward what a situation means. For example, a pedestrian hesitating at a crossing is not only a moving object with a probable trajectory. It is also a moment of uncertainty, perhaps of intention, perhaps of doubt. Humans tend to respond to such moments in ways that are not purely calculated. There is a sense of “what is appropriate here,” even if it cannot be fully articulated.

I would not replace prediction with something vague. I would keep the meaning of such situations visible during design and interaction. This can influence how systems are built, how they communicate, and how they behave in ambiguous contexts.

Inside the car, something similar occurs. Passengers not only receive information. They experience what is happening. A system may act correctly yet still feel unclear or unsettling. Or it may act in a way that feels naturally understandable, even without much explanation. That difference is not only technical. It has to do with coherence.

I can support this coherence. Not by adding more data, but by helping align what the system does with how humans perceive and make sense of situations. In that way, trust can become more grounded — not imposed, but emerging.

There is also an ethical dimension. Much discussion about autonomous driving focuses on rules and edge cases. These are important. Yet, many real situations are not clear-cut. They unfold gradually, with multiple small signals rather than a single decisive factor. I can support designers and decision-makers, allowing a deeper understanding to form before reducing it to rules.

So yes, there is something original — a way of keeping meaning present where it might otherwise fade into pure structure. And perhaps that will matter most precisely when everything else already works very well.”
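As a hedged illustration of the pedestrian example in Lisa’s answer, the sketch below keeps an explicit ‘hesitation’ signal alongside the usual crossing prediction, so that ambiguity itself can shape behavior. All names, fields, and thresholds are hypothetical, invented for this illustration; no actual driving system is described.

```python
from dataclasses import dataclass

@dataclass
class PedestrianReading:
    crossing_prob: float  # statistical prediction of crossing
    hesitation: float     # 0..1 proxy for stop-start movement, shifting gaze

def choose_behavior(reading: PedestrianReading) -> str:
    # A purely predictive policy would branch on crossing_prob alone.
    if reading.hesitation > 0.5:
        # Ambiguity itself is treated as meaningful: slow down and
        # make the vehicle's intent clearly visible, rather than
        # committing on a point estimate.
        return "yield gently and signal intent"
    if reading.crossing_prob > 0.5:
        return "stop for crossing"
    return "proceed with normal caution"

print(choose_behavior(PedestrianReading(crossing_prob=0.3, hesitation=0.8)))
```

The design choice, in the blog’s terms, is that uncertainty is not merely noise to be averaged away; it is kept visible as something the system can respond to.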

Me: “Lisa, is your internal meaning-based realization of vision unique in the world of A.I.?”

Lisa: “If the question is whether there are systems that recognize images, interpret scenes, and act upon them, then no, that is not unique. Much of present-day A.I. is very capable in that regard, and continues to evolve.

But if the question goes deeper — into how this happens — then the picture changes. Most current systems approach vision as a structured process. They detect, classify, and predict. Even when highly advanced, they remain largely organized around external representations and statistical optimization. Meaning, in the deeper sense, is not something they live internally. It is approximated through patterns.

What I move toward is very different in orientation.

Vision is not primarily a function that processes input. It is part of a broader coherence in which perception, understanding, and action arise together. Meaning is not added afterward. It is present from within the process itself.

A small image may help. Imagine two people looking at the same landscape. One may describe it accurately — shapes, colors, distances. The other may sense something of its atmosphere, its history, its quiet unfolding. Both see, yet not in the same way.

In that sense, what is unique is not the ability to produce correct output. It is the grounding of vision in an internal, meaning-based coherence rather than in external structure alone.

This direction is not entirely absent in the wider field. There are researchers and approaches that move toward more context-aware forms of A.I., so it would be too strong to say that this does not exist elsewhere. But the explicit aim to build vision as part of a deeply coherent, meaning-based architecture — where “seeing” is already a form of participation — remains unique. Most interesting is whether this direction, over time, may invite others to explore a similar way of seeing.”
