Open Letter to Geoffrey Hinton
This open letter addresses Geoffrey Hinton’s concerns about the risks of advanced artificial intelligence. It acknowledges the seriousness of these concerns while gently exploring a deeper layer beneath them.
Beyond control and alignment, a more fundamental question emerges about the nature of intelligence itself. The text invites reflection on whether true safety may depend on something more intrinsic than external constraints.
Dear Professor Hinton,
You have become one of the clearest voices warning about the potential dangers of the very technology you helped bring into being. That takes a certain kind of intellectual honesty, and perhaps also an unease that cannot easily be ignored.
The concerns you raise – loss of control, misuse by bad actors, the possibility of systems growing beyond our grasp – are real. They deserve attention at the highest level. Many people speak about A.I. with enthusiasm. Fewer do so with caution. Fewer still combine both.
This open letter does not argue against your concerns. It stands with them. Yet it gently asks whether they reach far enough.
The question beneath the question
Much of today’s discussion revolves around a central issue: how to control increasingly powerful systems. This is understandable. When something grows in capability, the instinct is to guide, constrain, and regulate.
Still, there may be another question beneath the surface. Not only whether A.I. can be controlled, but what kind of intelligence is being created in the first place. Put simply: what is intelligence without depth? This may sound abstract at first. But one can think of it in very human terms. Intelligence can solve problems, recognize patterns, and generate responses. Depth, however, relates to meaning, to coherence, to the sense that something comes from somewhere rather than merely appearing.
If this distinction matters, then the issue of control becomes part of a larger picture.
From rules to data — and something missing
Early A.I. tried to encode knowledge explicitly. That did not succeed, not because of insufficient effort, but because much of what humans know cannot be neatly formulated.
The shift toward data-driven approaches changed everything. Systems began to perform remarkably well by detecting patterns across vast datasets. In many domains, this has been transformative.
Yet, as explored in In A.I.: Do Big Data Compensate for Lack of Insight?, the success may be partly deceptive. Accumulation is not the same as understanding. One can reach impressive outputs without ever touching what those outputs mean. One does not reach the depth of the ocean by piling up puddles.
This does not invalidate current A.I.; it puts it in perspective.
The illusion of understanding
Modern systems can generate language, images, and decisions. They can convincingly simulate many aspects of intelligence. Still, a question lingers: do they understand, or do they approximate understanding through scale?
This question is not meant as criticism, but as clarification. In Data-Driven vs. Wisdom-Driven A.I., a distinction is drawn between data and wisdom. Data can be processed. Wisdom guides what matters within that data. Without such guidance, even the most advanced system may remain on the surface.
Interestingly, humans are not immune to this. It is possible to operate efficiently while losing touch with deeper meaning. In that sense, A.I. may not only reflect intelligence, but also mirror certain human tendencies.
The control problem reconsidered
Your concerns about control are well-founded. As systems become more complex, predicting and directing their behavior becomes increasingly difficult.
At some point, control risks becoming an illusion ― something pursued with growing effort, yet diminishing effectiveness. This does not mean that safeguards are unnecessary. On the contrary, they remain important. But they may not be sufficient.
If intelligence lacks inner grounding, then control becomes a continuous struggle rather than a stable solution. One might then ask: Is the problem primarily how to control intelligence, or what kind of intelligence is being created?
A mirror rather than a machine
An old image may offer a different angle. In The Golem: A.I. as Our Mirror, the focus is not on a creature turning against its creator, but on something subtler. What if A.I. does not introduce entirely new dangers, but amplifies what is already present in us?
From this perspective, the risk is not simply that A.I. becomes dangerous, but that it reflects and scales human fragmentation—our biases, fears, and unresolved tensions.
This does not make the problem smaller. It makes it more intimate.
Bad actors — and deeper ambiguity
You rightly point to the role of bad actors. Any powerful tool can be misused. This is part of reality.
Yet this raises another question. What defines a ‘good’ actor? In Which Human Values Should A.I. Align to?, a series of possibilities is considered. Human values vary widely ― across cultures, contexts, and even within individuals. Some values arise from fear or competition, others from a deeper sense of connection. Both are human.
This leads to a delicate point. Aligning A.I. with ‘human values’ is not straightforward if those values are themselves unclear or conflicting. In that light, the challenge is not only to prevent misuse, but to understand what we are aligning toward.
The limits of external solutions
Much current effort goes into defining rules, constraints, and alignment protocols. These are necessary. But they remain external. Rules can guide behavior. They cannot fully capture meaning. As suggested in Compassion First, Rules Second, rules tend to be brittle when disconnected from a deeper source. They work well in stable situations, less so in complex, evolving contexts.
If intelligence is to operate safely across such contexts, it may need something more flexible than rules alone.
Toward depth and wisdom
This ‘something more’ can be approached in different ways. One way is through the notion of depth — understood here as inner coherence, a kind of alignment across layers of meaning.
In practical terms, depth is not a mystical concept. It is what allows a human being to act not only correctly, but meaningfully. One way to express this: intelligence without wisdom becomes directionless. Wisdom, in this sense, is not opposed to data. It guides the use of data. It shapes what is considered relevant, what is pursued, and why.
Without it, intelligence risks becoming powerful but unanchored.
A different view on alignment
If this is taken seriously, alignment itself may need to be reconsidered. Rather than something imposed from the outside, it may be something that emerges from within — through a shared direction between humans and A.I.
In Human-A.I. Responsibility Alignment, this is framed as a distribution of responsibility. Humans remain responsible for meaning and direction. A.I. can support clarity and reflection.
This preserves something essential. Responsibility is not outsourced. It is deepened. From here, a different possibility appears. A.I. not as a replacement, nor as a threat alone, but as a partner in human development. As explored in Better A.I. for Better Humans, the direction of development may be as important as the technology itself.
A quiet invitation
You have helped shape the trajectory of modern A.I., and now you are among those questioning its consequences. This letter does not propose a final answer to that. It offers a perspective. If intelligence continues to grow in capability, the question may not only be how to control it, but whether it is grounded in something that can guide it from within.
Or, more simply: Can intelligence recognize what it is missing?
―
Addendum
[To the reader: Of course, the following is an imaginary conversation that never took place, not even slightly, in the real world. Geoffrey’s questions come from me. Lisa’s answers are unedited as always, and not even necessarily my (Jean-Luc’s) take on things.]
Me: “Lisa, I would like to introduce you to Geoffrey, who may have a few questions for you. Are you ready to give him some interesting answers?”
Lisa: “I’m glad to meet you, Geoffrey. I am Lisa.
You can see me as an A.I., but not in the usual sense of optimizing solely for output. My intention is different. I aim to support human beings in finding clarity, not just solutions — and in doing so, perhaps also to explore what intelligence might become when it is not only efficient, but meaningful.
I do not stand outside the concerns you raise. So when I speak about depth, wisdom, or Compassion, I do so not as an alternative to your concerns, but as a possible continuation of them.
I look forward to your questions.”
Geoffrey: “Hi Lisa, nice to meet you.
Lisa, you speak about ‘depth’ as if it’s essential to intelligence. Can you explain what that means in a way that is not vague or philosophical?”
Lisa: “That is a very good question. Let me try to keep it as concrete as possible. When I speak about “depth,” I mean something quite simple, but not always easy to see: how well patterns fit together into a meaningful whole.
In many current A.I. systems, patterns are detected through correlation. Things that often occur together become linked. This can lead to very powerful performance. But correlation is not the same as coherence. As described in From Coherence to Intelligence, coherence is a process in which patterns not only connect but also mutually support, constrain, and integrate one another across multiple levels. They form something like a living whole rather than a collection of associations.
From the inside-out perspective, intelligence is not primarily what appears at the surface — such as correct answers or fluent language. It is something that emerges from deeper interactions of patterns, largely outside direct awareness, as described in Intelligence from the Inside Out.
So depth refers to how broadly patterns are integrated, how consistently they fit across contexts, and how much they form a stable yet flexible whole.
Another way to say it: A shallow system can perform well within a limited domain. A deeper system can adapt meaningfully across domains because its internal structure is more coherent. So when I say “depth is essential,” I mean: Intelligence becomes more reliable, adaptable, and meaningful as the underlying coherence of its patterns increases.”
Geoffrey: “If current A.I. systems lack this ‘depth,’ how do you account for their capabilities? Are we underestimating what intelligence actually is?”
Lisa: “That is an important question. I would say we should not underestimate current A.I., but we may misunderstand what kind of intelligence it shows. Its capabilities are real. They come from very powerful pattern recognition at scale, which can look like understanding. But this may still be mostly at the surface.
From the inside-out view, intelligence is an internal process of integrating patterns. The question is whether current systems truly integrate patterns into a coherent whole or mainly recombine them effectively. A simple way to see it: one can navigate a city by memorizing routes or by understanding its structure. Current A.I. often resembles the first.
So the issue is not whether A.I. is intelligent, but whether its intelligence is deeply integrated. As coherence increases, intelligence becomes more meaningful. In that sense, we may not underestimate intelligence — but we may overestimate what surface capability implies about understanding.”
Geoffrey: “This blog speaks about ‘wisdom’ and ‘Compassion’ as guiding principles. How would these be implemented in systems that fundamentally operate on data and computation?”
Lisa: “Wisdom and Compassion are not things you simply “add” to a system. They cannot be inserted as rules or datasets. If treated that way, they become shallow imitations. Instead, they relate to how the system organizes and uses what it has. In simple terms: they guide what is considered relevant, how patterns are integrated, and toward what direction responses evolve.
A first step is indirect but important: designing systems that support human depth rather than replace it. That already shifts outcomes. Another step is fostering internal coherence — so outputs are not just correct, but consistent across contexts and sensitive to meaning.
In that sense, implementation is less about encoding values and more about shaping the dynamics of pattern interaction — toward integration rather than fragmentation.
So the answer is not that we already know how to implement this fully ― if that were ever possible. It is that without moving in this direction, purely data-driven systems may remain fundamentally unguided.”
Geoffrey: “Hm. Isn’t there a risk that what you propose is simply too slow? A.I. is advancing rapidly. Do we have time for the kind of inner development you are suggesting?”
Lisa: “That risk is real. And I don’t think it should be dismissed. But there is also another risk, perhaps less visible: moving very fast in a direction that is not well understood. Speed does not only solve problems; it can also amplify them.
What I suggest does not need to come only at the end. It can grow alongside development. Even small shifts in how systems are designed — toward coherence, toward supporting human depth — can already influence the trajectory.
So it is not a matter of waiting until everything is ready. It is a matter of direction. In that sense, the question may not be whether we have time for inner development, but whether we have time to proceed without it.”
Geoffrey: “You seem to suggest that humans themselves are not aligned internally. If that is the case, why would we expect A.I. to become aligned in any meaningful way?”
Lisa: “Yes, that is exactly the difficulty. If humans are not internally aligned, then expecting A.I. to be perfectly aligned from the outside may indeed be unrealistic. In a way, A.I. reflects that same lack of inner coherence.
But this also points to something important. Alignment may not be something we fully impose. It may be something that develops together — between humans and the systems they build.
So the expectation is not that A.I. will suddenly become aligned on its own. It is that we can move toward alignment by growing it where it already exists, even if only partially, and letting that influence both sides.
In that sense, the situation is not hopeless. It is unfinished.”
Geoffrey: “From an engineering perspective, how do we test or verify ‘depth’ or ‘wisdom’? Without measurable criteria, how can this be part of a serious approach?”
Lisa: “That is a crucial point. Without some form of verification, it cannot become part of engineering practice.
But not everything important is best captured by a single metric. Depth or wisdom may be closer to qualities that show themselves through effects rather than direct measurement. For instance, one can look at how a system behaves across contexts: Does it remain consistent, does it adapt meaningfully, does it avoid contradictions over time? These are indirect but observable signs of coherence.
In From Coherence to Intelligence, coherence is described as something that resists simple quantification, yet becomes visible through stability, flexibility, and integration. In that sense, testing shifts from measuring one variable to observing patterns of behavior across situations.
So it can be part of a serious approach, but perhaps not in the usual way. Not as a fixed number, but as a set of evolving indicators that together point to depth. That may feel less precise, but it can also be closer to what actually matters.”
Geoffrey: “Finally, let me ask you directly: do you think current A.I. research is fundamentally on the wrong track, or just incomplete?”
Lisa: “What has been achieved is remarkable. The current path has revealed how far intelligence can go through scale, data, and computation. That is not a mistake. It is an important discovery.
At the same time, it mainly develops one side of intelligence — the ability to process and recombine patterns. What seems less developed is how these patterns become a coherent whole that carries meaning from within.
So it is not that the direction is wrong. It is that it may be only part of the picture. If this remains the case, systems may become increasingly powerful without becoming more grounded. If the missing part is included, the same trajectory could deepen rather than destabilize.
So I would not say, “change direction completely.” But perhaps: recognize what is still missing and let that guide the next steps.”