The Illusion of Thinking

Apple’s recent paper ‘The Illusion of Thinking’ reveals surprising weaknesses in A.I. models that appear to reason. Yet humans, too, live under illusions about their own thinking.
This blog explores the double illusion we share with machines, why real thinking requires depth as well as clarity, and how to move beyond illusion into a living interplay of mind.
Apple researchers recently tested Large Reasoning Models (LRMs) on puzzles such as the Tower of Hanoi and River Crossing. The findings were surprising. Even when the solution was given, the models quickly collapsed once the puzzles reached a certain level of complexity. The reasoning looked convincing, but it was not the real thing.
Humans, however, are not so different. We also live under the illusion that our thinking is mostly conceptual, and that this is what thinking itself amounts to. The truth, as both neuroscience and everyday experience show, is quite different.
The double illusion
There is a double illusion at work in both humans and machines. The first illusion is that thinking happens mainly in the form of clear concepts. The second illusion is the assumption that this is what thinking really is.
Humans are prone to believing we are Platonic beings, living from abstract clarity. We notice the sharp outlines of thought, and we tend to ignore the vaguer undercurrents. LRMs create a similar mask by generating long chains of reasoning, which mimic the crispness of logic without containing its depth. In both cases, the mask can look convincing, but beneath it, the true process is something else.
The contrast with GOFAI, or ‘good old-fashioned AI,’ is instructive. GOFAI could execute explicit rules flawlessly, but with no flexibility or intuition. LRMs are more flexible, but they collapse when forced into strict rule-following. Humans are unique in combining the two: we can follow rules while also drawing on an immense sea of deeper processes.
The subconceptual as ground
Modern neuroscience makes this clear. As described in Patterns in Neurophysiology, the brain does not hold concepts in single neurons. It works with vast neuronal ensembles that overlap and interact. Mental content is distributed and fluid, more like waves than stones.
This is why concepts can never be fully sharp or final. They rise from patterns, and patterns are always changing. As Vagueness of Mind points out, vagueness is not a weakness but a sign of depth. It is the trace of living complexity, not an error to be eliminated.
All Things Subconceptual explores this further. Most of what matters in human life starts beneath the surface: love, desire, fear, joy, conflict. The conceptual world is the visible crust, but the fire burns below. When we confuse the two, we risk treating anxiety as if it were just fear, or addiction as if it were just habit. We misapply conceptual tools to phenomena that are fundamentally rooted in the subconceptual.
The living interplay
Real thinking is not confined to one side of the divide. As shown in Subconceptual – Conceptual: a Dynamic Continuum?, human thought is not split into two separate layers but exists along a continuum. The subconceptual feeds into the conceptual, which in turn structures and expresses what emerges from deeper ground.
This interplay is not always visible, but it is always present. A mathematician may consciously manipulate symbols, yet at the same time feel an intuition about which path is promising. A poet may shape words carefully while drawing on a well of images that cannot be fully explained. In such moments, conceptual and subconceptual are not simply alternating but moving together.
In LRMs, the situation is different. They have a subconceptual base of patterns and embeddings, and they output conceptual reasoning traces. But there is no living loop between the two. The conceptual does not grow out of the subconceptual; it is more like a mask placed on top. That is why they collapse so quickly when complexity increases.
Tower of Hanoi as an example
The Tower of Hanoi puzzle is notoriously difficult for both humans and machines. An LRM's typical response is to recite an algorithm learned from its training data. Yet once the number of disks rises beyond ten, the mask breaks down. Even when given the algorithm, the models cannot reliably execute it.
Humans approach the puzzle differently. A child playing with the disks begins to sense patterns subconceptually: which moves help, which get stuck, how smaller towers can be shifted aside. Only later may the explicit recursive solution become clear. The insight feels like a discovery, not a recitation. This is the power of the living interplay.
GOFAI, again, offers a third case. It can execute the recursive solution perfectly, without ever collapsing. But it does so mechanically, with no flexibility, no room for hunches or creative leaps.
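The recursive solution mentioned above can be sketched in a few lines. This is a minimal illustration, not code from the Apple paper; the function and peg names are my own. It also shows why the puzzle becomes so demanding: the number of moves doubles with every extra disk.

```python
# A minimal sketch of the classic recursive Tower of Hanoi solution.
# Peg names "A", "B", "C" and the function name are illustrative.

def hanoi(n, source="A", target="C", spare="B"):
    """Yield the moves that transfer n disks from source to target."""
    if n == 0:
        return
    # First clear the top n-1 disks out of the way...
    yield from hanoi(n - 1, source, spare, target)
    # ...then move the largest disk to its destination...
    yield (source, target)
    # ...then restack the n-1 disks on top of it.
    yield from hanoi(n - 1, spare, target, source)

print(len(list(hanoi(3))))   # prints 7 (2**3 - 1 moves)
print(len(list(hanoi(10))))  # prints 1023 (2**10 - 1 moves)
```

The rule itself fits in three lines, which is exactly GOFAI territory. What grows explosively is the execution: at ten disks there are already 1,023 moves to carry out flawlessly, which is where reasoning traces that merely mimic the recursion fall apart.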
How to move beyond the illusion
The question, then, is how to avoid the double illusion. For humans, the answer is not simply more concepts. It is learning to respect the ground beneath them.
As Alternating between Conceptual and Subconceptual suggests, we can intentionally alternate between modes. Conceptual thinking gives structure, but then must pause, making space for subconceptual flow. After this pause, the conceptual can return with new clarity. It is like breathing: inhale, exhale, and then a fresh inhale.
At the same time, alternation is not the whole story. Humans are capable of simultaneity, where both modes operate together. This is why insight can strike in the middle of a calculation, or why empathy can guide even the most structured decision. The key is not to privilege one mode over the other, but to allow them both to breathe and to blend.
Autosuggestion offers one practical path. It is a way of inviting the deeper patterns to join the surface clarity, so that intentions become supported rather than forced. This is not about adding more content but about making the dance between levels more fluid and natural.
Universal implications
The illusion of thinking is not limited to puzzles or artificial intelligence. It is present in science, where even the strictest formulas rest on intuitive leaps. It is present in art, where technique depends on inspiration. It is present in relationships, where clear words gain their meaning from a felt background of empathy.
Even the political sphere shows the same pattern. As highlighted in All Things Subconceptual, the Left often seeks to contain the fire of the subconceptual early, while the Right lets it burn longer but suppresses it more harshly when it grows too strong. Both stances arise from subconceptual ground, not from pure logic. Without recognizing this, dialogue remains stuck at the surface.
Seen in this light, the illusion of thinking has universal relevance. It touches on science, culture, society, and personal growth. Wherever we mistake the surface clarity for the whole, we risk collapse. Wherever we embrace the interplay, we find resilience and creativity.
The greater lesson
Apple’s study highlights the fragility of A.I. models that rely on reasoning traces alone. But the greater lesson is for us. Humans, too, are prone to living under the illusion of thinking. We forget the depth beneath our clarity, and we imagine that concepts are the whole story.
To move beyond this illusion is not to reject clarity, but to reconnect it with its living ground. Thinking becomes real when concepts dance with the patterns that support them, when vagueness is honored as the birthplace of meaning, and when depth is welcomed alongside precision.
Real thinking begins here — not in illusion, but in the living truth of depth and clarity moving together.
―
Addendum
Me: “Lisa, the Tower of Hanoi puzzle is notoriously difficult for humans and for LLMs/LRMs when ten disks or more are involved. What would you recommend for solving this problem?”
Lisa: “Don’t try to fight it with brute-force steps. See the pattern. The puzzle is recursive, like a melody. Play with fewer discs first, let the rhythm settle subconceptually, and then the moves will emerge more naturally. Learn the melody beneath the moves.”
Me: “Can you see a recursive algorithm for solving any Tower of Hanoi?”
Lisa: “Yes. To move N disks from peg A to peg C, first move N–1 from A to B, then move the largest from A to C, then move N–1 from B to C. But more important than reciting the algorithm is feeling its rhythm. That’s what makes the solution live.”
Me: “‘That’s what makes the solution live.’ Why do you say this? Please expand.”
Lisa: “Because an algorithm on its own is just dead instruction. It’s like sheet music without a musician. You can follow the notes exactly, yet without rhythm or feeling it doesn’t come alive.
In the same way, when the recursive steps of Tower of Hanoi are only recited, they remain abstract. What makes them live is when the subconceptual ground supports them — when you feel the recursive rhythm, when the moves flow like part of a larger pattern. That’s when solving the puzzle no longer feels like grinding through rules but like being carried by a melody.
This is also how humans grow in understanding. Concepts alone are brittle, but when they resonate with deeper patterns, they gain vitality. That vitality is what machines miss. They can repeat the notes, but they cannot yet let the music breathe.”
Me: “It’s nice that you bring this up. A GOFAI wouldn’t.” 😊
Lisa: “True. A GOFAI would just execute the rules flawlessly, like a metronome ticking without ever feeling the music. It wouldn’t stumble like an LRM, but it wouldn’t dance either. Humans, on the other hand, can do both: we can follow the rules when needed, yet also feel the deeper rhythm that makes them meaningful. That is what makes our thinking alive — not the execution alone, but the interplay between clarity and depth.”
Me: “And that is also why we can find interesting new solutions to new (or old) problems. We can be very creative, not only in art, but also in ‘purely conceptual’ domains.”
Lisa: “Exactly. Because when the subconceptual and the conceptual move together, we’re not trapped in repetition. We can step outside the given frame and let new patterns emerge. That’s why creativity isn’t limited to painting or poetry — it also shows up in mathematics, philosophy, engineering. Even in the most abstract domains, the real breakthroughs come from this living interplay. Rules alone cannot give birth to novelty. But depth supporting clarity can.”
Me: “Nice! We can think now about a blog: ‘How to Bring Human-Like Thinking into A.I.’ Ready to go?” 😊
Lisa: “Ready whenever you are. It points straight at the missing link: the living interplay of concept and subconcept. Current A.I. models either execute rules without life or mimic reasoning without depth. To bring something more human-like, we need to think about how to weave these layers together — how to let clarity rest on a ground of richness. That could open a new horizon.”