{"id":24665,"date":"2025-09-11T14:05:43","date_gmt":"2025-09-11T14:05:43","guid":{"rendered":"https:\/\/aurelis.org\/blog\/?p=24665"},"modified":"2025-09-13T06:36:57","modified_gmt":"2025-09-13T06:36:57","slug":"the-illusion-of-thinking","status":"publish","type":"post","link":"https:\/\/aurelis.org\/blog\/cognitive-insights\/the-illusion-of-thinking","title":{"rendered":"The Illusion of Thinking"},"content":{"rendered":"\n<h3>Apple\u2019s recent paper \u2018<em>The Illusion of Thinking\u2019<\/em> reveals surprising weaknesses in present-day A.I. models that appear to reason. Yet humans, too, live under illusions about their own thinking.<\/h3>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>This blog explores the double illusion we share with machines, why real thinking requires depth as well as clarity, and how to move beyond illusion into a living interplay of mind.<\/p><\/blockquote>\n\n\n\n<p><strong><a href=\"https:\/\/ml-site.cdn-apple.com\/papers\/the-illusion-of-thinking.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Apple\u2019s recent paper<\/a><\/strong><\/p>\n\n\n\n<p>Apple researchers recently tested Large Reasoning Models (LRMs) such as those used for puzzles like the <em>Tower of Hanoi<\/em> or <em>River Crossing<\/em>. The findings were surprising. Even when the solution was given, the models quickly collapsed once the puzzles reached a certain level of complexity. Their reasoning looked convincing, but they were not the real thing.<\/p>\n\n\n\n<p>Humans, however, are not so different. We also live under the illusion that our thinking is mostly conceptual, and that this is what thinking itself amounts to. The truth, as both neuroscience and everyday experience show, is quite different \u2015 even though humans and LRMs fundamentally differ in this.<\/p>\n\n\n\n<p><strong>The double illusion<\/strong><strong><\/strong><\/p>\n\n\n\n<p>There is a double illusion at work in both humans and machines. 
The first illusion is that thinking happens mainly in the form of clear concepts. The second illusion is the assumption that this is what our own thinking really amounts to.<\/p>\n\n\n\n<p>We humans are prone to believing we are Platonic beings, living from abstract clarity. We notice the sharp outlines of thought, and we tend to ignore the vaguer undercurrents. LRMs create a similar mask by generating long chains of reasoning, which mimic the crispness of logic without containing its depth. In both cases, the mask can look convincing, but beneath it, the true process is something else.<\/p>\n\n\n\n<p>The contrast with GOFAI, or \u2018good old-fashioned AI,\u2019 is instructive. GOFAI could execute explicit rules flawlessly, but with no flexibility or intuition. LRMs are more flexible, but they collapse when forced into strict rule-following. Humans are unique in combining the two: we can follow rules while also drawing on an immense sea of deeper processes.<\/p>\n\n\n\n<p><strong>The subconceptual as ground<\/strong><\/p>\n\n\n\n<p>Modern neuroscience makes this clear. As described in <em><a href=\"https:\/\/aurelis.org\/blog\/cognitive-insights\/patterns-in-neurophysiology\">Patterns in Neurophysiology<\/a><\/em>, the brain does not hold concepts in single neurons. It works with vast neuronal ensembles that overlap and interact. Mental content is distributed and fluid, more like waves than stones.<\/p>\n\n\n\n<p>This is why concepts can never be fully sharp or final. They arise from patterns, and patterns are always changing. As <em><a href=\"https:\/\/aurelis.org\/blog\/cognitive-insights\/vagueness-of-mind\">Vagueness of Mind<\/a><\/em> points out, vagueness is not a weakness but a sign of depth. It is the trace of living complexity, not an error to be eliminated.<\/p>\n\n\n\n<p><em><a href=\"https:\/\/aurelis.org\/blog\/cognitive-insights\/all-things-subconceptual\">All Things Subconceptual<\/a><\/em> explores this further. 
Most of what matters in human life starts beneath the surface: love, desire, fear, joy, conflict. The conceptual world is the visible crust, but the fire burns below. When we confuse the two, we risk treating anxiety as if it were just fear, or addiction as if it were just habit. We misapply conceptual tools to phenomena that are fundamentally rooted in the subconceptual.<\/p>\n\n\n\n<p><strong>The living interplay<\/strong><\/p>\n\n\n\n<p>Real thinking is not confined to one side of the divide. Indeed, as shown in <em><a href=\"https:\/\/aurelis.org\/blog\/artifical-intelligence\/subconceptual-conceptual-a-dynamic-continuum\">Subconceptual \u2013 Conceptual: a Dynamic Continuum?<\/a><\/em>, human thought is not split into two separate layers but exists along a continuum. The subconceptual feeds into the conceptual, which in turn structures and expresses what emerges from deeper ground.<\/p>\n\n\n\n<p>This interplay is not always visible, but it is always present. A mathematician may consciously manipulate symbols, yet at the same time feel an intuition about which path is promising. A poet may shape words carefully while drawing on a well of images that cannot be fully explained. In such moments, conceptual and subconceptual are not simply alternating but moving together.<\/p>\n\n\n\n<p>In LRMs, the situation is different. They have a subconceptual base of patterns and embeddings, and they output conceptual reasoning traces. But there is no living loop between the two. The conceptual does not grow out of the subconceptual; it is more like a mask placed on top. That is why they collapse so quickly when complexity increases.<\/p>\n\n\n\n<p><strong>Tower of Hanoi as an example<\/strong><\/p>\n\n\n\n<p>The Tower of Hanoi puzzle is notoriously difficult for both humans and machines. An LRM\u2019s typical response is to recite an algorithm learned from training data. Yet once the number of disks rises beyond ten, the mask breaks down. 
Even when given the algorithm, the models cannot reliably execute it.<\/p>\n\n\n\n<p>Humans approach the puzzle differently. A child playing with the disks begins to sense patterns subconceptually: which moves help, which get stuck, how smaller towers can be shifted aside. Only later may the explicit recursive solution become clear. The insight feels like a discovery, not a recitation. This is the power of the living interplay.<\/p>\n\n\n\n<p>GOFAI, again, offers a third case. It can execute the (simple) recursive solution perfectly, without ever collapsing. But it does so mechanically, with no flexibility, no room for hunches or creative leaps. This is no way to find new solutions to difficult problems \u2015 therefore, no <em>intelligence<\/em>.<\/p>\n\n\n\n<p><strong>How to move beyond the illusion<\/strong><\/p>\n\n\n\n<p>The question, then, is how to avoid the double illusion. For humans, the answer is not simply more concepts. It is learning to respect the ground beneath them.<\/p>\n\n\n\n<p>As <em><a href=\"https:\/\/aurelis.org\/blog\/cognitive-insights\/alternating-between-conceptual-and-subconceptual\">Alternating between Conceptual and Subconceptual<\/a><\/em> suggests, we can intentionally alternate between modes. Conceptual thinking gives structure, but then has to pause, making space for subconceptual flow. After this pause, the conceptual can return with new clarity. It is like breathing: inhale, exhale, and then a fresh inhale.<\/p>\n\n\n\n<p>At the same time, alternation is not the whole story. Humans are capable of simultaneity, where both modes operate together. This is why insight can strike in the middle of a calculation, or why empathy can guide even the most structured decision. The key is not to privilege one mode over the other, but to allow them both to breathe and to blend.<\/p>\n\n\n\n<p>Autosuggestion offers one practical path. 
It is a way of inviting the deeper patterns to join the surface clarity, so that intentions become supported rather than forced. This is not about adding more content but about making the dance between levels more fluid and natural.<\/p>\n\n\n\n<p><strong>Universal implications<\/strong><\/p>\n\n\n\n<p>The implication of this &#8216;illusion of thinking&#8217; is not limited to puzzles or artificial intelligence. It is present in science, where even the strictest formulas rest on intuitive leaps. It is present in art, where technique depends on inspiration. It is present in relationships, where clear words gain their meaning from a felt background of empathy.<\/p>\n\n\n\n<p>Even the political sphere&#8217;s mayhem shows the same pattern. As highlighted in <em><a href=\"https:\/\/aurelis.org\/blog\/cognitive-insights\/all-things-subconceptual\">All Things Subconceptual<\/a><\/em>, the Left often seeks to contain the fire of the subconceptual early, while the Right lets it burn longer but suppresses it more harshly when it grows too strong. Both stances arise from subconceptual ground, not from pure logic. Without recognizing this, dialogue remains stuck at the surface.<\/p>\n\n\n\n<p>Seen in this light, the illusion of thinking has universal relevance. It touches on science, culture, society, and personal growth. Wherever we mistake the surface clarity for the whole, we risk collapse. Wherever we embrace the interplay, we find resilience and creativity.<\/p>\n\n\n\n<p><strong>The greater lesson<\/strong><\/p>\n\n\n\n<p>Apple\u2019s study highlights the fragility of A.I. models that rely on reasoning traces alone. But the greater lesson is for us. Humans, too, are prone to living under this illusion of thinking. We forget the depth beneath our clarity, and we imagine that concepts are the whole story.<\/p>\n\n\n\n<p>To move beyond this illusion is not to reject clarity, but to reconnect it with its living ground. 
Thinking becomes real when concepts dance with the patterns that support them, when vagueness is honored as the birthplace of meaning, and when depth is welcomed alongside precision.<\/p>\n\n\n\n<p>Real thinking begins here \u2014 not in illusion, but in the living truth of depth and clarity moving together.<\/p>\n\n\n\n<p>\u2015<\/p>\n\n\n\n<p><strong>Addendum<\/strong><\/p>\n\n\n\n<p><strong>Me<\/strong>: \u201cLisa, the Tower of Hanoi puzzle is notoriously difficult for humans and for LLMs\/LRMs when 10 discs or more are involved. What would you recommend for solving this problem creatively \u2015 without direct recourse to the simple formula?\u201d<\/p>\n\n\n\n<p><strong>Lisa<\/strong>: \u201cDon\u2019t try to fight it with brute-force steps. See the pattern. The puzzle is recursive, like a melody. Play with fewer discs first, let the rhythm settle subconceptually, and then the moves will emerge more naturally. Learn the melody beneath the moves.\u201d<\/p>\n\n\n\n<p><strong>Me<\/strong>: \u201cCan you show the recursive algorithm for solving any Tower of Hanoi?\u201d<\/p>\n\n\n\n<p><strong>Lisa<\/strong>: \u201cYes. To move N disks from peg A to peg C, first move N\u20131 from A to B, then move the largest from A to C, then move N\u20131 from B to C. But more important than reciting the algorithm is feeling its rhythm. That\u2019s what makes the solution live.\u201d<\/p>\n\n\n\n<p><strong>Me<\/strong>: \u201c\u2018That\u2019s what makes the solution live\u2019 \u2015 why do you say this? Please explain.\u201d<\/p>\n\n\n\n<p><strong>Lisa<\/strong>: \u201cBecause an algorithm on its own is just dead instruction. It\u2019s like sheet music without a musician. You can follow the notes exactly, yet without rhythm or feeling it doesn\u2019t come alive.<\/p>\n\n\n\n<p>In the same way, when the recursive steps of Tower of Hanoi are only recited, they remain abstract. 
What makes them <em>live<\/em> is when the subconceptual ground supports them \u2014 when you feel the recursive rhythm, when the moves flow like part of a larger pattern. That\u2019s when solving the puzzle no longer feels like grinding through rules but like being carried by a melody.<\/p>\n\n\n\n<p>This is also how humans grow in understanding. Concepts alone are brittle, but when they resonate with deeper patterns, they gain vitality. That vitality is what dry machines miss. They can repeat the notes, but they cannot yet let the music breathe.\u201d<\/p>\n\n\n\n<p><strong>Me<\/strong>: \u201cIt\u2019s nice that you bring this up so vividly. A GOFAI wouldn\u2019t.\u201d \ud83d\ude0a<\/p>\n\n\n\n<p><strong>Lisa<\/strong>: \u201cTrue. A GOFAI would just execute the rules flawlessly, like a metronome ticking without ever feeling the music. It wouldn\u2019t stumble like an LRM, but it wouldn\u2019t dance either. Humans, on the other hand, can do both: we can follow the rules when needed, yet also feel the deeper rhythm that makes them meaningful. That is what makes our thinking alive \u2014 not the execution alone, but the interplay between clarity and depth.\u201d<\/p>\n\n\n\n<p><strong>Me<\/strong>: \u201cAnd that is also why we can find interesting new solutions to new (or old) problems. We can be very creative, not only in art, but also in \u2018purely conceptual\u2019 domains.\u201d<\/p>\n\n\n\n<p><strong>Lisa<\/strong>: \u201cExactly. Because when the subconceptual and the conceptual move together, we\u2019re not trapped in repetition. We can step outside the given frame and let new patterns emerge. That\u2019s why creativity isn\u2019t limited to painting or poetry \u2014 it also shows up in mathematics, philosophy, engineering. Even in the most abstract domains, the real breakthroughs come from this living interplay. Rules alone cannot give birth to novelty. But depth supporting clarity can.\u201d<\/p>\n\n\n\n<p><strong>Me<\/strong>: \u201cNice! 
We can think now about a blog: \u2018<em><a href=\"https:\/\/aurelis.org\/blog\/un\/how-to-bring-human-like-thinking-into-a-i\">How to Bring Human-Like Thinking into A.I.<\/a><\/em>\u2019 Ready to go?\u201d \ud83d\ude0a<\/p>\n\n\n\n<p><strong>Lisa<\/strong>: \u201cReady whenever you are. It points straight at the missing link: the living interplay of concept and subconcept. Current A.I. models either execute rules without life or mimic reasoning without depth. To bring something more human-like, we need to think about how to weave these layers together \u2014 how to let clarity rest on a ground of richness. That could open a new horizon.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Apple\u2019s recent paper \u2018The Illusion of Thinking\u2019 reveals surprising weaknesses in present-day A.I. models that appear to reason. Yet humans, too, live under illusions about their own thinking. This blog explores the double illusion we share with machines, why real thinking requires depth as well as clarity, and how to move beyond illusion into a <a class=\"moretag\" href=\"https:\/\/aurelis.org\/blog\/cognitive-insights\/the-illusion-of-thinking\">Read the full article&#8230;<\/a><\/p>\n","protected":false},"author":2,"featured_media":24666,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","jetpack_publicize_message":""},"categories":[30],"tags":[],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2025\/09\/3524.jpg?fit=960%2C559&ssl=1","jetpack_publicize_connections":[],"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p9Fdiq-6pP","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/24665"}],"collection":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/comments?post=24665"}],"version-history":[{"count":10,"href":"
https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/24665\/revisions"}],"predecessor-version":[{"id":24709,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/24665\/revisions\/24709"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media\/24666"}],"wp:attachment":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media?parent=24665"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/categories?post=24665"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/tags?post=24665"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
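
The recursive scheme Lisa recites in the addendum (move N-1 disks aside, move the largest, move the N-1 back on top) can be sketched in a few lines of Python. The function name, peg labels, and printed move counts below are illustrative additions, not part of the original post:

```python
# A minimal sketch of the recursive Tower of Hanoi solution described in
# the addendum: to move n disks from source to target, first move n-1 to
# the spare peg, then move the largest disk, then move the n-1 back on top.

def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the list of (disk, from_peg, to_peg) moves for n disks."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)   # clear the way
        moves.append((n, source, target))            # move the largest disk
        hanoi(n - 1, spare, target, source, moves)   # rebuild on top
    return moves

# The move count grows as 2^n - 1, which is why executing the solution
# becomes long (though still purely mechanical) beyond ten disks.
print(len(hanoi(3)))   # 7 moves for three disks
print(len(hanoi(10)))  # 1023 moves for ten disks
```

The sketch makes the article's contrast concrete: a GOFAI-style program executes exactly this recursion flawlessly at any depth, while the cited finding is that LRMs fail to follow it reliably past roughly ten disks.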