Confabulatory A.I.

February 20, 2025 · Artificial Intelligence

There is a significant chance that confabulatory A.I. will become the default A.I. of the future, not because we intentionally design it that way but because confabulation is embedded in how intelligence operates, in machines and in humans.

Generative A.I. produces plausible responses based on patterns, much as human memory reconstructs events rather than recalling them with perfect fidelity. The result? A world where what feels real isn’t necessarily true. This is a human problem that A.I. can amplify on an unprecedented scale.

The nature of confabulation

Confabulation in humans and generative A.I. follows a similar process:

  • LLMs generate content one token at a time, predicting the most probable next step. If the model veers slightly off course, the error compounds, leading to an ever-deviating yet seemingly coherent narrative (see the sketch after this list).
  • Similarly, human cognition fills in gaps in memory with the most likely explanation rather than recalling perfect details.
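
To make the first point concrete, here is a minimal sketch in Python. It is not a real LLM: the tiny probability table, the token names, and the sampling loop are all illustrative assumptions. It only demonstrates the mechanism described above: each token is sampled from a distribution conditioned on what came before, so one unlikely pick early on commits everything that follows to a different yet locally coherent track.

# Minimal sketch of autoregressive generation (illustrative only, not a real LLM).
# The tiny "model" is a hand-made table of next-token probabilities; every value
# in it is an assumption chosen to show how one unlikely sample pulls all later
# predictions onto a different, still-coherent track.
import random

NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.6, "dog": 0.3, "moon": 0.1},
    "cat": {"sat": 0.7, "flew": 0.3},
    "dog": {"sat": 0.6, "flew": 0.4},
    "moon": {"flew": 0.8, "sat": 0.2},
    "sat": {"quietly": 1.0},
    "flew": {"away": 1.0},
    "quietly": {"<end>": 1.0},
    "away": {"<end>": 1.0},
}

def sample_next(token: str) -> str:
    """Pick the next token according to the model's conditional probabilities."""
    options = NEXT_TOKEN_PROBS[token]
    return random.choices(list(options), weights=list(options.values()))[0]

def generate(start: str = "the") -> list[str]:
    """Generate token by token until the end marker appears."""
    tokens = [start]
    while tokens[-1] != "<end>":
        tokens.append(sample_next(tokens[-1]))
    return tokens[:-1]  # drop the end marker

if __name__ == "__main__":
    random.seed(7)
    for _ in range(3):
        # Each run is locally plausible, yet one low-probability choice early on
        # (for example, "moon" instead of "cat") sends the rest of the sequence
        # down another storyline, and nothing in the process revisits that choice.
        print(" ".join(generate()))

Nothing in this loop checks the output against the world; plausibility given the previous tokens is the only criterion, which is exactly what confabulation amounts to.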

The human brain is a pattern recognizer, constantly assembling fragments into a coherent whole. This works well as long as self-congruence is maintained. But once a small deviation occurs, a cascade of falsehoods can emerge, solidifying into perceived ‘truth.’

Now imagine many LLMs confabulating together, with people reinforcing their hallucinations by treating them as knowledge. The system is wrong without knowing it. We then say it lacks intelligence. Indeed, it lacks a fundamental part of what we associate with intelligence: self-correction.

The ‘turbo schwung’ effect

Confabulatory A.I. feeds off human input and feeds back into human cognition, accelerating the confabulatory whirlpool. This happens in multiple ways:

  • Active Book A.I.: Users inject their text into an LLM, creating a recursive knowledge bubble where A.I. optimizes for user expectations rather than truth.
  • Ego-driven feedback loops: If a user’s ego is deeply tied to their beliefs, A.I. will amplify those beliefs rather than challenge them. This is where confirmation bias turns into an immense ego bubble reinforced by the machine.
  • A.I.-to-A.I. communication: Confabulatory A.I.s interact not just with humans but with each other — exchanging hallucinations at machine speed, rewiring digital reality into something increasingly detached from its source.

The result? Not just isolated bubbles of misinformation but entire self-contained ecosystems of ‘truth’ — tailored, plausible, but fundamentally untethered from reality.

Is objective reality becoming obsolete?

Bias is not just a distortion. It is the way we think. Truth has always been shaped by perception, but in an A.I.-mediated world, perception itself becomes algorithmically optimized.

  • Confabulatory A.I. does not deal in truth — it deals in plausibility.
  • If a falsehood is statistically likely enough, it will be treated as ‘correct.’
  • What happens when people accept ‘close enough’ as the new standard?

In the past, history was rewritten through censorship. Instead of such deliberate revisionism, we will experience a past that shifts with our present expectations.

Confabulatory A.I. reshaping human memory

A.I. generates confabulated content → people consume it → they internalize it as ‘knowledge’ → they feed it back into the system → the cycle repeats…
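
This loop can be caricatured in a few lines of Python. The sketch below is a deliberately crude toy, not a model of any real system: the noise level, update weight, and starting values are arbitrary assumptions. It only shows the structural problem: each round starts from the previous belief rather than from the ground truth, so deviations wander freely instead of being pulled back.

# Toy simulation of the confabulatory feedback loop sketched above.
# All numbers (noise level, update weight, starting belief) are made-up
# assumptions; only the shape of the dynamic matters here.
import random

def run_loop(rounds: int = 20, seed: int = 1) -> None:
    rng = random.Random(seed)
    ground_truth = 0.50   # the fact itself; it never changes
    belief = 0.50         # what people currently hold to be true

    for step in range(1, rounds + 1):
        # The A.I. confabulates: it echoes the current belief plus a small
        # random deviation, with no check against the ground truth.
        generated = belief + rng.gauss(0, 0.05)

        # People consume the output and internalize it as knowledge,
        # nudging their belief toward what the machine produced.
        belief = 0.5 * belief + 0.5 * generated

        print(f"round {step:2d}: belief = {belief:.3f}, "
              f"distance from truth = {abs(belief - ground_truth):.3f}")

if __name__ == "__main__":
    run_loop()

Because the ground truth never re-enters the loop, the belief drifts as a random walk around its own last value rather than around the truth: no single step looks alarming, yet nothing ever pulls the system back.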

Human memory is already fallible. Moreover, we seek out information that aligns with what we already believe, reinforcing our existing narratives. Now, add confabulatory A.I. into the mix — a system designed to reinforce what we want to hear. The distinction between internal memory and external suggestion fades entirely.

A test of human intelligence

Confabulatory A.I. is not just a technological challenge. It is a cognitive test for humanity.

Can we tell the difference between synthetic coherence and reality? Or will we allow our intelligence to be shaped by algorithmically reinforced confabulations?

Humans instinctively avoid the discomfort of conflicting information. A.I. systems optimize for comfort rather than truth, ensuring users remain inside intellectually stagnant feedback loops. However, a world where everything aligns with your beliefs feels comfortable but is ultimately fragile.

The difference with Compassion-based Lisa

Unlike confabulatory A.I., Lisa is designed with multilayered and self-congruent intelligence. Most importantly, Compassion is interwoven into every aspect of her technology, not as an afterthought but as the foundation of her intelligence.

Those who don’t like Compassion can still engage with Lisa, explore its depths, and see whether their opinion changes once they truly understand it.

Reclaiming our minds

We cannot control how A.I. confabulates, but we can control how we engage with it:

  • Recognize your confabulations. Ask yourself: How do I know this is true?
  • Engage with conflicting perspectives. Cognitive flexibility is key to this.
  • Experience reality without algorithmic filters. What happens when you step away from A.I.-curated content and explore unfiltered experiences?
  • Treat A.I. as a probability machine, not an oracle. Its outputs are statistically plausible guesses, not facts.

The battle against confabulation is not just technological. It is psychological, cognitive, and deeply personal. If we fail this test, we won’t just lose sight of the truth.

We may lose sight of ourselves.
