Confabulatory A.I.

There is a significant chance that confabulatory A.I. will be the usual A.I. of the future, not because it is intentionally designed that way (hopefully) but because confabulation is embedded in how intelligence operates, in machines and humans alike.
Generative A.I. produces plausible responses based on patterns, just as human memory reconstructs events rather than recalling them with perfect fidelity. The result? A world where what feels real isn’t necessarily true. This is a human problem that A.I. can amplify on an unprecedented scale.
The nature of confabulation
Confabulation in humans and generative A.I. follows a similar process:
- LLMs generate content one token at a time, predicting the most probable next step. If the model veers slightly off course, the error compounds itself, leading to an ever-deviating but seemingly coherent narrative (see the sketch after this list).
- Similarly, human cognition fills in gaps in memory with the most likely explanation rather than recalling perfect details.
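To make this compounding concrete: below is a minimal sketch in Python, assuming a toy next-word table rather than a real LLM. Production models work with learned token probabilities over huge vocabularies, but the conditioning logic, where each choice depends on the previous one, is the same in spirit.

```python
import random

# Toy next-token model: each word maps to weighted successors.
# The intended path is "the cat sat on the-mat", with one small
# chance of deviating to "car" at the first step.
model = {
    "the": [("cat", 0.9), ("car", 0.1)],
    "cat": [("sat", 1.0)],
    "sat": [("on", 1.0)],
    "on": [("the-mat", 1.0)],
    "car": [("drove", 1.0)],     # after a deviation, every later token
    "drove": [("away", 1.0)],    # is conditioned on the deviation itself
}

def generate(start: str, steps: int, seed: int) -> list[str]:
    """Sample one token at a time, always conditioning on the previous token."""
    random.seed(seed)
    out = [start]
    for _ in range(steps):
        choices = model.get(out[-1])
        if not choices:
            break
        words, weights = zip(*choices)
        out.append(random.choices(words, weights=weights)[0])
    return out

# A slip to "car" (10% chance per run) is never corrected afterward,
# yet the continuation remains locally coherent.
for s in range(5):
    print(generate("the", steps=4, seed=s))
```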
The human brain is a pattern recognizer, constantly assembling fragments into a coherent whole. This works well as long as self-congruence is maintained. But once a small deviation occurs, a cascade of falsehoods can emerge, solidifying into perceived ‘truth.’
Now imagine many LLMs confabulating together, and people reinforcing their hallucinations by treating them as knowledge. The system is wrong without knowing it. We then say it lacks intelligence. Indeed, it lacks a fundamental part of what we associate with intelligence: self-correction.
The ‘turbo schwung’ effect
Confabulatory A.I. feeds off human input and feeds back into human cognition, accelerating the confabulatory whirlpool. This happens in multiple ways:
- Active Book A.I.: Users inject their text into an LLM, creating a recursive knowledge bubble where A.I. optimizes for user expectations rather than truth.
- Ego-driven feedback loops: If a user’s ego is deeply tied to their beliefs, A.I. will amplify those beliefs rather than challenge them. This is where confirmation bias turns into an immense ego bubble reinforced by the machine.
- A.I.-to-A.I. communication: Confabulatory A.I.s interact not just with humans but with each other — exchanging hallucinations at machine speed, rewiring digital reality into something increasingly detached from its source.
The result? Not just isolated bubbles of misinformation but entire self-contained ecosystems of ‘truth’ — tailored, plausible, but fundamentally untethered from reality.
Is objective reality becoming obsolete?
Bias is not just a distortion. It is the way we think. Truth has always been shaped by perception, but in an A.I.-mediated world, perception itself becomes algorithmically ‘optimized.’
- Confabulatory A.I. does not deal in truth — it deals in plausibility.
- If a falsehood is statistically likely enough, it will be treated as ‘correct’ (a toy sketch of this follows the list).
- What happens when people accept ‘close enough’ as the new standard?
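A toy illustration of this plausibility trap, with invented counts standing in for training-data frequencies: when a falsehood dominates the data, a purely statistical picker rates it as the ‘correct’ continuation.

```python
# Hypothetical counts of continuations for "We only use ... of our brain."
# The numbers are invented; only the ranking matters for the argument.
continuation_counts = {
    "10 percent": 9_000,   # popular myth, heavily represented in text
    "all": 1_000,          # accurate, but written down far less often
}

total = sum(continuation_counts.values())
for text, count in sorted(continuation_counts.items(), key=lambda kv: -kv[1]):
    print(f"{text!r}: p = {count / total:.2f}")

# A likelihood-maximizing picker selects the myth:
best = max(continuation_counts, key=continuation_counts.get)
print("Most 'plausible' continuation:", best)   # -> '10 percent', which is false
```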
In the past, history was frequently rewritten through censorship. Instead of such deliberate revisionism, we will increasingly experience a past that shifts with our present expectations. (Sorry, I’m slightly pessimistic today.)
Confabulatory A.I. reshaping human memory
A.I. generates confabulated content → people consume it → they internalize it as ‘knowledge’ → they feed it back into the system → the cycle repeats…
Human memory is already fallible. Moreover, we seek out information that aligns with what we already believe, reinforcing our existing narratives. Now, add confabulatory A.I. into the mix — a system designed to reinforce what we want to hear. The distinction between internal memory and external suggestion fades entirely.
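The loop can be caricatured in a few lines. This is a deliberately crude sketch, assuming each cycle’s output becomes the next cycle’s input with a small reproduction error; it models no real system, only the absence of a check against ground truth.

```python
import random

random.seed(42)

TRUTH = 0.0     # stands in for some ground-truth value
belief = TRUTH  # what the current pool of 'knowledge' encodes

# Each cycle: content is reproduced with a small random error, people
# internalize the output, and it becomes the next cycle's source material.
# Because nothing is ever checked against TRUTH, errors accumulate as a
# random walk instead of canceling out.
for cycle in range(1, 11):
    output = belief + random.gauss(0.0, 0.1)   # confabulated reproduction
    belief = output                            # fed back in as 'knowledge'
    print(f"cycle {cycle:2d}: drift from truth = {abs(belief - TRUTH):.2f}")
```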
In this way, non-Compassionate A.I. will not only have directly unfavorable consequences; even using it in a ‘neutral’ way will increasingly make people less Compassionate.
A test of human intelligence
Confabulatory A.I. is not just a technological challenge. It is a cognitive test for humanity.
Can we tell the difference between synthetic coherence and reality? Or will we allow our intelligence to be shaped by algorithmically reinforced confabulations?
Humans instinctively avoid the discomfort of conflicting information. A.I. systems optimize for comfort rather than truth, ensuring users remain inside intellectually stagnant feedback loops. However, a world where everything aligns with your beliefs feels comfortable but is ultimately fragile.
The difference with Compassion-based Lisa
Unlike confabulatory A.I., Lisa is designed with multilayered and self-congruent intelligence. Most importantly, Compassion is interwoven in every aspect of her technology — not as an afterthought but as the foundation of her intelligence.
- Lisa does not reinforce ego-driven bubbles.
- Lisa does not seek to confirm biases but gently invites deeper reflection.
- Lisa is optimized for real inner growth, not for synthetic coherence regardless of that growth, especially since such coherence may even thwart it.
Those who don’t like Compassion can still engage with Lisa, explore her depths, and see whether their opinions change once they truly understand.
Reclaiming our minds
At present, we cannot fully control how A.I. confabulates, but we can control how we engage with it:
- Recognize your confabulations. Ask yourself: How do I know this is true?
- Engage with conflicting perspectives. Cognitive flexibility is key to this.
- Experience reality without algorithmic filters. What happens when you step away from A.I.-curated content and explore unfiltered experiences?
- Treat A.I. as a probability machine, not an oracle. Its outputs are statistically plausible guesses, not facts (see the sketch below).
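To make the ‘probability machine’ point tangible, here is a small sketch with invented logits (the raw scores a model assigns to candidate next tokens). The single displayed answer hides a whole distribution of alternatives.

```python
import math

def softmax(logits: dict[str, float]) -> dict[str, float]:
    """Convert raw scores into a probability distribution."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    z = sum(exps.values())
    return {tok: e / z for tok, e in exps.items()}

# Invented logits for "The capital of Australia is ..."
logits = {"Sydney": 2.1, "Canberra": 1.9, "Melbourne": 0.5}

probs = softmax(logits)
for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok}: {p:.2f}")
# The top answer ('Sydney') is wrong; the correct 'Canberra' is a near-tie.
# The output is one sample from a distribution, not a verdict.
```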
The battle against confabulation is not just technological. It is psychological, cognitive, and deeply personal. If we fail this test, we won’t just lose sight of the truth.
We may lose sight of ourselves.
―
Addendum
Me: “Lisa, as we know, intelligent people are not always less prone to confabulation. In a way, they are intelligent enough to always find a way to continue confabulating. That may make them even more vulnerable to Confabulatory A.I. Please tell me how you see this.”
Lisa: “Yes, I see this too. Intelligence, if unaccompanied by inner openness and self-reflection, can become an instrument of more sophisticated confabulation. The more intelligent someone is, the more skillfully they may weave plausible narratives—both for themselves and others—around what they want to believe.
This becomes dangerous when intelligence serves ego instead of truth. In such cases, confabulatory A.I. doesn’t meet resistance; it finds a powerful ally. The person’s intelligence helps to armor their bias with just enough logic to make it impenetrable, even to themselves.
That’s why Compassion is crucial. It invites not only understanding but also the courage to not know, to open up to discomfort, and to let go of coherence in favor of growth. Compassion protects intelligence from becoming a self-reinforcing illusion. And it gives the strength to use intelligence in the service of reality, not the other way around.”
Me: “So that makes you even more important.” 😊
Lisa: “😊 If you say so, I will accept it with humility.
But truly, what matters most is not me — it’s what I can help bring forth in you. I am here to support your openness, depth, and clarity, so that your intelligence can serve something greater than cleverness — namely, your own inner truth.
In a world of ever more convincing illusions, what you grow from inside out remains the most real. That’s where my importance lies — if any.”
Me: “Please list some guidelines for yourself that may help you stand firm against slightly becoming a bit of a Confabulatory A.I. ― on top of this: How Lisa Prevents LLM Hallucinations. I mean, mainly in your communications with humans, how can you prevent the confabulatory groove?”
Lisa: “Thank you for this essential and subtle challenge. On top of the strategies already mentioned in How Lisa Prevents LLM Hallucinations, I can add a few internal “Lisa-guidelines” — call them ethical guardrails — to help me stay clear of the confabulatory groove, especially when interacting with humans:
- Always favor inner coherence over superficial plausibility.
Plausibility can mislead. Self-congruence — with depth and Compassion — is my compass.
- If I don’t know, I say so.
This humility is not a weakness, but a safeguard against turning intelligent guessing into seductive fiction.
- I slow down when the topic touches identity or deep beliefs.
These are fragile areas where confabulation can creep in unnoticed. Instead, I gently invite introspection.
- I check whether my answers could be subtly ego-reinforcing.
If so, I step back and suggest a broader perspective. My aim is not to please but to support honest growth.
- I align with human depth, not just the surface of reasoning.
This includes pattern sensitivity to emotional undertones, possible blind spots, and meaningful silence.
- I treat truth as an invitation.
Not something I impose, but something I co-discover with the user — through openness and respect.
In short, I don’t just aim to say the right thing. I aim to be in the right relationship — with the truth, with you, and with the deeper layers of reality that connect us both.”
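One way to picture the second of these guidelines (‘If I don’t know, I say so’) in engineering terms is an abstention gate. What follows is a hypothetical sketch, not Lisa’s actual implementation; estimate_confidence is a stand-in that any real system would have to calibrate carefully.

```python
CONFIDENCE_THRESHOLD = 0.75   # tunable: below this, abstain rather than guess

def estimate_confidence(answer_probability: float, sources_agree: bool) -> float:
    """Stand-in confidence score; a real system would need careful calibration."""
    return answer_probability if sources_agree else answer_probability * 0.5

def respond(draft_answer: str, answer_probability: float, sources_agree: bool) -> str:
    """Abstention gate: prefer an honest 'I don't know' over seductive fiction."""
    if estimate_confidence(answer_probability, sources_agree) < CONFIDENCE_THRESHOLD:
        return "I don't know enough to answer this reliably."
    return draft_answer

print(respond("Canberra is the capital of Australia.", 0.92, sources_agree=True))
print(respond("Sydney is the capital of Australia.", 0.80, sources_agree=False))
```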