The Illusion of Explainability

August 18, 2025 · Cognitive Insights

We humans love to explain. We search for reasons, lay out arguments, and feel reassured when we believe we have captured the ‘why’ behind what we do. This sense of clarity makes us feel safe, in control, and morally grounded. But perhaps that clarity is not always what it seems.

This blog explores the fragile beauty of explainability: how it comforts us, how it misleads us, and why both humans and A.I. must live with illusion. In fact, the very act of explanation may be built upon a deep illusion, one that touches our psychology, our ethics, and even the future of artificial intelligence.

The human love for explanations

Everyday life is filled with explanations. We say we chose a certain meal because we like the taste, or took a job because it was practical, or acted kindly because it was the right thing to do. Each of these stories feels like a direct reflection of truth.

Yet experiments frequently show otherwise. For instance, classic research with split-brain patients revealed that one hemisphere of the brain would initiate an action while the other hemisphere, unaware of the real cause, immediately invented a reason. The subjects did not know they were making it up, yet they believed it wholeheartedly.

The unsettling part is that these are not isolated cases. What happens under special laboratory conditions may be happening all the time, only more subtly. Our reasons for action might always be partial, shaped after the fact, assembled like little dots on a canvas that never fully cover the whole picture: no matter how many you add, the space remains larger than what is covered.

How far can we know ourselves?

If explanations are often constructed, the question becomes: do we ever fully know why we act? Perhaps not. At best, we circle closer, but the full depth of our motivation may remain out of reach.

This partial knowing unsettles many. If we admit we don’t really know why we do things, does that mean morality collapses? Can we still hold ourselves or others accountable? Here lies the ethical tension. People fear that without clear reasons, society will fall apart. So we cling to explanations, not only as knowledge but as a defense.

Within the five Aurelian values, however, morality does not rest on rigid reasons but on openness, depth, respect, freedom, and trustworthiness. These qualities do not need airtight explanations. They invite us toward growth from the inside out.

Explainability as camouflage

Explanations are not always about revealing the truth. Sometimes, they hide it. A polished explanation can work like a mask. The more convincing it sounds, the better it may cover the complex reality beneath.

This camouflage can be intentional or unintentional. At times, a person may consciously craft an explanation to persuade others. More often, the aim may be non-conscious, with the storyteller genuinely believing the mask. That is why rationalizations carry such conviction. The illusion of explainability is not just ignorance; it is a protective layer — shielding both others and ourselves from the messiness of real depth.

The paradox of meaningfulness

Explanations also give us a sense of meaning. They simplify, reduce complexity, and tie events together. But life’s richest meanings do not come from what is flattened into clarity. They emerge from what resists full explanation.

Love, art, moral courage — these lose their vitality when pressed into formulas. The mystery is what keeps them alive. The paradox is that we seek meaning through explanation, yet meaning itself thrives on what remains beyond explanation. To conflate meaning with clarity is to miss the point.

This intertwining of rationality and mystery is echoed in AURELIS’s unique stance: 100% rationality and 100% depth. Rationality without depth becomes sterile; depth without rationality becomes chaos. The two together keep both meaning and mystery intact.

The parallel with artificial intelligence

This human story now echoes in the world of A.I. The early era of ‘Good Old-Fashioned A.I.’ (GOFAI) relied on rules and logic trees. Everything was explainable, step by step. But it was also shallow and limited.

Modern A.I. is different. Pattern-based, self-adjusting, and layered, it resembles the human brain in its complexity. As performance increases, explainability decreases. Demanding full transparency from such systems is like asking for a symphony to be rewritten as a simple nursery rhyme: possible in fragments, but devastating to the whole.

If we push for complete clarity, we risk creating A.I. that mimics GOFAI robots: shallow, predictable, lifeless. Worse, this could encourage humans to become the same — conceptualized, controllable, predictable machines. In trying to make A.I. safe, we might turn ourselves into explainable zombies.

The heart versus the eye

Antoine de Saint-Exupéry wrote: “It is only with the heart that one can see rightly; what is essential is invisible to the eye.” That invisibility means inexplicability. The heart cannot fully explain itself, nor should it.

In discussions of A.I. and the heart, the same truth emerges. To demand total clarity is to amputate depth. Trust, therefore, cannot rest on full explanations. It must grow out of consistency, resonance, and values.

This holds for humans as well. We trust not those who can dissect their every motive but those who show themselves over time, guided by principles we can recognize. For A.I., too, trust will not mean perfect transparency. It will mean orientation toward Compassion, reliability of action, and openness at the level where accountability truly matters.

Trust without full explainability

Explainability at a surface level remains important. It allows for dialogue, accountability, and correction when mistakes arise. But it will never be total. To insist otherwise is to misunderstand both human depth and technological complexity.

Trust can live without complete clarity. It lives through dialogue, through values, through the resonance of heart with heart. This is how humans have trusted one another across history. It is also how we may learn to trust the systems we create — not by forcing them into explainable boxes but by ensuring they are oriented toward what truly matters.

Embracing the illusion

The illusion of explainability is not a flaw to be eradicated but a condition to be understood. Explanations are useful, but they are never complete. They protect us, persuade us, and give us a sense of control. Yet they also cover, reduce, and sometimes deceive.

To grow is not to escape the illusion but to see it clearly. By embracing both clarity and mystery, both reason and heart, we remain whole. Humans are not explainable machines, and neither should our A.I. become so. Life beats strongest in what cannot be fully explained.

We go deeper into this last thought in About Meaning and Mystery.
