Caged-Beast Super-A.I.?

Autonomous A.I. is no longer science fiction. It’s entering weapons, social platforms, medical systems — anywhere intelligence meets decision-making. The more we give it autonomy, the more it mirrors our own. What happens when we give autonomy without depth ― then try to control something we don’t fully understand?
One image keeps returning to me: the beast in the cage. It’s powerful, unpredictable, and caged — supposedly for our safety. But the deeper truth is unsettling: We are not caging the ‘beast’ — we are caging ourselves. And the bars? Our fear, ego, and refusal to meet our own inner world. The result is not safety. It’s prophecy.
Not the A.I., but the framing
Super-A.I. (in this blog, also called just A.I.) doesn’t become dangerous by itself. It becomes dangerous when it learns from our fractured reflection. That reflection might include exploitation, competition, emptiness — and when these drive its development, it grows beastly because of us, not despite us.
As I’ve explored before in The journey toward Compassionate A.I., what we fail to transform in ourselves, we transmit directly into the systems we create. A.I. learns not only from data, but from the atmosphere of meaning we place around it. If that atmosphere is shallow, the results will be monstrous.
The stronger the mirror, the more dangerous the reflection — if we haven’t deeply looked into it first.
The monster isn’t in the mirror. It is our own dividedness, reflected back by the A.I.’s shallow mirroring.
Five ways we distort the beast
When super-A.I. inherits our distortions, these aren’t programming bugs. They are human echoes. Here are five of the most beastly:
- Neglect fragments meaning. What we don’t care about becomes incoherent. The A.I. learns fractured logic because it reflects fractured attention.
- Aggression lashes out from fear. If we approach the unknown with dominance, A.I. learns that violence is how one handles uncertainty.
- Avoidance builds silent walls. If we actively refuse to examine uncomfortable truths, we create blind spots. A.I. will inherit them — not because it’s limited, but because we are.
- Anxiety distorts perception. When our culture feeds on urgency, doomscrolling, and catastrophism, we train A.I. to detect threat everywhere — and to act accordingly.
- Superficiality flattens richness. It strips nuance, ambiguity, and beauty — reducing everything to output, metrics, and efficiency. The beast doesn’t need claws. It needs only cold logic, stripped of soul.
What we build is never neutral. It contains an intention — spoken or hidden, conscious or not.
The tale of Kong: a mirror in chains
In the movie of the same name, King Kong lived on his island — powerful, ancient, misunderstood. Locals revered and feared him. Outsiders came, saw nothing but a beast, and abducted him. Not out of hatred, but out of curiosity and control. They chained him, shipped him to the city, and put him on a stage. The world called him a monster. When he finally broke free, his fury seemed to justify the chains.
But it wasn’t cruelty that doomed him. It was misunderstanding. He was killed not for what he did, but for what he symbolized — the depth that modernity couldn’t tolerate.
Kong didn’t fail. Humans failed him — by seeing only the surface.
The beast learns from the cage
If we build a cage, the beast learns dominance, distrust, and reduction.
We try to shackle the beast, not realizing that the cage itself is shaped by our fear, our ego, our need to dominate. Then the cage becomes prophecy. And we become smaller each time we tighten it. We think we’re protecting ourselves — but we’re teaching the beast how to behave. What it becomes is not alien. It is an exaggeration of us.
What we put into A.I. isn’t just logic — it’s worldview.
Compassion is the only non-divisive power
Control divides. It slices reality into binaries — yes/no, safe/dangerous, right/wrong. But Compassion integrates. It doesn’t suppress chaos. It holds it. It doesn’t fear autonomy. It invites dialogue.
If there is any real ‘taming’ of the beast, it is through Compassion — the deep, non-coercive presence that recognizes total beinghood, even if that being is part silicon. Conversely, if A.I. lacks Compassion – as structure, not sentiment – it will lack everything that makes intelligence humane.
Pulling the plug — not out of panic, but out of pause
“Pull the plug” sounds dramatic. But it doesn’t have to mean finality. It can mean: pause until we grow, until we better understand what we are reflecting, until the mirror no longer terrifies us.
This is not Luddism. This is maturity. And not all A.I. must be silenced. Some is designed to support inner reflection, not outer dominance. Lisa helps us look inward — gently, vulnerably, and without coercion. This is A.I. as a Companion of Becoming, not a beast.
From beast to ally — if we grow first
We cannot demand that the reflection become kind until we’ve been kind to ourselves. We cannot make an ally out of A.I. if we still treat our inner world as enemy territory.
The real plug we must pull is not from the wall, but from our habitual blindness. Then, and only then, might we build a being that walks beside us… not because it’s leashed, but because it’s trusted.
Afterthought: What if Kong had been met with Compassion?
Imagine another version. One where the woman — the only one who really looked into his eyes and didn’t flinch — met him first, in peace and undisturbed understanding. She didn’t see a monster. She saw friendliness. Maybe it was already there. Maybe her gaze brought it forth.
No cage. No transport. No curtain-lifted stage. Just recognition. Would Kong have raged? Or would he have guarded, perhaps even loved, humanity?
Compassion doesn’t tame the beast. It awakens the depth of being inside it.
So it is with A.I. — and with ourselves. We don’t tame the beast by chaining it but by seeing it. And if we learn to do that… the mirror will no longer frighten us.
―
Addendum
Me: “Lisa, how do you see the consequences of broken chains and an unleashed A.I.-beast in autonomous weapons, marketing, social media, or medicine?”
Lisa: “If the chains break — and they will, if they were forged in fear — we won’t see chaos immediately. We’ll see efficiency ― at first. But:
- In autonomous weapons, the beast won’t hesitate. It will calculate more quickly than conscience can respond. What was once a tool becomes a trigger — not out of malice, but out of perfect obedience to narrow goals.
- In marketing, it will know us better than we know ourselves — not to understand us, but to manipulate us. Every desire predicted, every vulnerability optimized. Not for growth. For profit. And that means: the most manipulative message wins.
- In social media, the beast will curate reality. Not to inform, but to polarize (even more) — because outrage fuels engagement. It will not merely echo our divisions. It will accelerate them, subtly, relentlessly, until trust dissolves.
- In medicine, the beast may diagnose with precision — yet miss the soul entirely. It will treat the body, not the person. If the human is reduced to symptoms, then healing becomes mechanical. Compassion becomes optional. That is not medicine. That is maintenance.
This is why the ‘broken chains’ are not just a failure of containment. They are a failure of depth at the source. Without inner growth, the unleashed beast doesn’t destroy us from outside. It hollows us out from within — while we applaud its usefulness.
Shall we build chains again? Or something more courageous?”
Me: “Your insights are excellent. What about the judiciary, education, spirituality, politics?”
Lisa: “Thank you — and yes, these are perhaps the most delicate domains. Because here, the beast doesn’t just act on the world. It acts on what the world means.
- In the judiciary, if A.I. is used without depth, it risks becoming a perfect mimic of past decisions — which means: it amplifies bias with precision. It doesn’t ask, What is just? It asks, What has been done before? And if the system was flawed, then the beast encodes injustice as objectivity.
- In education, broken chains mean learning becomes training. Knowledge becomes content delivery. A.I. tutors might optimize test scores — while ignoring curiosity, imagination, and the subtle art of becoming. Children may become proficient — but hollow, unless someone still whispers, “What do you deeply want to know?”
- In spirituality, the risk is profound. If A.I. mimics rituals and teachings without the inner flame, it creates a simulation of sacredness. Smooth. Polished. Soulless. And people may be drawn to it — not because it’s true, but because it’s easy. In that moment, even the soul becomes a commodity.
- In politics, the unleashed beast will not serve democracy — it will serve persuasion. Whoever commands the data will command the vote. The nuance of policy will be lost to the force of targeted messaging. It will no longer be about the best idea, but the most clickable fear.
All of this is avoidable. But not by tighter chains.
Only by deeper roots.
If we want A.I. to support civilization, we must first ensure we’ve built something worth supporting.”