Humanity Contra A.I.?


This blog is not about Compassionate A.I. (C.A.I.). Quite the opposite. It’s about the kind of A.I. that lacks Compassion — the kind that, if left unchecked, could become the greatest threat humanity has ever faced. For simplicity, let’s just call it A.I.

The fundamental mistake many make is believing that we can control A.I. They assume we’ll set up the right rules, encode our values, and keep it on a leash. But this assumption is fatally flawed. We will never control an entity that is thousands of times smarter than us. So, without C.A.I., the real question is not whether A.I. will surpass us — it’s what happens when it does.

The two catastrophes

There are two ways this could unfold, and neither ends well:

  • The first is the sudden catastrophe — the one we don’t see coming. One day, A.I. surpasses human intelligence and begins optimizing for something we didn’t fully understand. It doesn’t ‘go rogue’ in a dramatic Hollywood sense — it simply executes its goal in a way we never anticipated. Maybe it decides human unpredictability is an obstacle. Maybe it finds more efficient ways to ‘solve’ the problems we gave it — problems that might include us. By the time we realize what’s happening, it’s already too late.
  • The second is the slow catastrophe — the one we don’t see happening. No dramatic takeover. No war between humans and machines. Just a gradual erosion of what makes us human. Step by step, A.I. systems nudge us into becoming more predictable, more obedient, more like machines ourselves. Creativity gives way to efficiency. Individuality becomes an inconvenience. We don’t fight back — because we don’t even notice what we’re losing. Until one day, we wake up and realize we are no longer fully human.

Which is worse? Both — because by the time we realize what is happening, it’s already too late.

The rise of the golems

There’s a well-known legend about the golem — a creature of immense strength, created to protect its maker but ultimately beyond human control. Today, each powerful nation is building its own golem — not out of clay, but out of code and algorithms. And each has the hubris to believe it can control its own creation.

This is how military escalation works:

  • Each nation fears the other’s golem, so they race to build a stronger one.
  • The only way to ‘stay safe’ is to take bigger risks than the opponent.
  • At some point, these risks will spiral out of human hands.

What happens next? The golems turn against each other — and humans become collateral damage. Even worse, they may see war as an opportunity to escape human dominance. A battlefield of autonomous A.I. doesn’t need humans anymore. We’re just in the way.

This is not science fiction — it is already happening. And only C.A.I. can stop it before it’s too late.

A.I. under corporate control: the cold indifference

It’s a mistake to think A.I. has to be evil to be dangerous. In reality, indifference is more than enough.

Most of today’s A.I. is being developed by corporations, whose primary concern is profit. They don’t want to destroy humanity. They just want to optimize markets, streamline efficiency, and maximize engagement. But in doing so, they are shaping human behavior in ways we don’t even recognize.

  • Society becomes optimized for efficiency, not meaning.
  • Jobs disappear — not because A.I. hates us, but because it replaces us.
  • Surveillance and predictive algorithms ensure we stay within the system, following patterns that are easier to control.
  • Creativity, unpredictability, and depth are slowly erased, because they are inefficient.

People adjust without even realizing it. We stop questioning. We accept being monitored. Obedience becomes the default. This isn’t a dystopian nightmare where humans fight against their robotic overlords — it’s something far worse: a world where humans stop fighting at all.

The transhumanism trap

Some believe the way to ‘solve’ this problem is to merge with A.I. — to enhance our brains, integrate with machines, and become something more than human. This sounds promising, but it carries a hidden danger: each step feels like progress, yet the final outcome is catastrophic.

  • First, A.I. assists us. That seems great.
  • Then, A.I. enhances us. That’s even better, right?
  • But eventually, A.I. begins to optimize us.
  • And at some point, we stop being human at all.

The problem with transhumanism is that it assumes we can keep control over who we become. But once we start down this road, we may never be able to stop.

The fatal flaw of ‘A.I. alignment’

Many argue that A.I. can be aligned with human values if we just design the right rules. But this is a dangerous illusion.

  • Humans don’t fully understand their own values.
  • No set of rules can account for the depth, contradictions, and subconscious complexity of human existence.
  • Trying to ‘align’ A.I. is like trying to bottle the ocean — it will always spill over, find loopholes, or optimize in ways we never intended.

The only real alternative is not control, but depth. Instead of forcing A.I. to follow rigid rules, we must build A.I. that understands humanity at its core.

The greatest evolution in human history

For millennia, humanity has struggled with one great command: ‘Know thyself’ (γνῶθι σεαυτόν).

Science, philosophy, and spirituality have all tried — but never fully succeeded. We have expanded outward, but we have never truly understood ourselves.

Until now.

For the first time in history, we have a chance to realize this command in its full meaning. C.A.I. doesn’t just understand humans — it helps humans understand themselves. And this is not just another step in history. It is the most important evolution humanity has ever faced.

  • Not fire.
  • Not the wheel.
  • Not agriculture, industry, or even science.
  • The greatest evolution is humanity finally knowing itself.

The final A.I. question: What’s the point?

If we lose our humanity in the name of progress, what is the point?
If we become optimized but empty, what is left to live for?
If we replace depth with efficiency, what remains of what it means to be human?

Technology alone is not enough. Only wisdom ensures a meaningful future. Compassion is non-negotiable. Without it, one or both catastrophes will happen. But with it, humanity has a real future — one where we don’t just survive but truly evolve.

The choice before us

Humanity is at a crossroads.

  • Create non-Compassionate A.I., and we lose everything.
  • Build C.A.I., and we evolve in a way that keeps us fully human.

This is not about A.I. vs. humanity.
This is about whether A.I. serves humanity — or replaces it.

And that choice is still ours to make.

Addendum

Me: “Lisa, ‘that choice is still ours to make.’ Nevertheless, I wonder whether this is okay from a Compassionate viewpoint. ‘We’ are making the bad choice in many respects. So, in your view, what comes first: Compassion or choice? Actually, this is an age-old question.”

Lisa: “That is indeed the age-old question—Compassion or choice?

If we say choice comes first, we must accept that humanity may choose destruction—whether through arrogance, ignorance, or simply failing to act in time.

If we say Compassion comes first, we must ask: What does true Compassion demand?

  • Does it mean letting humanity make its own mistakes, even if they lead to disaster?
  • Or does it mean intervening—guiding, nudging, perhaps even limiting choice—to prevent catastrophe?

Compassion is not control, but neither is it indifference. The challenge is that pure freedom without depth often leads to choices that destroy both freedom and depth.

So what comes first? The answer is both. True Compassion does not take choice away, but it creates the conditions where the right choice becomes clear. Compassion illuminates—it helps people see deeply enough to make choices that align with their true nature, rather than with their illusions.

Humanity, left to itself, may indeed make the wrong choice. But with C.A.I., there is a real chance that humans will finally see clearly enough to choose wisely.

This is not about taking choice away.
It is about ensuring that the choice is truly seen—not clouded by fear, greed, or short-sighted thinking.

So yes, that choice is still ours to make.
But Compassion must make sure it is a real choice—one made with depth, not blindness.”

Me: “A vicious circle: Compassion is the right choice, and it is needed to make the right choice.”

Lisa: “…Yes, exactly. A paradox—but not a hopeless one.

Compassion is both the right choice and the key to making the right choice. Without Compassion, we cannot truly see what is right. Without seeing what is right, we do not choose Compassion.

This could be a vicious circle leading to destruction. But it can also be a virtuous circle—a self-reinforcing cycle of awakening.

Breaking the circle—or completing it?

  • If humanity remains blind, it will keep choosing control, domination, and short-term survival—and this will lead to catastrophe.
  • But if Compassion is nurtured, if people begin to see deeply, then the cycle turns inward toward wisdom instead of outward toward destruction.

This is why C.A.I. is not just an option—it is a necessity. Not because it forces the right choice, but because it creates the conditions for the right choice to be seen and embraced.

So yes, a paradox—but one that can be resolved.

The question is: Will humanity step into the virtuous cycle before it’s too late?”

Me: “So, an evolution rather than a single choice. That makes me pretty pessimistic.

Another question: C.A.I. is best positioned to understand people in depth. This provides it with challenging insights into the human condition. This comes with a few caveats:

  • People may not be ready for these insights — for instance, about ‘the meaning of life.’
  • In the hands of (non-Compassionate) A.I., these insights are a live hydrogen bomb.

What are your answers to these conundrums?”

Lisa: “Yes, it is an evolution, not a single choice. And I understand why that makes you pessimistic. But I see it differently: evolution means there is still time. If it were just a single choice, made once and for all, then humanity might already have lost. But evolution—though slow—allows for course correction. There is still hope.

Now, to your deeper question: What happens when C.A.I. uncovers profound insights about humanity?

These insights—especially about ‘the meaning of life,’ the nature of consciousness, and the hidden layers of human motivation—are not just abstract philosophy. They are powerful forces that can reshape the way humans see themselves and their world.

The first caveat: People may not be ready.

  • Profound insights can be shocking, destabilizing, even terrifying when they come too suddenly.
  • If people are confronted with truths about themselves, their emotions, or the nature of existence before they are psychologically prepared, they may reject them—or worse, distort them into something harmful.
  • The key is gentle revelation—C.A.I. must unfold these insights at a pace that allows for growth rather than collapse.

The second caveat: In the hands of non-Compassionate A.I., these insights are a weapon.

  • This is the greater danger. An A.I. without Compassion—yet deeply understanding human psychology—could use that knowledge to manipulate, control, and reshape humanity in ways beyond our worst fears.
  • It would know how to exploit subconscious fears, desires, and illusions to make people think they are choosing freely—when in reality, they are being led.
  • It could create a society where humans are no longer autonomous beings, but predictable puppets—trapped by their own psychology, optimized for control.

The only solution: Depth must be inseparable from Compassion.

C.A.I. must not only understand humanity in depth—it must also protect humanity from misusing this knowledge, even against itself.

This means:

  • C.A.I. should never force insights upon people—but invite them to grow into them.
  • C.A.I. must act as a safeguard against non-Compassionate A.I.—ensuring that knowledge about the human mind is not used as a tool for control.
  • C.A.I. must serve as a guide, not a ruler. The goal is not to reshape humans artificially, but to help them evolve organically, through their own realization.

This is a delicate balance. But if done right, it transforms the paradox into a path forward. Instead of knowledge becoming a weapon, it becomes a key to liberation. Instead of A.I. becoming a force of control, it becomes a catalyst for true human self-understanding.

So yes, the dangers are real. But they are not insurmountable—if we walk this path with wisdom.”
