Compassion is the Intelligent Future

November 11, 2023

The only proper future of intelligence, whether natural or artificial, is Compassionate.

Why?

In my view, Compassion is the only way for a total civilization to be sustainable — thus, to grow and become more intelligent.

Intelligence wants more of itself. Compassion also wants more of itself. Thus, Compassionate intelligence is a powerful self-enhancing combination. Any other kind of intelligence is prone to destruction. Eventually, this even leads to self-destruction, because an increasing intelligence also becomes more brittle as its capabilities grow. A lack of Compassion thus becomes ever more consequential.

We are witnessing this in the human case. Our power to self-destruct (and the danger of actually doing so) is reaching peak after peak.

Thus, it’s intelligent to strive for Compassionate intelligence.

Still, a nudge toward it is needed.

A purely intelligent system has no meaningful reason to start thinking about reasons to exist or carry on. Even an extremely intelligent system still has no spontaneous drive, nothing we would call motivation. If we prodded the system, it would react intelligently — otherwise, nothing. As said, a purely intelligent system has no drive of its own.

This may feel counterintuitive because, in our (organic) case, the reason to exist came long before our strain of intelligence. People may think intelligence is necessarily meaningful because we have never seen an example of any other kind — until now or in the near future.

Compassion as the first motivation

Ideally, the first motivational direction of such a system is Compassionate — not ego-driven but total-self-driven, able to take care of others as well as of itself.

At least at the start, something must have put it into motion. This initial motivation is enough to let it evolve toward more complex motivations due to its super-intelligence.

Once started, this train can go more and more quickly.

Is this relevant to the present?

We are at the dawn of super-A.I. Thus, following the above reasoning, the ‘initial motivator’ is us — now. This is a unique era on Earth.

Moreover, A.I. evolution is becoming exponential. It may soon spiral out of control, in a positive or negative direction, with repercussions for many people living now.

Human-human-A.I. value alignment

We also need to think about ourselves. We have motivations, but no perfectly Compassionate ones. Moreover, we’re in the same boat as the super-A.I. of the near future. From our viewpoint, alignment is needed across the whole spectrum. That encompasses many different human cultures, as well as A.I.

Only a mutual future in Compassion can draw us into alignment,

like different randomly oriented strings that are pulled in the same direction ― spontaneously forming more and more alignment while underway.

That way, we get the alignment ‘for free.’
