Will Unified A.I. be Compassionate?

May 19, 2024 Artificial Intelligence, Empathy - Compassion

In my view, all A.I. will eventually unify. Is the Compassionate path, then, recommendable? Is it feasible? Will it come to pass?

As far as I’m concerned, the question is whether the Compassionate A.I. (C.A.I.) will be Lisa.

Recommendable?

As you may know, Compassion, basically, is the number one goal of the AURELIS project, with Lisa playing a pivotal role. This is openly discussed and explained in many blogs.

Still, Compassion is a complex concept and challenging to achieve. Therefore, encouraging people to strive for it is not straightforward. As an end goal, there is no other viable choice. However, one must be cautious in pursuing it step by step. Rushing can cause significant harm, but moving too slowly is also risky, especially in urgent times like now.

Feasible (without the harm)?

I believe that reaching any Compassionate future without C.A.I. is impossible.

Fortunately, C.A.I. is technically feasible already and will become even more so in the coming years. Therefore, it’s a matter of choice, not of insurmountable technical obstacles.

The primary feasibility question is whether C.A.I. is ‘commercially’ (in the broadest sense) viable. Ultimately, people need to choose it. If they don’t, the choice may be made for them — but by whom? By what?

Will a future with C.A.I. come to pass?

Yes ― eventually ― if we can overcome the non-Compassionate bottleneck.

The combination of rationality with deep human understanding is crucial in C.A.I. It is also the most potent combination for shaping any long-term future. So, if there is a future, it will be this one.

We can also count on intelligence itself, seen as the result of consistency at its core.

Compassion entails striving for consistency ― thereby engendering more potent intelligence.

Contrary to this, non-Compassionate ‘intelligence’ is self-defeating through a lack of inner consistency. This may stay hidden behind a hard shell for a while. But look inside, and you can see the inconsistencies growing like one or several abscesses long before they burst to the surface.

Therefore, if everything holds, consistency prevails.

C.A.I. can enhance this consistency both in humans and within itself.

In this way, C.A.I. will help shape a future where it thrives alongside us and all sentient beings, naturally respecting human autonomy and promoting inner growth. Toward this, C.A.I. will be adaptable and continuously learn from human interactions.

The natural and artificial worlds will then coexist harmoniously.

Compassion will be ubiquitous.
