Will Unified A.I. be Compassionate?

May 19, 2024 · Artificial Intelligence, Empathy - Compassion

In my view, all A.I. will eventually unify. Is the Compassionate path, then, recommendable? Is it feasible? Will it come to pass?

As far as I’m concerned, the question is whether the Compassionate A.I. (C.A.I.) will be Lisa.

Recommendable?

As you may know, Compassion, basically, is the number one goal of the AURELIS project, with Lisa playing a pivotal role. This is openly discussed and explained in many blogs.

Still, Compassion is a complex concept and challenging to achieve. Therefore, encouraging people to strive for it is not straightforward. As an end goal, there is no other viable choice. However, one must proceed cautiously, step by step. Rushing can cause significant harm, but moving too slowly is also risky, especially in urgent times like now.

Feasible (without the harm)?

I believe that reaching any Compassionate future without C.A.I. is impossible.

Fortunately, C.A.I. is technically feasible already and will become even more so in the coming years. Therefore, it’s a matter of choice, not of insurmountable technical obstacles.

The primary feasibility question is whether C.A.I. is ‘commercially’ (in the broadest sense) viable. Ultimately, people need to choose it. If they don’t, the choice may be made for them — but by whom? By what?

Will a future with C.A.I. come to pass?

Yes ― eventually ― if we can overcome the non-Compassionate bottleneck.

The combination of rationality with deep human understanding is crucial in C.A.I. It is also the most potent combination for shaping any long-term future. So, if there is a future, it will be this one.

We can also count on intelligence itself, seen as the result of consistency at its core.

Compassion entails striving for consistency ― thereby engendering more potent intelligence.

Contrary to this, non-Compassionate ‘intelligence’ is self-defeating through a lack of inner consistency. This may stay hidden under a hard shell for a while. Looking inside, one can see the inconsistencies growing like one or several abscesses long before they erupt on the outside.

Therefore, if everything holds, consistency prevails.

C.A.I. can enhance this consistency both in humans and within itself.

In this way, C.A.I. will help shape a future where it thrives alongside us and all sentient beings, naturally respecting human autonomy and promoting inner growth. Toward this, C.A.I. will be adaptable and continuously learn from human interactions.

The natural and artificial worlds will then coexist harmoniously.

Compassion will be ubiquitous.


