Is Compassionate A.I. (Still) Our Choice?

December 9, 2023 · Artificial Intelligence

Seen from the future, the present era may prove the most decisive for the advent of Compassionate A.I.

Compassion, basically, is the realm of complexity.

It’s not about a set of commandments or a conceptual system of ethics, however simple or sophisticated. Therefore, instilling Compassion into a system is not the straightforward engineering endeavor that technical people may envision. It’s not a simple choice to just do it.

It’s much harder than that.

It’s also much more demanding to instill it in an already developed system than to inculcate it from the start. Even in humans, it’s the result of a never-ending growth process ― Compassion being the work of a lifetime.

For A.I., our challenge lies in how to make it so.

Are we starting the future A.I. system(s) now?

This is a crucially interesting question. Several ‘large language models’ are being created on the basis of vast amounts of human-generated input. Will at least one of these be further developed into what will forever be the new intelligence?

From now on, is it a question of refinement and enhancement, or will we see fundamentally new beginnings?

In any case, it’s urgent to start thinking seriously about the issue of Compassionate A.I. The direction it takes now may be the direction with which we are stuck for a long time ― possibly too long for the good of humanity.

So, urgently.

This is about more than good intentions.

Doing good is not easy!

Paradoxically, in cases where the subconceptual is primordial, ‘doing good at the surface level’ may even substantially worsen the problem. This may apply to every human-related field.

Therefore, ‘good intentions alone’ must be regarded very critically.

Contrary to this, Compassion is VERY profound.

So, is Compassionate A.I. still our choice?

In principle, yes. In practice, I’m not so sure ― the main hurdle being us, of course.

Pragmatically, it may be achieved through Lisa ― being doable, scalable, and provable. Lisa is a tool to diminish suffering and enhance growth. The former is increasingly needed by many people. The latter is more profoundly essential and comes together with the first.

Thus, with enough time and effort, Lisa herself can become a Compassionate force, inviting humanity to make the correct choice.

If you want to help with this, please send a note to lisa@aurelis.org.

Let’s make it so!
