Is Compassionate A.I. (Still) Our Choice?

December 9, 2023 | Artificial Intelligence

Seen from the future, the present era may prove the one most responsible for the advent of Compassionate A.I.

Compassion, basically, is the realm of complexity.

It’s not about a set of commandments or a conceptual system of ethics, however simple or elaborate. Therefore, instilling Compassion into a system is not the straightforward engineering endeavor that technical people may envision. It’s not a simple choice to just do it.

It’s much harder than that.

It’s also much more demanding to instill it in an already developed system than to inculcate it from the start. Even then, in humans, it’s the result of a never-ending growth process ― Compassion being the work of a lifetime.

For A.I., our challenge lies in how to make it so.

Are we starting the future A.I. system(s) now?

This is a crucially interesting question. Several ‘large language models’ are being created on the basis of vast amounts of human-generated input. Will at least one of these be further developed into what will forever be the new intelligence?

From now on, is it a question of refinement and enhancement, or will we see fundamentally new beginnings?

In any case, it’s urgent to start thinking seriously about the issue of Compassionate A.I. The direction it takes now may be the direction with which we are stuck for a long time ― possibly too long for the good of humanity.

So, urgently.

This is about more than good intentions.

Doing good is not easy!

‘Doing good at the surface level’ may, in cases where the subconceptual is primordial, paradoxically even make the problem substantially worse. This may apply to every human-related field.

Therefore, ‘good intentions alone’ need to be viewed very critically.

Contrary to this, Compassion is VERY profound.

So, is Compassionate A.I. still our choice?

In principle, yes. In practice, I’m not so sure ― the main hurdle being us, of course.

Pragmatically, it may be achieved through Lisa ― being doable, scalable, and provable. Lisa is a tool to diminish suffering and enhance growth. The former is increasingly needed by many people. The latter is more profoundly essential and comes together with the first.

Thus, with enough time and effort, Lisa herself can become a Compassionate force, inviting humanity to make the correct choice.

If you want to help with this, please send a note to lisa@aurelis.org.

Let’s make it so!
