Is Compassionate A.I. (Still) Our Choice?

December 9, 2023 Artificial Intelligence

Seen from the future, the present era may be the most responsible for accomplishing the advent of Compassionate A.I.

Compassion, basically, is the realm of complexity.

It’s not about commandments or a conceptual system of ethics, simple or otherwise. Therefore, instilling Compassion into a system is not the straightforward engineering endeavor that technical people may envision. It’s not a simple choice to just do it.

It’s much harder than that.

It’s also much more demanding to instill it in an already developed system than to inculcate it from the start. Even in humans, it’s the result of a never-ending growth process ― Compassion being the work of a lifetime.

For A.I., our challenge lies in how to make it so.

Are we starting the future A.I. system(s) now?

This is a crucially interesting question. Several ‘large language models’ are being created on the basis of vast amounts of human-generated input. Will at least one of these be further developed into what will forever be the new intelligence?

From now on, is it a question of refinement and enhancement, or will we see fundamentally new beginnings?

In any case, it’s urgent to start thinking seriously about the issue of Compassionate A.I. The direction it takes now may be the direction we are stuck with for a long time ― possibly too long for the good of humanity.

So, urgently.

This is about more than good intentions.

Doing good is not easy!

‘Doing good at the surface level’ may, paradoxically, even substantially worsen the problem in cases where the subconceptual is primordial. This may apply to every human-related field.

Thus, ‘mere good intentions’ need to be handled very critically.

Contrary to this, Compassion is VERY profound.

So, is Compassionate A.I. still our choice?

In principle, yes. In practice, I’m not so sure ― the main hurdle being us, of course.

Pragmatically, it may be achieved through Lisa ― being doable, scalable, and provable. Lisa is a tool to diminish suffering and enhance growth. The former is increasingly needed by many people. The latter is more profoundly essential and comes together with the first.

Thus, with enough time and effort, Lisa herself can become a Compassionate force, inviting humanity to make the correct choice.

If you want to help with this, please send a note to lisa@aurelis.org.

Let’s make it so!


