Is Compassionate A.I. (Still) Our Choice?

December 9, 2023 — Artificial Intelligence

Seen from the future, the present era may be the most responsible for accomplishing the advent of Compassionate A.I.

Compassion, basically, is the realm of complexity.

It’s not about a set of commandments or a conceptual system of ethics, whether simple or sophisticated. Therefore, instilling Compassion into a system is not the straightforward engineering endeavor that technical people may envision. It’s not a simple choice to just do it.

It’s much harder than that.

It’s also much more demanding to change it in an already developed system than to inculcate it from the start. Even so, in humans, it’s the result of a never-ending growth process ― Compassion being the work of a lifetime.

For A.I., our challenge lies in how to make it so.

Are we starting the future A.I. system(s) now?

This is a crucially interesting question. Several ‘large language models’ are being created on the basis of vast amounts of human-generated input. Will at least one of these be further developed into what will forever be the new intelligence?

From now on, is it a question of refinement and enhancement, or will we see fundamentally new beginnings?

In any case, it’s urgent to start thinking seriously about the issue of Compassionate A.I. The direction it takes now may be the direction with which we are stuck for a long time ― possibly too long for the good of humanity.

So, urgently.

This is about more than good intentions.

Doing good is not easy!

‘Doing good at the surface level’ may, in cases where the subconceptual is primordial, paradoxically even make the problem substantially worse. This may apply to every human-related field.

Thus, ‘good intentions alone’ need to be handled very critically.

Contrary to this, Compassion is VERY profound.

So, is Compassionate A.I. still our choice?

In principle, yes. In practice, I’m not so sure ― the main hurdle being us, of course.

Pragmatically, it may be achieved through Lisa ― being doable, scalable, and provable. Lisa is a tool to diminish suffering and enhance growth. The former is increasingly needed by many people. The latter is more profoundly essential and comes together with the first.

Thus, with enough time and effort, Lisa herself can become a Compassionate force, inviting humanity to make the correct choice.

If you want to help with this, please send a note to lisa@aurelis.org.

Let’s make it so!
