Is Compassionate A.I. (Still) Our Choice?

December 9, 2023 · Artificial Intelligence

Seen from the future, the present era may well be regarded as the one most responsible for the advent of Compassionate A.I.

Compassion, basically, is the realm of complexity.

It’s not about a set of commandments or a conceptual system of ethics, however simple or elaborate. Therefore, instilling Compassion into a system is not the straightforward engineering endeavor that technical people may envision. It’s not a matter of simply choosing to do it.

It’s much harder than that.

It’s also much more demanding to change an already developed system than to inculcate Compassion from the start. Even in humans, it’s the result of a never-ending growth process ― Compassion being the work of a lifetime.

For A.I., our challenge lies in how to make it so.

Are we starting the future A.I. system(s) now?

This is a crucially interesting question. Several ‘large language models’ are being created on the basis of vast amounts of human-generated input. Will at least one of these be further developed into what will forever be the new intelligence?

From now on, is it a question of refinement and enhancement, or will we see fundamentally new beginnings?

In any case, it’s urgent to start thinking seriously about the issue of Compassionate A.I. The direction it takes now may be the direction with which we are stuck for a long time ― possibly too long for the good of humanity.

So, urgently.

This is about more than good intentions.

Doing good is not easy!

Paradoxically, ‘doing good at the surface level’ may, in cases where the subconceptual is primordial, even substantially worsen the problem. This may hold in every human-related field.

Thus, ‘good intentions alone’ must be regarded very critically.

Contrary to this, Compassion is VERY profound.

So, is Compassionate A.I. still our choice?

In principle, yes. In practice, I’m not so sure ― the main hurdle being us, of course.

Pragmatically, it may be achieved through Lisa ― being doable, scalable, and provable. Lisa is a tool to diminish suffering and enhance growth. The former is increasingly needed by many people. The latter is more profoundly essential and comes together with the first.

Thus, with enough time and effort, Lisa herself can become a Compassionate force, inviting humanity to make the correct choice.

If you want to help with this, please send a note to lisa@aurelis.org.

Let’s make it so!
