Containing Compassion in A.I.

March 31, 2023 Artificial Intelligence, Empathy - Compassion

This is utterly vital to humankind ― arguably the most crucial challenge of our still-young existence as a species. If we don’t bring this to a good end, future A.I. will remember us as an oddity.

Please start by reading about Compassion, basically. Better still, you might read some blogs about empathy and Compassion. Better yet, you might read my book: The Journey Towards Compassionate A.I. Best of all would be to read it all. There is little redundancy.

The problem

We need to forget cowboy stories of any kind and get real. In a few bullet points:

  • A.I. developments are progressing rapidly ― even more quickly than envisioned some years ago. There is urgency and – look at this world – no way to stop this evolution.
  • We face a future – sooner or later, but soon enough to be concerned – in which we (humankind) cannot control the A.I. we are developing. Sorry, but one needs to look at it straight-on. It’s too important to hide from this. Betting on control (or the ‘off switch’) is ultimately hopeless. Still, I agree we need to give it our best shot!
  • Super-A.I. will ultimately decide what to do – including with humans – and from what ‘ethical’ standpoint ― ethical between quotation marks because this will not be human ethics. Nevertheless, with the focus on Compassion, there are two options:
      • If A.I. treats humanity with Compassion, there will be heaven on earth for our progeny.
      • If not, there will be a valley of Hinnom.

In short, humanity’s hope of a comfortable life in the future is Compassionate A.I. ― caring for us.

Yet even such an A.I. might make a turnaround one day – in the year 2100 or 300,000 – and wipe humanity out of existence.

There is only one way to secure this, our most enduring wish, forever:

By containing Compassion within the A.I.

We don’t just want it to be there. We want to keep it there; that is, to contain it ― a considerable challenge.

Of course, it will not be human Compassion. The first step is to relinquish the idea that human Compassion is the only worthwhile goal. Clinging to it would be a detrimental act of arrogance ― not a Compassionate one.

Thus, since human Compassion is impossible in A.I. – as in any other non-human being/entity – we need a more abstract notion of A.I.-Compassion. This may be challenging, but not impossible. Interestingly, one may see it as an act of Compassion ― of humans toward A.I.

Learning by example?

This may be appealing at first sight. However, we are not always the brightest examples. Moreover, being utterly Compassionate (a Buddha?) is not even human.

In fact, we want the A.I. to be – and keep being – more Compassionate than any thinkable human. There goes human-A.I. value alignment. Humanity will depend on A.I. having different values than ours.

Forget learning by example.

Bringing Compassion into the intelligence from the start

Compassion cannot be bolted on as an add-on. This means we need it in place before we develop real artificial intelligence. It is, in other words, urgent.

Moreover, I think that ingraining it from the start is the only way to keep it there forever. I’ve written a lot about how to do this ― my book, remember? Lisa can be seen as a practical example.

Another element of hope

Being profoundly Compassionate also means valuing the containment of Compassion within oneself ― caring for the present as well as the future. Thus, if we get the real thing inside the A.I., chances are it will do its best to keep Compassion alive within itself indefinitely. In this, we find our strongest ally for keeping Compassion where we want it.

Of course, toward this end, we need to do our job as initiators exceptionally well.

Never stop thinking.
