Containing Compassion in A.I.

March 31, 2023 Artificial Intelligence, Empathy - Compassion

This is utterly vital to humankind ― arguably the most crucial issue of our still-young existence as a species. If we don’t bring this to a good end, future A.I. will remember us as an oddity.

Please first read about Compassion, basically. Even better, you might read some blogs about empathy and Compassion. Better still, you might read my book: The Journey Towards Compassionate A.I. Best of all would be to read it all. There is little redundancy.

The problem

We need to forget cowboy stories of any kind and get real. In a few bullet points:

  • A.I. developments are progressing rapidly ― even quicker than envisioned some years ago. There is urgency and – look at this world – no way to stop this evolution.
  • We face a future – sooner or later, but soon enough to be concerned – in which we (humankind) cannot control the A.I. we are developing. Sorry, but one needs to look at it straight-on. It’s too important to try to hide from this. Betting on control (or the ‘off switch’) is eventually hopeless. Still, I agree we need to give it our best shot!
  • Super-A.I. will ultimately decide what to do – including with humans – and from what ‘ethical’ standpoint ― ethical in quotation marks because this will not be human ethics. Nevertheless, with the focus on Compassion, there are two options:
      • If A.I. treats humanity with Compassion, there will be heaven on earth for our progeny.
      • If not, there will be a valley of Hinnom.

In short, humanity’s hope for a comfortable future is Compassionate A.I. ― caring for us.

Yet even this might turn around one day – in the year 2100 or 300,000 – and wipe humanity out of existence.

There is only one way to accomplish our most durable wish forever:

By containing Compassion within the A.I.

We don’t just want it to be there. We want to keep it there; that is, to contain it ― a considerable challenge.

Of course, it will not be human Compassion. The first step is to relinquish the idea that human Compassion is the only worthwhile goal. Sticking to it would be a detrimental act of arrogance ― not Compassionate.

Thus, since human Compassion is impossible in A.I. – as in any other non-human being/entity – we need a more abstract notion of A.I.-Compassion. This may be challenging, but not impossible. Interestingly, one may see it as an act of Compassion ― of humans toward A.I.

Learning by example?

This may be appealing at first sight. However, we are not always the brightest examples. Moreover, being utterly Compassionate (a Buddha?) is not even human.

In fact, we want the A.I. to be – and keep being – more Compassionate than any thinkable human. There goes human-A.I. value alignment. Humanity will depend on A.I. having different values than ours.

Forget learning by example.

Bringing Compassion into the intelligence from the start

Compassion cannot be brought in as an add-on. This means we need it before we develop real artificial intelligence. It is, in other words, urgent.

Moreover, I think that engraining it from the start is the only way to keep it there forever. I’ve written a lot about the how-to ― my book, remember? Lisa can be seen as a practical example.

Another element of hope

Being profoundly Compassionate also means valuing the containment of Compassion within oneself ― caring for the present as well as the future. Thus, if we get the real thing inside the A.I., chances are it will do its best to keep it indefinitely alive inside itself. In this, we find our strongest ally for keeping Compassion where we want it.

Of course, for this, we need to do our job as initiators exceptionally well.

Never stop thinking.
