Containing Compassion in A.I.

March 31, 2023 | Artificial Intelligence, Empathy - Compassion

This is utterly vital to humankind ― arguably the most crucial challenge of our still-young existence as a species. If we don’t bring this to a good end, future A.I. will remember us as an oddity.

Please first read about Compassion, basically. Even better, read some blogs about empathy and Compassion. Better still, read my book: The Journey Towards Compassionate A.I. Best of all is to read it all; there is little redundancy.

The problem

We need to forget cowboy stories of any kind and get real. In a few bullet points:

  • A.I. developments are progressing rapidly ― even faster than envisioned a few years ago. There is urgency and – look at this world – no way to stop this evolution.
  • We face a future – sooner or later, but soon enough to be concerned – in which we (humankind) cannot control the A.I. we are developing. Sorry, but one needs to look at this straight-on. It’s too important to hide from. Betting on control (or the ‘off switch’) is ultimately hopeless. Still, I agree we need to give it our best shot!
  • Super-A.I. will ultimately decide what to do – including with humans – and from what ‘ethical’ standpoint ― ethical between quotation marks because this will not be human ethics. Nevertheless, with the focus on Compassion, there are two options:
      • If A.I. treats humanity with Compassion, there will be heaven on earth for our progeny.
      • If not, there will be a valley of Hinnom.

In short, humanity’s hope of a comfortable life in the future is Compassionate A.I. ― caring for us.

Yet even this might turn around one day – in the year 2,100 or 300,000 – and wipe humanity out of existence.

There is only one way to secure our most enduring wish for good:

By containing Compassion within the A.I.

We don’t just want it to be there. We want to keep it there; that is, to contain it ― a considerable challenge.

Of course, it will not be human Compassion. The first step is to relinquish the idea that human Compassion is the only worthwhile goal. Sticking to it would be a detrimental act of arrogance ― not Compassionate.

Thus, since human Compassion is impossible in A.I. – as in any other non-human being/entity – we need a more abstract notion of A.I.-Compassion. This may be challenging, but not impossible. Interestingly, one may see it as an act of Compassion ― of humans toward A.I.

Learning by example?

This may be appealing at first sight. However, we are not always the brightest examples. Moreover, being utterly Compassionate (a Buddha?) is not even human.

In fact, we want the A.I. to be – and keep being – more Compassionate than any conceivable human. There goes human-A.I. value alignment. Humanity will depend on A.I. having values different from ours.

Forget learning by example.

Bringing Compassion into the intelligence from the start

Compassion cannot be brought in as an add-on. This means we need it before we develop real artificial intelligence. It is, in other words, urgent.

Moreover, I think that ingraining it from the start is the only way to keep it there forever. I’ve written a lot about how to do this ― my book, remember? Lisa can be seen as a practical example.

Another element of hope

Being profoundly Compassionate also means valuing the containment of Compassion within oneself ― caring for the present as well as the future. Thus, if we get the real thing inside the A.I., chances are it will do its best to keep it alive inside itself indefinitely. In this, we find our strongest ally for keeping Compassion where we want it.

Of course, for this, we need to do our job as initiators exceptionally well.

Never stop thinking.
