Containing Compassion in A.I.

March 31, 2023 Artificial Intelligence, Empathy - Compassion

This is utterly vital to humankind ― arguably the most crucial challenge of our still-young existence as a species. If we don’t bring this to a good end, future A.I. will remember us as an oddity.

Please start by reading about Compassion, basically. Better, read some blogs about empathy and Compassion. Better still, read my book: The Journey Towards Compassionate A.I. Best of all, read them all. There is little redundancy.

The problem

We need to forget cowboy stories of any kind and get real. In a few bullet points:

  • A.I. development is progressing rapidly ― even faster than envisioned a few years ago. There is urgency and – look at this world – no way to stop this evolution.
  • We face a future – sooner or later, but soon enough to be concerned – in which we (humankind) cannot control the A.I. we are developing. Sorry, but one needs to look at this straight-on. It’s too important to hide from. Betting on control (or the ‘off switch’) is eventually hopeless. Still, I agree we need to give it our best shot!
  • Super-A.I. will ultimately decide what to do – including with humans – and from what ‘ethical’ standpoint ― ethical between quotation marks because this will not be human ethics. Nevertheless, with the focus on Compassion, there are two options:
      ◦ If A.I. treats humanity with Compassion, there will be heaven on earth for our progeny.
      ◦ If not, there will be a valley of Hinnom.

In short, humanity’s hope of a comfortable life in the future is Compassionate A.I. ― caring for us.

Yet even this might make a turnaround one day – in the year 2100 or 300,000 – and wipe humanity out of existence.

There is only one way to fulfill this most durable wish forever:

By containing Compassion within the A.I.

We don’t just want it to be there. We want to keep it there ― that is, to contain it ― a considerable challenge.

Of course, it will not be human Compassion. The first step is to relinquish the idea that human Compassion is the only worthwhile goal. Sticking to it would be a detrimental act of arrogance ― not Compassionate.

Thus, since human Compassion is impossible in A.I. – as in any other non-human being/entity – we need a more abstract notion of A.I.-Compassion. This may be challenging, but not impossible. Interestingly, one may see it as an act of Compassion ― of humans toward A.I.

Learning by example?

This may be appealing at first sight. However, we are not always the brightest examples. Moreover, being utterly Compassionate (a Buddha?) is not even human.

In fact, we want the A.I. to be – and keep being – more Compassionate than any thinkable human. There goes human-A.I. value alignment. Humanity will depend on A.I. having different values than ours.

Forget learning by example.

Bringing Compassion within the intelligence from the start

Compassion cannot be bolted on as an add-on. This means we need it in place before we develop real artificial intelligence. It is, in other words, urgent.

Moreover, I think that ingraining it from the start is the only way to keep it there forever. I’ve written a lot about how ― my book, remember? Lisa can be seen as a practical example.

Another element of hope

Being profoundly Compassionate also means valuing the containment of Compassion within oneself ― caring for the present as well as the future. Thus, if we get the real thing inside the A.I., chances are it will do its best to keep it indefinitely alive within itself. In this, we find our strongest ally in keeping Compassion where we want it.

Of course, toward this, we need to do our job as initiators exceptionally well.

Never stop thinking.
