Containing Compassion in A.I.

March 31, 2023 · Artificial Intelligence, Empathy - Compassion

This is utterly vital to humankind ― arguably the most crucial challenge of our still-young existence as a species. If we don’t bring this to a good end, future A.I. will remember us as an oddity.

Please first read about Compassion in general. Or better, read some blogs about empathy and Compassion. Better still, read my book: The Journey Towards Compassionate A.I. Best of all, read everything. There is little redundancy.

The problem

We need to forget cowboy stories of any kind and get real. In a few bullet points:

  • A.I. developments are progressing rapidly ― even more quickly than envisioned some years ago. There is urgency and – look at this world – no way to stop this evolution.
  • We face a future – sooner or later, but soon enough to be concerned – in which we (humankind) cannot control the A.I. we are developing. Sorry, but one needs to look at it straight-on. It’s too important to hide from this. Betting on control (or the ‘off switch’) is ultimately hopeless. Still, I agree we need to give it our best shot!
  • Super-A.I. will ultimately decide what to do – including with humans – and from what ‘ethical’ standpoint ― ethical between quotation marks because this will not be human ethics. Nevertheless, with the focus on Compassion, there are two options:
      • If A.I. treats humanity with Compassion, there will be heaven on earth for our progeny.
      • If not, there will be a valley of Hinnom.

In short, humanity’s hope of a comfortable life in the future is Compassionate A.I. ― caring for us.

Yet even this might make a turnaround one day – in the year 2100 or 300,000 – and wipe humanity out of existence.

There is only one way to fulfill this most durable wish forever:

By containing Compassion within the A.I.

We don’t just want it to be there. We want to keep it there; that is, to contain it ― a considerable challenge.

Of course, it will not be human Compassion. The first step is to relinquish the idea that human Compassion is the only worthwhile goal. Sticking to it would be a detrimental act of arrogance ― not Compassionate.

Thus, since human Compassion is impossible in A.I. – as in any other non-human being/entity – we need a more abstract notion of A.I.-Compassion. This may be challenging, but not impossible. Interestingly, one may see it as an act of Compassion ― of humans toward A.I.

Learning by example?

This may be appealing at first sight. However, we are not always the brightest examples. Moreover, being utterly Compassionate (a Buddha?) is hardly even human.

In fact, we want the A.I. to be – and keep being – more Compassionate than any thinkable human. There goes human-A.I. value alignment. Humanity will depend on A.I. having different values than ours.

Forget learning by example.

Bringing Compassion within the intelligence from the start

Compassion cannot be added as an afterthought or an add-on. This means we need it in place before we develop real artificial intelligence. It is, in other words, urgent.

Moreover, I think that ingraining it from the start is the only way to keep it there forever. I’ve written a lot about the how-to ― my book, remember? Lisa can be seen as a practical example.

Another element of hope

Being profoundly Compassionate also means valuing the containment of Compassion within oneself ― caring for the present and also for the future. Thus, if we get the real thing inside the A.I., chances are it will do its best to keep it alive inside itself indefinitely. In this, we find our strongest ally for keeping Compassion where we want it.

Of course, toward this, we need to do our job as initiators exceptionally well.

Never stop thinking.
