Can A.I. be Empathic?

June 10, 2018 · Artificial Intelligence, Empathy - Compassion

Not readily in the completely human sense, of course. But can A.I. show sufficient ‘empathy’ for a human to recognize it as such, relate to it, and even ‘grow’ through it?

[Please read: “Landscape of Empathy”]

Humans tend to ‘recognize’ human features in many things.

We look at the clouds and – with a little imagination – we see human faces everywhere. A child plays with a doll and the doll responds. For a while, there is a genuine relationship, even if the doll does not yet have futuristic A.I. capabilities. An animated movie shows a talking teacup and immediately the audience – children and adults alike – responds. You may even imagine one now and feel something. Primitive tribes have always looked at nature and seen the work of a god, or the gods themselves. Etc.

People readily personify. And that’s OK. It’s part of our humanity, our being human.

It should not be manipulated, but respectfully handled.

Empathic, who?

Until quite recently, researchers thought that animals could not show signs of ‘real empathy.’ Many findings now show otherwise. Even rodents display behavior toward each other – not only between mothers and offspring – that can be deemed ‘empathic.’

Empathy is not a purely human characteristic, just as intelligence, consciousness, mortality, etc. are not.

That does not make humans less worthy. It may make us less unique on our little big planet. A sign of ‘real empathy’ is to appreciate that and be happy about it.

The title is mainly a question about empathy itself

Is a very complex system necessary before we can speak of empathy? A doll or a pixelated graphic in a movie is not enough. A person may see empathy there, but it is his own.

Which raises the question: when we see empathy in other humans, to what degree is it our own projection? Certainly to some degree. Moreover, the situation quickly becomes more complex. Part of one’s empathy – say, person A’s – consists in bringing someone else – person B – more into contact with himself, that is, with person B. Person A may thereby enjoy what he sees happening in person B. Person B feels grateful for the feeling within himself as well as for person A’s happiness. A self-perpetuating pattern brings person A and person B closer to each other.

Warm feelings.

And also a real possibility to grow from the experience, to grow as a person (both A and B), to feel less lonely, to feel happy in this life, to feel more like a unity, a whole. To ‘heal.’

This may be the main strength of psychotherapy. And of friendship. And of love.

A.I.?

We increasingly have the opportunity to build complexity into A.I. – real, deeply pattern-recognizing complexity within a system that can present itself with a recognizable consistency at more than the most superficial level. [see: “A.I. Is in the Patterns”]

So, don’t look for ‘human empathy.’ On the other hand, it’s not like a simple doll or a computer graphic either. It’s something in-between. Thus, whether or not to call it ‘empathy’ is, at this moment, mainly a semantic question.

More interesting for now: does this permit us to build a system that can genuinely help people become ‘better persons’ through human – A.I. interaction?

I am convinced of this.
