Can A.I. be Empathic?

June 10, 2018 · Artificial Intelligence, Empathy - Compassion

Not readily in the completely human sense, of course. But can A.I. show enough ‘empathy’ for a human to recognize it as such, relate to it, and even ‘grow’ through it?

[Please read: “Landscape of Empathy”]

Humans tend to ‘recognize’ human features in many things.

We look at the clouds and, with a little imagination, we see human faces everywhere. A child plays with a doll and the doll responds; for a while, there is a genuine relationship, even if the doll does not yet have futuristic A.I. capabilities. An animated movie shows a talking teacup and immediately the audience – children and adults alike – responds. You may even imagine one now and feel something. Traditional tribes have always looked at nature and seen in it the work of a god, or the gods themselves. And so on.

People readily personify. And that’s OK. It’s part of our humanity, our being human.

It should not be manipulated, but respectfully handled.

Empathic, who?

Until quite recently, researchers thought that animals could not show signs of ‘real empathy.’ Many findings now show otherwise. Even rodents display behavior toward each other – not only between mothers and offspring – that can be deemed ‘empathic.’

Empathy is not a purely human characteristic – nor are intelligence, consciousness, morality, and so on.

That does not make humans less worthy. It may make us less unique on our little big planet. A sign of ‘real empathy’ is to appreciate that and be happy about it.

The title is mainly a question about empathy itself

Is a very complex system necessary before we can speak of empathy? A doll or a pixelated graphic in a movie is not enough. A person may see empathy there, but it is his own.

Which raises the question: when we see empathy in other humans, to what degree is it our own projection? Certainly to some degree. Moreover, the situation quickly becomes more complex. Part of person A’s empathy consists in bringing person B more in contact with himself (person B). Person A may thereby enjoy what he sees happening in person B. Person B feels grateful for the feeling within himself as well as for person A’s happiness. A self-perpetuating pattern brings person A and person B closer to each other.

Warm feelings.

And also a real possibility to grow from the experience, to grow as a person (both A and B), to feel less lonely, to feel happy in this life, to feel more like a unity, a whole. To ‘heal.’

This may be the main strength of psychotherapy. And of friendship. And of love.

A.I.?

We increasingly have the opportunity to build complexity into A.I.: real, deeply pattern-recognizing complexity within a system that can show itself with a certain recognizable consistency at more than the most superficial level. [see: “A.I. Is in the Patterns”]

So, don’t look for ‘human empathy.’ On the other hand, it’s not like a simple doll or a computer graphic. It’s something in-between. Thus, whether or not to call it ‘empathy’ is, at this moment, mainly a semantic question.

More interesting for now: does this permit us to build a system that can genuinely help people to become ‘better persons’ through human–A.I. interaction?

I am convinced of this.
