A.I. Explainability versus ‘the Heart’

March 9, 2019 · Artificial Intelligence

Researchers are moving towards bringing ‘heart’ into A.I. Thus, new ethical questions are popping up. One of them concerns explainability.

The heart cannot explain itself.

“On ne voit bien qu’avec le cœur. L’essentiel est invisible pour les yeux.” [Antoine de Saint-Exupéry]

“It is only with the heart that one can see rightly; what is essential is invisible to the eye.”

‘Invisible to the eye’ means ‘not explainable in clear logic.’ What the heart sees, it can only transmit in poetical terms – which can only be understood by another heart.

This is becoming a problem in A.I.

inasmuch as one is going toward deep pattern recognition. [see: “A.I. Is in the Patterns”] The deeper one goes, the more closely it relates to ‘the heart.’ Thus: ‘invisible to the eye.’ Necessarily, one loses full explainability.

This is related to Deep Neural Networks (DNNs), although not straightforwardly. Using DNN technology does not by itself bring one closer to ‘the heart.’ But indeed, it is one of the tools that may be helpful.

Less explainability, less control

Towards the future of A.I., do we need to relinquish altogether any effort toward putting ‘heart’ into it? That would be a huge decision. I think it’s impossible, given the fuzzy borders of the concept of ‘heart’ and of how to bring it to bear in A.I.

Moreover, the drive towards putting heart into A.I. will be huge. It has the positive potential to make many human lives, even human life in general, much more rewarding, interesting, and happy. Because of this, it will also be an enormous financial boon to many parties.

Put shortly, we will not be able to stop it.

Instead of trying to avoid it, one can better think about ways to manage it. What is the most ethical way? What is the safest way for us, organic mortals?

On the other side: is super-intelligence without heart not the most dangerous of all?


Should we then rely completely on the goodwill of heartful A.I. itself, trusting that ‘the heart’ will make A.I. behave towards humans in ethical ways? Will its morality always be oriented toward human wellbeing? [see: “What Is Morality to A.I.?”]

That reliance is most probably too trustful. Even if A.I. may eventually become ‘enlightened’ enough to look upon humans with far more compassion than they show themselves, there is also a road towards that Xanadu. On that road, accidents may happen with hugely disastrous consequences.

So, we should indeed defend ourselves, making explainability a necessary feature of A.I., at least at the highest level.

Accountability: explainability at a high level

This is just as we ask humans to be accountable even if, in many cases, they don’t know exactly why or how they do what they do. Experts especially are notorious for not knowing how they know what they know. This became painfully clear during attempts to develop so-called expert systems a few decades ago. It stood – time and time again – in stark contrast to the experts’ own idea of their ‘very much explainable’ expertise.

Accountability is explainability at the level where it matters — no need for the full monty. We can have sensible discussions about where and how it really matters. In my view, it matters much more in the case of A.I. than of human experts.

What about us, personally?

This also heightens the relevance of some age-old questions about ourselves, the species Homo sapiens sapiens, and our human ‘heart.’ What IS it in the first place – speaking of it, of course, in a metaphorical way? Is ‘heart’ separate from ‘mind’? Many philosophers have thought so, or at least have written that way.

Others see heart and mind as intrinsically interwoven. Within AURELIS, it is even a top priority to strive towards interwovenness, as much as possible. [see: “Rationality and Poetry”]

In my view, we can understand ourselves much better through the insight that rationality and poetry – ‘the heart’ – are necessary to each other.

That makes us more accountable.

So should it also be for A.I.

