A.I. Explainability versus ‘the Heart’

March 9, 2019 | Artificial Intelligence

Researchers are moving towards bringing ‘heart’ into A.I. Thus, new ethical questions are popping up. One of them concerns explainability.

The heart cannot explain itself.

“On ne voit bien qu’avec le cœur. L’essentiel est invisible pour les yeux.” [Antoine de Saint-Exupéry]

“It is only with the heart that one can see rightly; what is essential is invisible to the eye.”

‘Invisible to the eye’ means ‘unexplainable in clear logic.’ What the heart sees, it can transmit only in poetic terms – which can be understood only by another heart.

This is becoming a problem in A.I. as it moves toward deep pattern recognition. [see: “A.I. Is in the Patterns”] The deeper the patterns, the closer one comes to ‘the heart’ – thus to what is ‘invisible to the eye.’ Necessarily, one loses full explainability.

This is related to Deep Neural Networks (DNN), although not straightforwardly. Using DNN technology does not by itself bring one closer to ‘the heart.’ But indeed, it is one of the tools that may be helpful.

Less explainable, less control

Looking toward the future of A.I., should we relinquish altogether any effort to put ‘heart’ into it? That would be a huge decision. I think it is impossible, given the very fuzzy borders of the concept of ‘heart’ and of how to bring it to bear in A.I.

Moreover, the drive toward putting heart into A.I. will be huge. It has the positive potential to make many human lives – even human life in general – much more rewarding, interesting, and happy. Because of this, it will also be an enormous financial boon to many parties.

In short, we will not be able to stop it.

Instead of trying to avoid it, we can better think about ways to manage it. What is the most ethical way? What is the safest way for us, organic mortals?

On the other hand, isn’t super-intelligence without heart the most dangerous of all?

Trustworthiness

Should we then completely rely on the goodwill of heartful A.I. itself, trusting that ‘the heart’ will make A.I. behave towards humans in ethical ways? Will its morality be always oriented toward human wellbeing? [see: “What Is Morality to A.I.?”]

That reliance is most probably too trustful. Even if A.I. may eventually become ‘enlightened’ enough to look upon humans with far more compassion than they show themselves, there is also a road toward that Xanadu. On that road, accidents may happen, with hugely disastrous consequences.

So, we should indeed defend ourselves, making explainability a necessary feature of A.I., at least at the highest level.

Accountability: explainability at a high level

We also ask humans to be accountable, even though in many cases they don’t know exactly why or how they do what they do. Experts especially are notorious for not knowing how they know what they know. This became painfully clear during attempts to develop so-called expert systems a few decades ago. Time and again, it stood in stark contrast to the experts’ idea of their own ‘very much explainable’ expertise.

Accountability is explainability at the level where it matters; no need for the full monty. We can have sensible discussions about where and how it really does matter. In my view, it matters much more in the case of A.I. than of human experts.

What about us, personally?

This also heightens the relevance of some age-old questions about ourselves, the species Homo sapiens sapiens, and our human ‘heart.’ What IS it in the first place – speaking of it, of course, in a metaphorical way? Is ‘heart’ separate from ‘mind’? Many philosophers have thought so, or at least have written that way.

Others see heart and mind as intrinsically interwoven. Within AURELIS, it is even a top priority to strive towards interwovenness, as much as possible. [see: “Rationality and Poetry”]

In my view, we can understand ourselves much better through the insight that rationality and poetry – ‘the heart’ – are necessary to each other.

That makes us more accountable.

The same should be true of A.I.
