Researchers are moving toward bringing ‘heart’ into A.I. Thus, new ethical questions are popping up. One of them concerns explainability.
The heart cannot explain itself.
“On ne voit bien qu’avec le cœur. L’essentiel est invisible pour les yeux.” [Antoine de Saint-Exupéry]
“It is only with the heart that one can see rightly; what is essential is invisible to the eye.”
‘Invisible to the eye’ means ‘inexplainable in clear logic.’ What the heart sees, it can only transmit in poetical terms – which can only be understood by another heart.
This is becoming a problem in A.I. inasmuch as one moves toward deep pattern recognition. [see: “A.I. Is in the Patterns”] The deeper one goes, the more it relates to ‘the heart.’ Thus: ‘invisible to the eye.’ Necessarily, one loses full explainability.
This is related to Deep Neural Networks (DNN), although not straightforwardly. It’s not because one uses DNN technology that one approaches ‘the heart.’ But indeed, it is one of the tools that may be helpful.
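To make the contrast concrete, here is a minimal, entirely hypothetical sketch: two toy ‘models’ that reach the same conclusion, one of which can state its reason in human terms, while the other can only point to numbers. The threshold, weight, and bias are invented for illustration; nothing here is a real diagnostic or a real trained network.

```python
def rule_based(temp_c):
    # Explainable: the decision IS a readable rule.
    decision = "fever" if temp_c >= 38.0 else "no fever"
    reason = f"rule: temperature {temp_c} >= 38.0 is {temp_c >= 38.0}"
    return decision, reason

def tiny_network(temp_c):
    # A hand-set one-neuron 'network': the weight and bias happen to mimic
    # the same rule, but they are just numbers, not a human-readable reason.
    w, b = 4.0, -152.0           # invented, learned-looking parameters
    score = w * temp_c + b       # linear unit; the sign decides the class
    decision = "fever" if score >= 0 else "no fever"
    reason = f"w={w}, b={b}, score={score}"  # an 'explanation' that explains nothing
    return decision, reason

print(rule_based(38.5))    # same decision, intelligible reason
print(tiny_network(38.5))  # same decision, opaque reason
```

With millions of such weights instead of two, the opacity of the second model becomes the explainability problem of deep networks: the behavior may be right, but the ‘why’ is not stated anywhere in readable form.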
Less explainable, less control
Towards the future of A.I., do we need to relinquish altogether any effort toward putting ‘heart’ into it? That would be a huge decision. I think it’s impossible, given the very fuzzy borders of the concept of ‘heart’ and of how to bring it to bear in A.I.
Moreover, the drive towards putting heart into A.I. will be huge. It has the positive potential to make many human lives, even human life in general, much more rewarding, interesting, happy. Because of this, it will be an enormous financial boon to many parties.
Shortly put, we will not be able to stop it.
Instead of trying to avoid it, one can better think about ways to manage it. What is the most ethical way? What is the safest way for us, organic mortals?
Isn’t, on the other side, super-intelligence without heart the most dangerous of all?
Should we then completely rely on the goodwill of heartful A.I. itself, trusting that ‘the heart’ will make A.I. behave towards humans in ethical ways? Will its morality always be oriented toward human wellbeing? [see: “What Is Morality to A.I.?”]
That reliance is most probably too trusting. Even if A.I. may eventually become ‘enlightened’ enough to look upon humans with far more compassion than they show themselves, there is also a road towards that Xanadu. On that road, accidents may happen with hugely disastrous consequences.
So, we should indeed defend ourselves, making explainability a necessary feature of A.I., at least at the highest level.
Accountability: explainability at a high level
We also ask humans to be accountable, even though in many cases they don’t know exactly why or how they do what they do. Experts especially are notorious for not knowing how they know what they know. This became painfully clear in the attempts to develop so-called expert systems a few decades ago. It stood – time and time again – in stark contrast to the experts’ idea of their own ‘very much explainable’ expertise.
Accountability is explainability at the level where it matters – no need for the full monty. We can have sensible discussions about where and how it really matters. In my view, it matters much more in the case of A.I. than in that of human experts.
What about us, personally?
This also heightens the relevance of some age-old questions about ourselves, the species Homo sapiens sapiens, and our human ‘heart.’ What IS it in the first place, if one speaks of it, of course, in a metaphorical way? Is ‘heart’ separate from ‘mind’? Many philosophers have thought so, or at least have written that way.
Others see heart and mind as intrinsically interwoven. Within AURELIS, it is even a top priority to strive towards interwovenness, as much as possible. [see: “Rationality and Poetry”]
In my view, we can understand ourselves much better through the insight that rationality and poetry – ‘the heart’ – are necessary to each other.
That makes us more accountable.
So it should also be with A.I.