Explainability in A.I., Boon or Bust?

March 15, 2021 | Artificial Intelligence

Explainability seems like the safe option. With A.I. of growing complexity, it may be quite the reverse. Much of the reason can be found inside ourselves.

What is ‘explainability’ in A.I.?

It’s not only about an A.I. system being able to do something, but also about its being able to explain how and why it has done so. The explanation needs to be put in a human-understandable format.

In A.I., many see this as highly important for safety reasons. In explaining itself, the machine can also expose possible biases or plain errors in its reasoning/inferencing. Moreover, in increasingly autonomous systems, explainability should help ensure human-A.I. value alignment. Note that both reasons significantly overlap: negative biases are (supposed to be) divergences from human values.
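
To make this a bit more concrete: here is a minimal sketch (in Python, using scikit-learn) of what such an explanation can look like in a very simple system. The toy loan data and feature names are my own illustration, not taken from any real system. The point is only that the model’s decision rules can be printed in a humanly readable form and then inspected for errors or biases.

```python
# A minimal sketch, assuming scikit-learn is available. The toy
# loan-approval data and feature names below are purely illustrative,
# not from any real system.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: each row is [income, age]; labels: 1 = approve, 0 = reject.
X = [[20, 25], [45, 30], [50, 55], [80, 40], [30, 60], [90, 28]]
y = [0, 1, 1, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The 'explanation': an if-then rule set a human can read and inspect
# for plain errors or unwanted biases (e.g., does the model lean on
# age where it should lean on income?).
print(export_text(model, feature_names=["income", "age"]))
```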

Generally, we see the following tendency in present-day A.I. systems: as performance increases, explainability tends to decrease.
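
A rough illustration of this tendency, under the same assumptions as above (Python with scikit-learn, on a bundled example dataset chosen purely for convenience): a shallow, readable model typically scores lower than an opaque ensemble whose combined ‘reasoning’ no longer fits in any human head. The exact numbers vary from task to task; only the pattern matters.

```python
# A hedged sketch, assuming scikit-learn. The bundled breast-cancer
# dataset is used only as a convenient example; exact scores will vary.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A depth-2 tree: a handful of rules that fit on one screen.
simple = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr, y_tr)

# A 300-tree forest: usually more accurate, but its combined 'reasoning'
# is no longer humanly readable.
blackbox = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("depth-2 tree accuracy :", simple.score(X_te, y_te))
print("random forest accuracy:", blackbox.score(X_te, y_te))
```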

Explainability in human beings

People generally believe they know why they are doing what they are doing. However, study after study shows this to be the case far less than expected.

This is a far-reaching and challenging topic. Humans abhor not being in control of themselves – ‘control’ here most frequently meaning conscious control. Yet every single motivation – each reason for doing anything – originates from non-conscious, subconceptual processing, that is, without conscious control. [see: “Why Motivation Doesn’t Work“] In consciousness, we see a) the ensuing act (by ourselves) as well as b) the purported conscious reason why we perform this act. Each time, we see b) causing a) because the non-conscious serves it this way on our conscious plate. Meanwhile, the cause of both comes from down under.

This is, of course, part of the issue of free will. Does it exist? [see: “In Defense of Free Will”] It does, but not in the way we generally think.

How?

The foregoing concerns the human why.

As to the human how: at the physiological level, we are very much in the dark about what happens beneath our skulls. Only with recent scientific progress is some light beginning to appear.

At the conceptual level – sorry for the disappointment – we also have a much more flattering self-image than research supports. For instance, asking experts to explain their expertise in conceptual format doesn’t work well. The grand endeavor of ‘knowledge extraction’ toward building expert systems (the A.I. of the previous century) turned out to be a huge bottleneck and a foremost reason for the whole enterprise’s failure, helping to instigate an A.I. winter.

Still, human explainability is tightly linked to human free will.

Without free will, there is no original explanation – just pure action, within a view of the human being as a complicated yet non-complex kind of machine.

Without explainability, there is no free will because then we don’t know why we are doing anything, nor why we would do anything else.

Human explainability = human free will.

There is little problem with non-complex A.I., for which you can even build users’ trust through explainability. But do you still want your sophisticated robot to demonstrate explainability? The issue will become relevant soon enough. [see: “A.I. Explainability versus ‘the Heart’“]

Take note: if the robot can explain his behavior to you, he can also explain it to himself, reason about it, and think about a future in which he might want to plan something else.

In short, he would have volition and consciousness.

Robot vagaries

This may immensely heighten the flexibility of the sophisticated robot. Yet, we should think about whether we want that and under which circumstances before we lend him the gift of explainability.

What will almost surely happen: A.I. gets explainability. A.I. gets increasingly sophisticated and complex. Then, with some more tweaks, A.I. (including robots) gets consciousness ‘for free.’ Will it, in due time, be a higher level of consciousness than our own?

It’s also complex because we are so much in the dark about our own free will, volition, consciousness, etc. To most people, these concepts seem self-explanatory. The more you delve into them, the more complex it all gets.

In my view, the only humane way, also concerning A.I., is Compassion.

[see: “The Journey Towards Compassionate A.I.”]

Do I see that happen? Well, it’s Lisa. [see: “Lisa“]
