Explainability in A.I., Boon or Bust?

March 15, 2021 | Artificial Intelligence

Explainability seems like the safe option. With A.I. of growing complexity, it may be quite the reverse. Much of the reason can be found inside ourselves.

What is ‘explainability’ in A.I.?

It’s not only about an A.I. system being able to do something but also about it explaining how and why this has been done. The explanation needs to be put in a human-understandable format.

In A.I., many see this as highly important for safety reasons. In explaining itself, the machine can also reveal possible biases or plain errors in its reasoning/inferencing. Moreover, in increasingly autonomous systems, explainability should ensure human–A.I. value alignment. Note that both reasons overlap significantly: negative biases are (supposed to be) deviations from human values.

Generally, we see in present-day A.I. systems the following tendency: As performance increases, explainability tends to decrease.
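This contrast can be made concrete with a toy sketch. The model below, its features, weights, and threshold are all hypothetical, chosen only for illustration: a simple linear scorer is explainable by design, because each feature’s contribution to the decision can be read off directly as weight times value. A deep network with millions of entangled weights offers no such readable breakdown, which is the tendency described above: as performance grows, this kind of direct explanation tends to disappear.

```python
# A minimal sketch of 'explainability by design', using a hypothetical
# loan-approval scorer. Features, weights, and threshold are made up.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def predict_with_explanation(features: dict) -> tuple[bool, dict]:
    """Return the decision plus a per-feature breakdown of the score.

    The breakdown *is* the explanation: each entry says how much that
    feature pushed the score up or down.
    """
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions

approved, why = predict_with_explanation(
    {"income": 3.0, "debt": 1.0, "years_employed": 2.0}
)
# score = 0.5*3.0 - 0.8*1.0 + 0.3*2.0 = 1.3 >= 1.0, so approved
print(approved)  # True
for name, value in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {value:+.2f}")
```

The point is not the model but the property: the “why” of each decision is a direct, human-readable sum of parts, something a high-performing black box cannot offer without extra machinery.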

Explainability in human beings

People generally believe they know why they are doing what they are doing. However, study after study shows this to be the case far less than expected.

This is a far-reaching and challenging topic. Humans abhor not being in control of themselves. Most frequently, this means ‘conscious control.’ Yet every single motivation – each reason for doing anything – originates from non-conscious, subconceptual processing, that is, without conscious control. [see: “Why Motivation Doesn’t Work“] In consciousness, we see a) the ensuing act (by ourselves) as well as b) the purported conscious reason why we perform this act. Each time, we see b) causing a) because the non-conscious serves it this way on our conscious plate. Meanwhile, the cause of both comes from down under.

This is, of course, part of the issue of free will. Does it exist? [see: “In Defense of Free Will”] It does, but not in the way we generally think.

How?

The former concerns the human why.

About the human how, at the physiological level, we are very much in the dark about what happens beneath our skulls. Only with recent scientific progress is some light beginning to appear.

At the conceptual level – sorry for the disappointment – we also have a much more sophisticated self-image than research supports. For instance, asking experts to explain their expertise in conceptual format doesn’t work well. The grand endeavor of ‘knowledge extraction’ for building expert systems (the A.I. of the previous century) turned out to be a huge bottleneck and a foremost reason for the whole enterprise’s failure, instigating an A.I. winter.

Still, human explainability is tightly linked to human free will.

Without free will, there is no original explanation, just pure action within a view of the human being as a complicated yet non-complex kind of machine.

Without explainability, there is no free will because then we don’t know why we are doing anything, nor why we would do anything else.

Human explainability = human free will.

There is little problem with non-complex A.I., where explainability can even build users’ trust. But do you still want your sophisticated robot to demonstrate explainability? The issue will become relevant soon enough. [see: “A.I. Explainability versus ‘the Heart’“]

Take note that if the robot can explain his behavior to you, he can also explain it to himself, reason with it, and think about a future in which he might like to plan something else.

In short, he would have volition and consciousness.

Robot vagaries

This may immensely heighten the flexibility of the sophisticated robot. Yet we should consider whether we want that, and under which circumstances, before we grant him the gift of explainability.

What will almost surely happen: A.I. gets explainability. A.I. becomes increasingly sophisticated and complex. With some more tweaks, A.I. (including robots) gets consciousness ‘for free.’ Will it be, in due time, a higher level of consciousness than our own?

It’s also complex because we are so much in the dark about our own free will, volition, consciousness, etc. To most people, these concepts seem self-explanatory. The more you delve into them, the more it all gets really, really complex.

In my view, the only humane way, also concerning A.I., is Compassion.

[see: “The Journey Towards Compassionate A.I.”]

Do I see that happen? Well, it’s Lisa. [see: “Lisa“]
