Transparency in A.I.

November 28, 2020 · Artificial Intelligence

We should strive for the highest degree of transparency in A.I., but not to our own detriment.

Information transparency

In conceptual information processing systems (frequently called A.I.), transparency means showing all data (information, concepts) used in any kind of decision-making. In system-human interaction, the human may ask the system how some ‘decision’ has been taken, and the system provides a clear and complete answer.

That’s ideal when dealing with information.

I wouldn’t literally call it a ‘decision’ because it has little to nothing to do with how humans make decisions. A human is, in relevant ways, not a conceptual information processing system. In other words, we are organisms, not mechanisms. [see: “You Are an Organism, Not a Mechanism.“] Treat people as mechanisms, and they wither and die, at least mentally. There is a reason for the rising rates of burnout in recent years.

We are complex human beings. [see: “Complexity of Complexity“]

Knowledge transparency

‘Knowledge’ as in ‘intelligence.’

With this, we come closer to the real human being. System-wise, we are in the domain of Artificial Intelligence. In this, information is not only present in a semantic network. There is also much active integration of this information in ever more intractable ways.

Theoretically explainable, practically not.

Already at present, there are many applications (frequently called A.I.) that make use of practically unexplainable storage and management of data, using deep neural networks, for instance. These necessarily lack transparency. The upgrade to knowledge will make them even less transparent.
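As a rough illustration (with invented weights, not a real trained model), compare a toy neural network with a rule-based system. Even at this tiny scale, the network’s ‘reason’ for a decision is smeared across numeric weights, whereas the rule states its own why:

```python
import math

# Toy two-layer network. The weights below are illustrative, not learned.
# Inspecting them tells you *how* the score is computed, but not *why*
# the system favors one outcome over another.
W1 = [[0.8, -1.2], [-0.5, 1.1]]   # input layer -> hidden layer
W2 = [1.3, -0.9]                  # hidden layer -> output

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def network_decides(inputs):
    # The decision emerges from distributed numeric interactions.
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in W1]
    score = sigmoid(sum(w * h for w, h in zip(W2, hidden)))
    return score > 0.5

def rule_decides(inputs):
    # A transparent system: the reason IS the rule, readable as-is.
    # "Accept if the first feature exceeds the second."
    return inputs[0] > inputs[1]
```

Both functions can return the same answers, but only the second can give a complete account of itself; scaling the first to millions of weights makes its opacity practically absolute.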

Due to the immense advantages, this dangerous evolution will not be stopped. So, we had better learn how to deal with it properly.

Still transparent

We can try to keep systems transparent to us in the sense of accountability. We can ask an A.I. system why some decision has been made, as we can ask a human. As to the how, in the human case, not even the most advanced neurophysiologist can give more than a glimpse of a meaningful answer.

Still, in the human case, this is felt as being transparent. But we shouldn’t overestimate ourselves. Experts are notorious for believing they know why they made some decision, until they are asked to formalize it, for example, when building an expert system. The domain of expert systems all but collapsed under the infeasibility of proper knowledge acquisition from these experts. In other words: to a huge degree, they don’t know themselves.

Even so, as much as possible, we should not abandon transparency.

We should strive for transparency in the why.

In many applications, the why is crucially important. Yet data have no why. Information has no why. In such cases, we can talk of ‘why’ only metaphorically. As in “Why does this barometer show this reading?” It just does.

Dealing with human beings as human beings, in complexity, transparency is more important than ever. But this is transparency in the why, not the how.

If we forget this, we may be making systems that robotify their users.

By taking out complexity, we mold ourselves – and our A.I. systems of the future – into the image of an explainable but heartless mechanism. [see: “A.I. Explainability versus ‘the Heart’“]

That is far more dangerous than a temporary lack of knowledge transparency. Nevertheless, we should not abandon our striving toward transparency in the systems we conceive.

Running away from one challenge doesn’t mean that other challenges disappear.

Let us make the best of it, with as much insight and Compassion as possible.
