Transparency in A.I.

November 28, 2020 – Artificial Intelligence

We should strive for the highest degree of transparency in A.I., but not to the detriment of ourselves.

Information transparency

In conceptual information processing systems (frequently called A.I.), transparency means showing all data (information, concepts) used in any kind of decision-making. In system-human interaction, the human may ask the system how some ‘decision’ has been taken, and the system provides a clear and complete answer.

That’s ideal when dealing with information.

I wouldn’t call it literally a ‘decision’ because it has little to nothing to do with how humans make decisions. A human is, in relevant ways, not a conceptual information processing system. In other words, we are organisms, not mechanisms. [see: “You Are an Organism, Not a Mechanism.”] Treat people as mechanisms, and they wither and die, at least mentally. There’s a reason for the rising number of burnout cases over the last few years.

We are complex human beings. [see: “Complexity of Complexity“]

Knowledge transparency

‘Knowledge’ as in ‘intelligence.’

With this, we come closer to the real human being. System-wise, we are in the domain of Artificial Intelligence. In this, information is not only present in a semantic network. There is also much active integration of this information in ever more intractable ways.

Theoretically explainable, practically not.

Even now, many applications (frequently called A.I.) make use of practically unexplainable storage and management of data, using deep neural networks, for instance. These necessarily lack transparency. The upgrade to knowledge will make them even less transparent.

Due to the immense advantages, this dangerous evolution will not be stopped. So, we had better learn how to deal with it properly.

Still transparent

We can try to keep systems transparent to us in the sense of accountability. We can ask an A.I. system why some decisions have been made, just as we can with humans. As to how, in the latter case, not even the most advanced neurophysiologist can give more than a glimpse of a meaningful answer.

Still, in the human case, this is felt as being transparent. But we shouldn’t overestimate ourselves. Experts are notorious for believing they know why they made some decision until they are asked to formalize it, for example, when building an expert system. The domain of expert systems all but collapsed under the infeasibility of proper knowledge acquisition from these experts. In other words: to a huge degree, they don’t know themselves.

Even so, as much as possible, we should not abandon transparency.

We should strive for transparency in the why.

In many applications, the why is crucially important. Yet data have no why. Information has no why. In such cases, we can talk of ‘why’ only metaphorically. As in “Why does this barometer show this reading?” It just does.

Dealing with human beings as human beings, in complexity, transparency is more important than ever. But this is transparency in the why, not the how.

If we forget this, we may be making systems that robotify their users.

Taking out complexity, we mold ourselves – and our A.I. systems of the future – to the image of an explainable, but heartless mechanism. [see: “A.I. Explainability versus ‘the Heart’”]

That is far more dangerous than a temporary lack of knowledge transparency. Nevertheless, we should not abandon our striving towards transparency in the systems we conceive.

Running away from one challenge doesn’t mean that other challenges disappear.

Let us make the best of it, with as much insight and Compassion as possible.
