Transparency in A.I.

November 28, 2020 — Artificial Intelligence

We should strive for the highest degree of transparency in A.I., but not to our own detriment.

Information transparency

In conceptual information processing systems (frequently called A.I.), transparency is the disclosure of all data (information, concepts) used in decision making of any kind. In system-human interaction, the human may ask the system how some ‘decision’ has been taken, and the system provides a clear and complete answer.

That’s ideal when dealing with information.

I wouldn’t call it literally a ‘decision’ because it has little to nothing to do with how humans make decisions. A human is, in relevant ways, not a conceptual information processing system. In other words, we are organisms, not mechanisms. [see: “You Are an Organism, Not a Mechanism.“] Treat people as mechanisms, and they wither and die, at least mentally. There’s a reason burnout numbers have been rising over recent years.

We are complex human beings. [see: “Complexity of Complexity“]

Knowledge transparency

‘Knowledge’ as in ‘intelligence.’

With this, we come closer to the real human being. System-wise, we are in the domain of Artificial Intelligence. In this, information is not only present in a semantic network. There is also much active integration of this information in ever more intractable ways.

Theoretically explainable, practically not.

Already at present, there are many applications (frequently called A.I.) that make use of practically unexplainable storage and management of data, using deep neural networks, for instance. These necessarily lack transparency. The upgrade to knowledge will make them even less transparent.
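To make the contrast concrete, here is a minimal sketch (with entirely hypothetical numbers and a made-up loan scenario) of the difference between a transparent rule and an opaque learned mapping. The hand-written rule carries its own ‘why’; the tiny network below it produces an answer from weights that, even when fully inspected, correspond to no human-readable reason.

```python
def transparent_decision(income, debt):
    # The 'why' is readable in the code itself:
    # approve whenever income exceeds twice the debt.
    return income > 2 * debt

# Hypothetical weights, as if produced by training a tiny two-layer
# network. Every number is visible, yet none of them *is* a reason.
W1 = [[0.8, -1.2], [0.3, 0.9]]   # input -> hidden weights
W2 = [1.1, -0.7]                 # hidden -> output weights

def opaque_decision(income, debt):
    # ReLU hidden layer followed by a weighted sum: fully inspectable,
    # practically unexplainable in human terms.
    hidden = [max(0.0, income * w[0] + debt * w[1]) for w in W1]
    score = sum(h * w for h, w in zip(hidden, W2))
    return score > 0.0

print(transparent_decision(3, 1))  # True
print(opaque_decision(3, 1))       # True, but for no statable reason
```

Both functions can agree on an outcome; only the first can answer a ‘why’ question. Scale the second up to millions of weights and the opacity the text describes becomes total.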

Due to the immense advantages, this dangerous evolution will not be stopped. So we had better learn how to deal with it properly.

Still transparent

We can try to keep systems transparent to us in the sense of accountability. We can ask an A.I. system why some decisions have been made, as we can do with humans. As to the how, in the human case, not even the most advanced neurophysiologist can give more than a glimpse of a meaningful answer.

Still, in the human case, this is felt as being transparent. But we shouldn’t overestimate ourselves. Experts are notorious for believing they know why they made some decision — until they are asked to formalize it, for example, when building an expert system. The field of expert systems all but collapsed under the infeasibility of proper knowledge acquisition from these experts. In other words: to a huge degree, they don’t know themselves.

Even so, as much as possible, we should not abandon transparency.

We should strive for transparency in the why.

In many applications, the why is crucially important. Yet data have no why. Information has no why. In such cases, we can talk of ‘why’ only metaphorically. As in “Why does this barometer show this reading?” It just does.

Dealing with human beings as human beings, in complexity, transparency is more important than ever. But this is transparency in the why, not the how.

If we forget this, we may be making systems that robotify their users.

Taking out complexity, we mold ourselves – and our A.I. systems of the future – to the image of an explainable but heartless mechanism. [see: “A.I. Explainability versus ‘the Heart’“]

That is far more dangerous than a temporary lack of knowledge transparency. Nevertheless, we should not abandon our striving toward transparency in the systems we conceive.

Running away from one challenge doesn’t mean that other challenges disappear.

Let us make the best of it, with as much insight and Compassion as possible.


