Transparency in A.I.

November 28, 2020 · Artificial Intelligence

We should strive for the highest degree of transparency in A.I., but not to our own detriment.

Information transparency

In conceptual information processing systems (frequently called A.I.), transparency means showing all data (information, concepts) used in decision-making of any kind. In system-human interaction, the human may ask the system how some ‘decision’ was reached, and the system provides a clear and complete answer.

That’s ideal when dealing with information.
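To make this concrete, here is a minimal, hypothetical sketch of such a transparent procedure: every datum and rule it uses is recorded and can be reported back on request. The rules and threshold values are illustrative assumptions, not taken from any real system.

```python
# Sketch of information transparency: a rule-based 'decision' that can
# report every datum and rule it used. The rule and the 0.5 threshold
# are illustrative assumptions only.

def decide_loan(income, debt):
    """Approve a loan if debt stays under half of income, with a trace."""
    trace = []
    ratio = debt / income
    trace.append(f"debt/income ratio = {ratio:.2f}")
    if ratio < 0.5:
        trace.append("rule applied: ratio < 0.5 -> approve")
        return "approve", trace
    trace.append("rule applied: ratio >= 0.5 -> reject")
    return "reject", trace

decision, explanation = decide_loan(income=40000, debt=15000)
print(decision)            # the 'decision'
for step in explanation:   # a clear and complete answer to 'how?'
    print(" -", step)
```

Asked how the ‘decision’ was taken, such a system can simply replay its trace. That is the ideal the paragraph above describes.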

I wouldn’t literally call it a ‘decision’ because it has little to nothing to do with how humans make decisions. A human is, in relevant ways, not a conceptual information processing system. In other words, we are organisms, not mechanisms. [see: “You Are an Organism, Not a Mechanism.“] Treat people as mechanisms, and they wither and die, at least mentally. There’s a reason burnout numbers have been rising over the last years.

We are complex human beings. [see: “Complexity of Complexity“]

Knowledge transparency

‘Knowledge’ as in ‘intelligence.’

With this, we come closer to the real human being. System-wise, we are in the domain of Artificial Intelligence. Here, information is not only present in a semantic network; it is also actively integrated in ever more intractable ways.

Theoretically explainable, practically not.

Already at present, there are many applications (frequently called A.I.) that make use of practically unexplainable storage and management of data, using deep neural networks, for instance. These necessarily lack transparency. The upgrade to knowledge will make them even less transparent.
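As a hypothetical contrast to the rule-based case: in a neural network, even when every weight is fully visible, the numbers do not read as reasons. The tiny ‘network’ below uses arbitrary illustrative weights, not values from any real trained system.

```python
import math

# Illustrative sketch: full visibility without transparency.
# Every number is inspectable, yet none of them *means* anything
# on its own. Weights and inputs are arbitrary assumptions.

WEIGHTS = [0.73, -1.21, 0.05]
BIAS = 0.4

def predict(inputs):
    """One 'neuron': weighted sum plus bias, squashed by a sigmoid."""
    z = sum(w * x for w, x in zip(WEIGHTS, inputs)) + BIAS
    return 1 / (1 + math.exp(-z))  # a score, not a reason

score = predict([1.0, 0.5, 2.0])
print(f"score = {score:.3f}")  # the 'explanation' is just arithmetic
```

In a real deep network, this opacity is multiplied across millions of such weights, which is why these systems are called practically unexplainable.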

Due to the immense advantages, this dangerous evolution will not be stopped. So we had better learn how to deal with it properly.

Still transparent

We can try to keep systems transparent to us in the sense of accountability. We can ask an A.I. system why some decisions have been made, as we can with humans. As to the how, in the human case, not even the most advanced neurophysiologist can give more than a glimpse of a meaningful answer.

Still, in the human case, this is felt as being transparent. But we shouldn’t overestimate ourselves. Experts are notorious for believing they know why they made a decision, until they are asked to formalize it, for example, when building an expert system. The field of expert systems all but collapsed under the infeasibility of proper knowledge acquisition from these experts. In other words: to a huge degree, they don’t know themselves.

Even so, as much as possible, we should not abandon transparency.

We should strive for transparency in the why.

In many applications, the why is crucially important. Yet data have no why. Information has no why. In such cases, we can talk of ‘why’ only metaphorically. As in “Why does this barometer show this reading?” It just does.

Dealing with human beings as human beings, in complexity, transparency is more important than ever. But this is transparency in the why, not the how.

If we forget this, we may be making systems that robotify their users.

Taking out complexity, we mold ourselves – and our A.I. systems of the future – to the image of an explainable but heartless mechanism. [see: “A.I. Explainability versus ‘the Heart’“]

That is far more dangerous than a temporary lack of knowledge transparency. Nevertheless, we should not abandon our striving toward transparency in the systems we conceive.

Running away from one challenge doesn’t mean that other challenges disappear.

Let us make the best of it, with as much insight and Compassion as possible.
