A.I. and Constructionism

February 14, 2021 – Artificial Intelligence

Many people, and Western culture (if not most cultures) in general, mainly live in ‘constructed reality.’ In combination with the power of A.I., this is excruciatingly dangerous.

Constructionism

[see: “Constructionism“]

In short, humans mainly live in a ‘constructed reality’ full of group-based assumptions. On the one side, this is an asset. It makes life simpler. On the other side, it poses many growing problems because of increasing societal and technological complexity.

Then comes A.I. Great?

Algorithmic machine learning, (un)supervised learning in neural networks

Present-day A.I. mainly comprises a set of categorizing technologies. These are fundamentally distinct from how our human wetware (the brain) performs intelligence.

Thus, they are better called ‘advanced information processing technologies’ than Artificial Intelligence. I agree that would be a less commercially attractive term, but it would be more respectful towards real intelligence. Still, let’s keep ‘A.I.’ for now.
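To make ‘categorizing’ concrete, here is a minimal sketch of a supervised classifier, assuming scikit-learn and purely illustrative features and labels (nothing here comes from a real system). It can only ever sort new inputs into the pre-defined categories it was trained on.

```python
# A minimal sketch of a "categorizing technology": a supervised classifier
# that learns to sort new inputs into pre-defined categories (constructs).
# Assumes scikit-learn; the features and labels are illustrative only.

from sklearn.tree import DecisionTreeClassifier

# Toy training data: each row describes a case with two constructed
# features, e.g. [years_of_experience, test_score]. The labels are the
# pre-defined categories the system is forced to choose between.
X_train = [[1, 60], [2, 65], [8, 85], [10, 90]]
y_train = ["reject", "hire-worthy", "hire-worthy", "hire-worthy"]

model = DecisionTreeClassifier().fit(X_train, y_train)

# The model can only answer within the categories it was given;
# whatever of reality does not fit the constructs simply disappears.
print(model.predict([[5, 75]]))  # -> "reject" or "hire-worthy", nothing else
```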

Being a set of categorizing technologies, this A.I. is well suited to the aforementioned constructed reality. It fits our self-image. However,

it is unfit for reality itself.

Thus, it may exponentially enlarge our many growing problems. Unfortunately, at present, it already does. For instance, it is increasingly deployed in the realm of HR [see: “A.I., HR, Danger Ahead“], where it works only as well as the constructs it’s based upon.

Not surprisingly, as in the realm of psychotherapeutic models and techniques, this means that it doesn’t work towards its well-intended aim. What may make it look like it’s working is related to placebo. In private conversations and at seminars I’ve attended, scientists in the field acknowledge this.

Straightforwardly.

Meanwhile, the show goes on.

I guess that’s the self-enhancing power of any constructed reality and of constructionism itself. It lives through being-certain; therefore, it keeps its being-certain in order to live.

Do you recognize in this the principle of placebo?

One giant problem: Eventually, no placebo is sustainable. Even more broadly: Un-truth may grow and grow, but in the end, truth prevails.

Well, maybe in the case of A.I., truth prevails after the demise of humanity itself. As said, this is excruciatingly dangerous.

A.I. without Compassion is not doomed. We are.

Compassionate A.I., indeed. [see: “Compassionate A.I.“] Compassion is concerned with reality. It looks through all untoward categories and constructs. It is concerned with real human beings.

Moreover, it’s even commercially the right choice. [see: “Why to Invest in Compassionate A.I.“] In understanding and working with humans, categorizing A.I. will reach a ceiling, as Platonic A.I. (good old-fashioned conceptual A.I.) did in the past. This ceiling will be higher, as we are already witnessing; thus, things will also be potentially more dangerous. In my view, the next breakthrough technology will be in the realm of Compassion. Categorizing A.I. may be part of this. It can be embedded and, this way, become a constructive element in the broader whole.

But in the end, hopefully, Compassionate A.I. will save the day, the century, and the species.
