A.I. and Constructionism

February 14, 2021 · Artificial Intelligence

Many people, and Western culture (if not most cultures) in general, live mainly in a ‘constructed reality.’ Combined with the power of A.I., this is excruciatingly dangerous.

Constructionism

[see: “Constructionism“]

In short, humans mainly live in a ‘constructed reality’ full of group-based assumptions. On the one hand, this is an asset: it makes life simpler. On the other hand, it poses many growing problems because of increasing societal and technological complexity.

Then comes A.I. Great?

Algorithmic machine learning, (un)supervised learning in neural networks

Present-day A.I. consists mainly of a set of categorizing technologies. These are fundamentally distinct from how our human wetware (our brain) performs intelligence.
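To make ‘categorizing’ concrete, here is a minimal sketch (purely illustrative Python; the labels and numbers are hypothetical, loosely echoing the HR example further below): a toy nearest-prototype classifier that can only sort people into categories that were constructed beforehand.

    # Purely illustrative sketch of a 'categorizing technology.'
    # It can only assign inputs to pre-constructed categories;
    # anything outside those constructs is still forced into the nearest one.

    import math

    # Pre-constructed categories, each represented by a prototype feature vector.
    # (Hypothetical labels and features, for illustration only.)
    PROTOTYPES = {
        "suitable candidate":   [0.9, 0.8, 0.7],
        "unsuitable candidate": [0.2, 0.3, 0.1],
    }

    def distance(a, b):
        """Euclidean distance between two feature vectors."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def categorize(features):
        """Return the pre-constructed category whose prototype lies closest."""
        return min(PROTOTYPES, key=lambda label: distance(features, PROTOTYPES[label]))

    # A person who fits neither construct well is still pressed into one of them.
    print(categorize([0.5, 0.5, 0.4]))   # -> 'unsuitable candidate'

Whoever fits neither construct well is still pressed into the nearest one. That, in a nutshell, is the problem.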

Thus, they would better be called ‘advanced information processing technologies’ instead of Artificial Intelligence. That would be a less commercially attractive term, I agree, but more respectful towards real intelligence. Still, OK, let’s keep ‘A.I.’ for now.

Being a set of categorizing technologies, this A.I. is well suited to the aforementioned constructed reality. It fits our self-image. However,

it is unfit for reality itself.

Thus, it may exponentially enlarge our many growing problems. Unfortunately, it already does at present. For instance, it is increasingly deployed in the realm of HR [see: “A.I., HR, Danger Ahead“], where it works only as well as the constructs it is based upon.

Not surprisingly, as with psychotherapeutic models and techniques, this means that it doesn’t work towards its well-intended aim. What may make it seem to work is related to placebo. In private conversations and at seminars I have attended, scientists in the field acknowledge that.

Straightforwardly.

Meanwhile, the show goes on.

I guess that’s the self-enhancing power of any constructed reality and of constructionism itself. It lives through being-certain; therefore, it keeps its being-certain in order to live.

Do you recognize in this the principle of placebo?

One giant problem: eventually, no placebo is sustainable. Even more broadly: un-truth may grow and grow, but in the end, truth prevails.

Well, maybe, in the case of A.I., truth will only prevail after the demise of humanity itself. As said, this is excruciatingly dangerous.

A.I. without Compassion is not doomed. We are.

Compassionate A.I., indeed. [see: “Compassionate A.I.“] Compassion is concerned with reality. It looks through all untoward categories and constructs. It is concerned with real human beings.

Moreover, it’s even the commercially right choice. [see: “Why to Invest in Compassionate A.I.“] In understanding and working with humans, categorizing A.I. will reach a ceiling, as Platonic A.I. (good old-fashioned conceptual A.I.) did in the past. This ceiling will be higher, as we are already witnessing; thus, things will also be potentially more dangerous. In my view, the next breakthrough technology will be in the realm of Compassion. Categorizing A.I. may be part of this: it can be embedded and thereby become a constructive element in the broader whole.

But in the end, hopefully, Compassionate A.I. will save the day, the century, and the species.
