Robotizing Humans or Humanizing Robots

March 15, 2021 · Artificial Intelligence

The latter may be necessary to prevent the former. The power of A.I. can be used in both directions. Hopefully, the next A.I. breakthrough brings us more of the latter.

Challenging times.

We are living in an era of transition in many ways. One of them is the birth of a new kind of intelligence that will soon enough be on par with us, then transcend us many times over.

At the same time, there is generally little insight into what it means to be an organic, complex being in which mind and body are ultimately but two views upon the same whole. In this, our complexity differs from being merely complicated, machine-like. [see: “Complex is not Complicated”]

We are not like complicated robots.

Yet in practice, people are treated that way. This goes further than usually meets the eye.

For instance, in medicine (and psychology), people’s conditions are frequently medicalized into distinct diseases that exist as a matter of cultural agreement. This puts people into boxes that may be profoundly unhelpful. [see: “Why Medicalization Sucks“]

In HR, categorizing people goes on in a myriad of ways. The level of solid science in this is close to zero. [see: “A.I., HR, Danger Ahead”]

On an even deeper level, such categorizing leads to the overcategorization, and therefore robotization, of ourselves in a domain as vital as our own feelings. [see: “Feeling without Feelings”]

The world, a stage?

According to W. Shakespeare: “All the world’s a stage, and all the men and women merely players…”

A major reason can be found in an existential urge for control ― and the anxiety of losing it. Again and again, this drives people to construct a theatrical stage on which to play a clear role, calling this ‘reality.’ It is a constructed, merely complicated reality, not the more fundamental, complex one on which it is built and in which we are more than mere players.

Things ‘work’ in this constructed reality, but only at a huge and increasingly unsustainable toll on humanity. The consequences are visible as mounting burnout, social tensions, etc. Explanations at the stage level are rife, but they matter less than the stage itself ― which, logically, is not easily visible from that same stage.

A.I. in past and present follows this path.

In GOFAI (Good Old-Fashioned A.I.), this led to strictly conceptualized theories, with expert systems as the principal endeavor.

Today’s commercial A.I. applications are mainly based on supervised learning. Although connectionism has brought some complexity into the hidden layers, the goal is still categorization ― as is, basically, also the case in clustering and regression. That is perfectly fine in the physical, complicated world. In the organic, complex world, it carries the danger of robotizing human beings.
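
As a minimal sketch of this categorizing tendency (assuming scikit-learn; the features and labels below are purely hypothetical), a supervised classifier can only ever assign a new case to one of the boxes it was trained on:

```python
# Minimal sketch of supervised categorization (illustrative only).
# Assumes scikit-learn; the 'person' features and labels are invented.
from sklearn.linear_model import LogisticRegression

# Each row: hypothetical features describing a person (e.g., survey scores).
X = [
    [0.2, 0.9, 0.4],
    [0.8, 0.1, 0.7],
    [0.3, 0.8, 0.5],
    [0.9, 0.2, 0.6],
]
# Each label: a discrete box the person is assigned to.
y = ["type_A", "type_B", "type_A", "type_B"]

model = LogisticRegression().fit(X, y)

# A new person is forced into one of the predefined boxes,
# regardless of how much nuance the features leave out.
print(model.predict([[0.5, 0.5, 0.5]]))
```

However sophisticated the model, the output space remains a fixed set of categories ― which is exactly the point being made here.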

This is probably why such technologies also attract attention in humanistic domains. They soothe the anxiety brought on by the apprehension of chaos (entropy), always an enemy of life itself.

However, anxiety is never a good adviser. In this case, the real danger lies in the robotization of human beings through A.I. The above domains are a few examples in which this danger is clearly present.

A.I. in the near future?

Many are striving for ‘the next breakthrough in A.I.’ to bring us machines that can reason and plan, thus exhibiting more general intelligence and much broader applicability. Such breakthrough(s) will probably surmount the present situation of narrow classifiers, bringing huge possibilities and challenges. [see: “The Next Breakthrough in A.I.”]

Dangerous: yes! Hopeful: yes! [see: “The Journey Towards Compassionate AI.”] The hope is for more humane humans living together with a future A.I. that has the Compassionate purpose of taking care of us in a setting of respect, depth, openness, freedom, and trustworthiness.

It’s a sweet and possible dream.
