Robotizing Humans or Humanizing Robots

March 15, 2021 Artificial Intelligence

The latter may be necessary to prevent the former. The power of A.I. can be used in both directions. Hopefully, the next A.I. breakthrough brings us more of the latter.

Challenging times.

We are living in an era of transition in many ways. One of them is the birth of a new kind of intelligence that will soon enough be on par with us, then surpass us many times over.

At the same time, there is generally little insight into what it means to be an organic, complex being in which mind and body are ultimately but two views of the same whole. In this, our complexity differs from being merely complicated, machine-like. [see: “Complex is not Complicated”]

We are not like complicated robots.

Yet in practice, people are treated that way. This goes further than meets the eye.

For instance, in medicine (and psychology), people’s conditions are frequently medicalized into distinct diseases that exist as a matter of cultural agreement. This puts people into boxes that may be profoundly unhelpful. [see: “Why Medicalization Sucks“]

In HR, categorizing people goes on in a myriad of ways. The level of solid science in this is close to zero. [see: “A.I., HR, Danger Ahead”]

On an even deeper level, this categorizing leads to overcategorization, and therefore robotization, of ourselves in a domain as vital as our own feelings. [see: “Feeling without Feelings”]

The world, a stage?

According to W. Shakespeare: “All the world’s a stage, and all the men and women merely players…”

A major reason can be found in an existential urge for control, and the anxiety of losing it. Again and again, this drives people to construct a theatrical stage on which to play a clear role, calling this ‘reality.’ It is a constructed, merely complicated reality, not the more fundamental, complex one on which it is built and in which we are more than mere players.

Things ‘work’ in this constructed reality, but only at a huge and increasingly unsustainable cost to humanity. The consequences are visible as mounting burnout, social tensions, and more. Explanations at the stage level are rife, but they matter less than the stage level itself, which, logically, is not easily visible from that same stage.

A.I., past and present, follows this path.

In GOFAI (Good Old-Fashioned A.I.), this led to strictly conceptualized theories, with expert systems as the principal endeavor.

Today’s commercial A.I. applications are mainly based on supervised learning. Although connectionism has brought some complexity into hidden layers, the goal is still categorization; basically, this also holds for clustering and regression. That is perfect in the physical, complicated world. In the organic, complex world, it carries the danger of robotizing human beings.
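As a purely illustrative aside (the features, labels, and examples below are hypothetical, not taken from any real HR or medical system), a minimal Python sketch shows what ‘categorization’ means in practice: whatever richness goes in, a supervised classifier ends by putting the person into one box.

```python
# Purely illustrative sketch with hypothetical features and labels.
# A supervised classifier, however sophisticated internally, ends by assigning one box.
from typing import List, Tuple

Person = Tuple[float, float, float]  # (overtime hours, sleep hours, mood score 0-10)

# Hypothetical training examples: each rich human situation reduced to a label.
training_data: List[Tuple[Person, str]] = [
    ((10.0, 7.5, 8.0), "resilient"),
    ((30.0, 5.0, 3.0), "burnout-risk"),
    ((5.0, 8.0, 9.0), "resilient"),
    ((25.0, 5.5, 4.0), "burnout-risk"),
]

def classify(person: Person) -> str:
    """Nearest-neighbour categorization: the whole person collapses into a single label."""
    def squared_distance(a: Person, b: Person) -> float:
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(
        ((squared_distance(person, features), label) for features, label in training_data),
        key=lambda pair: pair[0],
    )
    return label

# However complex someone's inner life, the output is one discrete category.
print(classify((20.0, 6.0, 5.0)))  # prints "burnout-risk"
```

Clustering and regression change the form of the output (a group, a number), but in the organic, complex domain the effect is similar: a reduced stand-in for the person.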

This is probably why such technologies also attract attention in humanistic domains. They soothe the anxiety brought on by the apprehension of chaos (entropy), always an enemy of life itself.

However, anxiety is never a good adviser. In this case, the real danger lies in the robotization of human beings through A.I. The above domains are a few examples in which this danger is clearly present.

A.I. in the near future?

Many are striving for ‘the next breakthrough in A.I.’ to bring us machines that can reason and plan, thus exhibiting more general intelligence and much broader applicability. Such breakthrough(s) will probably take us beyond the present situation of narrow classifiers, bringing huge possibilities and challenges. [see: “The Next Breakthrough in A.I.”]

Dangerous: yes! Hopeful: yes! [see: “The Journey Towards Compassionate AI”] The hope is for more humane humans living together with future A.I. that has the Compassionate purpose of taking care of us in a setting of respect, depth, openness, freedom, and trustworthiness.

It’s a sweet and possible dream.
