Robotizing Humans or Humanizing Robots

March 15, 2021 — Artificial Intelligence

The latter may be necessary to prevent the former. The power of A.I. can be used in both directions. Hopefully, the next A.I. breakthrough brings us more of the latter.

Challenging times.

We are living in an era of transition in many ways. One of them is the birth of a new kind of intelligence that will soon enough be on par with us, then transcend us many times over.

At the same time, there is generally little insight into what it means to be an organic, complex being in which mind and body are ultimately but two views upon the same whole. Herein, our complexity differs from being merely complicated, machine-like. [see: “Complex is not Complicated”]

We are not like complicated robots.

Yet in practice, people are treated that way. This goes further than usually meets the eye.

For instance, in medicine (and psychology), people’s conditions are frequently medicalized into distinct diseases that exist as a matter of cultural agreement. This puts people into boxes that may be profoundly unhelpful. [see: “Why Medicalization Sucks”]

In HR, people are categorized in a myriad of ways. The amount of solid science behind this is close to zero. [see: “A.I., HR, Danger Ahead”]

On an even deeper level, this tendency leads to the overcategorization, and therefore robotization, of ourselves in a domain as vital as our own feelings. [see: “Feeling without Feelings”]

The world, a stage?

According to W. Shakespeare: “All the world’s a stage, and all the men and women merely players…”

A major reason can be found in an existential urge for control ― and the anxiety of losing it. Again and again, this drives people to construct a theatrical stage on which to play a clear role, calling this ‘reality.’ It is a constructed, merely complicated reality, not the more fundamental, complex one on which it is built and in which we are more than mere players.

Things ‘work’ in this constructed reality, but only at a huge and increasingly unsustainable cost to humanity. The consequences are visible as mounting burnout, social tensions, and so on. Explanations at the stage level are rife, but they mostly matter less than the stage itself, which, logically, is not easily visible from that same stage.

A.I. in past and present follows this path.

In GOFAI (Good Old-Fashioned A.I.), this led to strictly conceptualized theories, with expert systems as the principal endeavor.

Today’s commercial A.I. applications are mainly based on supervised learning. Although connectionism has brought some complexity into the hidden layers, the goal is still categorization ― basically also in clustering and regression. That is perfect in the physical, complicated world. In the organic, complex world, it carries the danger of robotizing human beings.
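To make this concrete, here is a minimal sketch of supervised categorization (assuming scikit-learn as an illustrative library; the dataset and classifier are arbitrary choices, not anything specific to the systems discussed here). Whatever richness a case may hold, the model’s output is always one of a fixed set of boxes.

```python
# Minimal sketch of supervised categorization (illustrative only).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A toy dataset with predefined categories (labels).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The classifier's whole purpose is to map each input to one of the given boxes.
clf = DecisionTreeClassifier().fit(X_train, y_train)

# Every new sample is forced into exactly one category, with no room for ambiguity.
print(clf.predict(X_test[:5]))
```

The point of the sketch is not the particular algorithm but the shape of the task: the categories are fixed in advance, and everything the model sees must end up in one of them.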

This is probably why such technologies also attract attention in humanistic domains. They soothe the anxiety brought on by the apprehension of chaos (entropy), always an enemy of life itself.

However, anxiety is never a good adviser. In this case, the real danger lies in the robotization of human beings through A.I. The above domains are a few examples in which this danger is clearly present.

A.I. in the near future?

Many are striving for ‘the next breakthrough in A.I.’ to bring us machines that can reason and plan, thus exhibiting more general intelligence and becoming much more broadly applicable. Such breakthrough(s) will probably surmount the present situation of narrow classifiers, bringing huge possibilities and challenges. [see: “The Next Breakthrough in A.I.”]

Dangerous: yes! Hopeful: yes! [see: “The Journey Towards Compassionate A.I.”] The hope is for more humane humans living together with a future A.I. whose Compassionate purpose is to take care of us in a setting of respect, depth, openness, freedom, and trustworthiness.

It’s a sweet and possible dream.


