Human-Centered A.I.: Total-Person or Ego?

March 22, 2021 | Artificial Intelligence

This makes for a huge difference, especially since the future of humanity is at stake.

With good intentions, one may pave the road to disaster.

Everybody, including me, should take this to heart. ‘Doing good’ may take much effort in understanding what one is tinkering with. This is relevant in any domain. [see: “‘Doing Good’: Not as Easy as It Seems”]

This includes A.I., probably even most of all the domain of A.I. Within it, perhaps nothing matters more than the difference between total-person and ego.

You really should read this before proceeding. [see: “The Story of Ego”]

Needless to say, in the world of A.I.

… OK, I’ll say it anyway. In this world, few people are oriented towards deep insights about the difference between total-person and ego. Many are quite one-sidedly STEM-oriented. [see: “Culture over STEM”] Understandable as this is, the domain of A.I. should not be the exclusive domain of engineers. Even more, the next breakthrough may come from a very different direction. [see: “The Next Breakthrough in A.I.”]

On top of this, present-day psychological science may not be good enough ― in my view, it isn’t. It’s a domain in crisis, as I know from many scientific publications, seminars, and personal conversations, also at a high level. Many of today’s insights may look very different within a decade. In my view, much of this relates to the above difference.

So, people may talk a lot, and next year, hopefully, they come together and talk again about new scientific and other developments.

But A.I. doesn’t wait for that.

Given its magnifying effect, the consequences of A.I. may be extraordinarily huge even before the knowledgeable get to know better. [see: “Robotizing Humans or Humanizing Robots”]

I believe that engineers, having a different mindset, are often not well aware of the degree to which psychological insights are shifting and may shift further. After a while, due to the inertia of applications, A.I. technology may even lag behind evolving psychological insights, widening the divide between research and practice ― not a new phenomenon.

Consequences

This may lead to applications that treat humans in the mere-ego sense. In HR, for example, it’s already happening. [see: “A.I., HR, Danger Ahead”] Human-centered becomes ego-centered, with growing inner dissociation for many. [see: “Inner Dissociation is NEVER OK!”] (Un)fortunately, many of these applications are not commercially successful because they don’t fill a deeper need or, after a while, turn out not to deliver the expected results in the real world.

The most significant danger lies in the long term, when human-A.I. value alignment becomes increasingly pertinent. Which human values are we then talking about? Broadly: those of the ego, or those of the total person?

In short, at present, we absolutely need to get to know ourselves much better. As you may know, I have written extensively about these issues in my book. [see: “The Journey Towards Compassionate A.I.”]

My view of the future in this

You may already know: Compassionate A.I. may help us become more Compassionate human beings. The types of applications, as well as their inner workings, are crucial to this,

especially since the future of humanity is at stake.


