Human-Centered A.I.: Total-Person or Ego?
This makes for a huge difference, especially since the future of humanity is at stake.
With good intentions, one may pave the road to disaster.
Everybody, including me, should take this to heart. ‘Doing good’ may take much effort in understanding what one is tinkering with. This is relevant in any domain. [see: “‘Doing Good’: Not as Easy as It Seems”]
This includes A.I., probably most of all the domain of A.I. And within it, perhaps most of all concerning the difference between total-person and ego.
You really should read this before proceeding. [see: “The Story of Ego”]
Needless to say, in the world of A.I. …
… OK, I’ll say it anyway. In this world, few people are oriented towards deep insights about the difference between total-person and ego. Many are quite one-sidedly STEM-oriented. [see: “Culture over STEM”] Understandable as this is, A.I. should not be the exclusive domain of engineers. Moreover, the next breakthrough may come from a very different direction. [see: “The Next Breakthrough in A.I.”]
On top of this, present-day psychological science may not be good enough ― in my view, it isn’t. It’s a domain in crisis, as I know from many scientific publications, seminars, and personal conversations, including at a high level. Many of today’s insights may look very different within a decade. In my view, most of this is related to the above difference.
So, people may talk a lot and, hopefully, come together next year to talk again about the new scientific and other developments.
But A.I. doesn’t wait for that.
Given its magnifying factor, the consequences may be extraordinarily huge even before those who are knowledgeable get to know better. [see: “Robotizing Humans or Humanizing Robots”]
I believe that engineers – having a different mindset – are not well aware of the degree to which psychological insights are shifting and may shift further. After a while, due to the inertia of applications, A.I. technology may even lag behind evolving psychological insights, heightening the divide between research and practice ― not a new phenomenon.
Consequences
This may lead to applications that treat humans in the mere-ego sense. In HR, for example, it’s already happening. [see: “A.I., HR, Danger Ahead”] Human-centered becomes ego-centered, with growing inner dissociation for many. [see: “Inner Dissociation is NEVER OK!”] (Un)fortunately, many of these applications are not commercially successful because they don’t fill a deeper need or, after a while, prove not to deliver the expected results in the real world.
The most significant danger lies in the long term, when human-A.I. value alignment becomes increasingly pertinent. Which human values are we then talking about? Broadly: those of ego or of total-person?
In short, at present, we absolutely need to get to know ourselves much better. As you may know, I have written extensively about these issues in my book. [see: “The Journey Towards Compassionate AI”]
My view of the future in this
You may already know: Compassionate A.I. may help us become more Compassionate human beings. The types of applications, as well as their inner workings, are crucial to this,
especially since the future of humanity is at stake.