Human-Centered or Ego-Centered A.I.?

November 2, 2023 · Artificial Intelligence

‘Humanism’ is supposed to be human-centered. ‘Human-A.I. Value Alignment’ is supposed to be human-centered. Or is it ego-centered?

Especially concerning (non-)Compassionate A.I., this is the crucial question that will make or break us. Unfortunately, the distinction is intrinsically unclear to most people.

Mere-ego versus total self

See also The Big Mistake.

This is not about ‘I’ versus other people. It’s about what happens inside ‘I’ and, of course, the consequences to oneself and others.

The consequences are vast and terrible. Examples abound in healthcare (the whole domain of psycho-somatics), in sociopolitical issues, and beyond. What are we doing?!

‘Ego versus total self’ PLUS the power of A.I.

This is not about ‘bad actors’ who willingly abuse A.I. to harm people. It’s also not about autonomous weapons being used by two sides — each against the ‘bad ones.’

Ego-centered A.I. can be developed by people with the best intentions, who may even call it ‘human-centered A.I.’ Nevertheless, it is prone to diminish the wholeness of the human being, heightening dissociation in depth and scaling. Therefore, the distinction between ego and total self should be made explicit, again and again. If that is not done, rest assured, we are deluded by ego ― in the first place, our own.

Big problem: humanity still very much grapples with this, in theory and in practice.

Millennia of philosophy haven’t brought profoundly satisfying insights. Also, many religions have worked with many closed doors ― inciting people to battle over mere doors, goodness.

Science, at least, makes progress by investigating how the brain and the mind are related. However, this is so counter-cultural that, for the time being, these insights remain unwittingly siloed, without much societal impact.

As a result, practice lags.

We’re in the Middle Ages concerning the issue ‘ego versus total self.’ Meanwhile, waves of technology have increasingly engendered situations for which humanity hasn’t been ready.

But the times, they are a-changing ― unfortunately, toward becoming even more dangerous.

Ego-centered A.I. is a challenge at large

Specific problems stemming from A.I. lie in depth and scaling: the influence upon one and upon many. After all, A.I. is a powerful tool ― actually, a never-ending stream of new tools and combinations. Thus, many human challenges that have been containable until now – more or less, with relatively minor damage – can grow into quite uncontainable issues.

A pressing question remains whether we should focus mainly on existential risks from A.I. (from bad actors to killer robots) or ego-centered uses of A.I.

The answer is both.

The urgent dangers are manifold in both directions.

So are the positive possibilities: Better A.I. for Better Humans.
