Human-Centered or Ego-Centered A.I.?

November 2, 2023 · Artificial Intelligence

‘Humanism’ is supposed to be human-centered. ‘Human-A.I. Value Alignment’ is supposed to be human-centered. Or is it ego-centered?

Especially concerning (non-)Compassionate A.I., this is the crucial question that will make or break us. Unfortunately, this distinction is intrinsically unclear to most people.

Mere-ego versus total self

See also The Big Mistake.

This is not about ‘I’ versus other people. It’s about what happens inside ‘I’ and, of course, the consequences to oneself and others.

The consequences are vast and terrible. Examples abound in healthcare (the whole domain of psychosomatics), in sociopolitical issues, and beyond. What are we doing?!

‘Ego versus total self’ PLUS the power of A.I.

This is not about ‘bad actors’ who willingly abuse A.I. to harm people. It’s also not about autonomous weapons being used by two sides — each against the ‘bad ones.’

Ego-centered A.I. can be developed by people with the best intentions, even calling it ‘human-centered A.I.’ Nevertheless, it is prone to diminish the wholeness of the human being — heightening dissociation in depth and scaling. Therefore, the distinction between ego and total self should be made explicitly, again and again. If that is not done, rest assured, we’re deluded by ego ― in the first place, our own.

Big problem: humanity still very much grapples with this, in theory and in practice.

Millennia of philosophy haven’t brought profoundly satisfying insights. Also, many religions have worked with many closed doors ― inciting people to battle for mere doors. Goodness!

Science makes progress, at least, by investigating how the brain and the mind are related. However, this is so counter-cultural that, for the time being, these insights remain unwittingly siloed, without much societal impact.

As a result, practice lags.

We’re in the Middle Ages concerning the issue ‘ego versus total self.’ Meanwhile, waves of technology have increasingly engendered situations for which humanity hasn’t been ready.

But the times, they are a-changing ― unfortunately, toward becoming even more dangerous.

Ego-centered A.I. is challenging at large.

Specific problems stemming from A.I. lie in depth and scaling: the influence upon one and many. After all, A.I. is a powerful tool ― actually, a never-ending stream of new tools and combinations. Thus, many human challenges that have been containable until now – more or less, with relatively minor damage – can grow into quite uncontainable issues.

A pressing question remains whether we should focus mainly on existential risks from A.I. (from bad actors to killer robots) or ego-centered uses of A.I.

The answer is both.

The urgent dangers are manifold in both directions.

The positive possibilities also: Better A.I. for Better Humans.


