Human-Centered or Ego-Centered A.I.?

November 2, 2023 Artificial Intelligence

‘Humanism’ is supposed to be human-centered. ‘Human-A.I. Value Alignment’ is supposed to be human-centered. Or is it ego-centered?

Especially concerning (non-)Compassionate A.I., this is the crucial question that will make or break us. Unfortunately, the distinction is intrinsically unclear to most people.

Mere-ego versus total self

See also The Big Mistake.

This is not about ‘I’ versus other people. It’s about what happens inside ‘I’ and, of course, the consequences to oneself and others.

The consequences are vast and terrible. Examples abound in healthcare (the whole domain of psychosomatics), in sociopolitical issues, and elsewhere. What are we doing?!

‘Ego versus total self’ PLUS the power of A.I.

This is not about ‘bad actors’ who willingly abuse A.I. to harm people. It’s also not about autonomous weapons being used by two sides — each against the ‘bad ones.’

Ego-centered A.I. can be developed by people with the best intentions, even calling it ‘human-centered A.I.’ Nevertheless, it is prone to diminish the wholeness of the human being — heightening dissociation in depth and scaling. Therefore, the distinction between ego and total self should be made explicitly, again and again. If that is not done, rest assured, we’re deluded by ego ― in the first place, our own.

Big problem: humanity still very much grapples with this, in theory and in practice.

Millennia of philosophy haven’t brought profoundly satisfying insights. Also, many religions have worked with many closed doors ― inciting people to battle over mere doors, goodness.

Science, at least, makes progress by investigating how the brain and the mind are related. However, this is so counter-cultural that, for the time being, these insights remain unwittingly siloed, without much societal impact.

As a result, practice lags.

We’re in the Middle Ages concerning the issue ‘ego versus total self.’ Meanwhile, waves of technology have increasingly engendered situations for which humanity hasn’t been ready.

But the times, they are a-changing ― unfortunately, toward becoming even more dangerous.

Ego-centered A.I. is a challenge at large.

Specific problems stemming from A.I. lie in depth and scaling: the influence upon one and upon many. After all, A.I. is a powerful tool ― actually, a never-ending stream of new tools and combinations. Thus, many human challenges that have been containable until now – more or less, with relatively minor damage – can grow into quite uncontainable issues.

A pressing question remains whether we should focus mainly on existential risks from A.I. (from bad actors to killer robots) or on ego-centered uses of A.I.

The answer is both.

The urgent dangers are manifold in both directions.

So are the positive possibilities: Better A.I. for Better Humans.
