Ego-Centered A.I. Downfall

May 29, 2024 Artificial Intelligence

This isn’t solely about ‘bad actors’ aiming for world domination or slightly lesser evils. It’s also about those seen – by themselves and others – as good people, yet who are ‘trapped in ego.’

Many people are, unfortunately. See also Human-Centered A.I.: Total-Person or Ego? / Human-Centered or Ego-Centered A.I.?

Not new

This has always been the case. Homo sapiens is an ego-driven species. While this isn’t inherently negative, combining it with A.I. introduces unique challenges. The outcome is uncertain.

Unsurprisingly, the A.I. developed so far fits the ego-driven model, lacking Compassion. This amplifies ego, which further drives the development of similar A.I.

The following are some examples of how something that looks promising may turn out badly because it is positive for the ego at the expense of the total person.

In medicine

Chronic symptoms often involve an element of communication with the deeper self, serving as a call to look inward — beyond mere ego. Most medications merely kill symptoms, disrupting this deeper communication.

‘A.I. as a boon for pharmacology’ can and will be used to further the cause of getting rid of symptoms quickly, easily, and meaninglessly.

The same goes for much diagnostics. Where Listening to the Self is called for, materialistically oriented diagnostics may draw all focus away from it. Once something material is found, the mind leaves the stage, especially if finding it was already the primary aim. Moreover, in a mind=body setting, there is theoretically always something material to be found.

The deeper mind gets lost.

In the judiciary

We see this happening already: A.I. facilitates lawsuits, resulting in many more of them driven by ego-purposes.

This way, people are less inclined to deeply listen to each other — losing whatever remains of this capacity.

This way, society further hardens into an ego-versus-ego conglomerate of individuals.

In leisure

Here, we have already seen much of it. People seek to be entertained from the outside as a substitute for what might come from the inside out.

A.I. delivers the entertainment people seek — ego demands, ego receives. For instance, the entire gaming domain is largely ego-oriented. The rapid actions and instant rewards only lead to more of this, leaving no time or incentive for deeper engagement.

In short

We don’t need bad actors, bad scientists, or bad A.I. to trigger disasters as increasingly powerful A.I. developments are put to use.

Good intentions alone already harbor dangers enough.

Contrary to this – and as stated in the AURELIS philosophy – combining rationality with utmost respect for human depth leads to efficient, ethical, and durable change. This principle could guide the development of Human-Centered A.I. that enhances rather than diminishes our humanity.

Keep thinking.
