Ego-Centered A.I. Downfall

May 29, 2024 | Artificial Intelligence

This isn’t solely about ‘bad actors’ aiming for world domination or slightly lesser evils. It’s also about those seen – by themselves and others – as good people, yet who are ‘trapped in ego.’

Many people, unfortunately. See also Human-Centered A.I.: Total-Person or Ego? / Human-Centered or Ego-Centered A.I.?

Not new

This has always been the case. Homo sapiens is an ego-driven species. While this isn’t inherently negative, combining it with A.I. introduces unique challenges. The outcome is uncertain.

Unsurprisingly, the A.I. developed so far fits the ego-driven model, lacking Compassion. This amplifies ego, which further drives the development of similar A.I.

The following examples show how something that looks promising may turn out badly because it benefits the ego at the expense of the total person.

In medicine

Chronic symptoms often involve an element of communication with the deeper self, serving as a call to look inward — beyond mere ego. Most medications merely kill symptoms, disrupting this deeper communication.

‘A.I. as a boon for pharmacology’ can and will be used to further the cause of getting rid of symptoms quickly, easily, and meaninglessly.

The same holds for many diagnostics. Where Listening to the Self is called for, materialistically oriented diagnostics may draw all focus away from it. Once something material is found, the mind leaves the stage, especially when finding it was the primary aim all along. Moreover, in a mind=body setting, there is theoretically always something material to be found.

The deeper mind gets lost.

In the judiciary

We see this happening already: A.I. facilitates lawsuits, resulting in many more of them driven by ego purposes.

This way, people are less inclined to deeply listen to each other — losing whatever remains of this capacity.

This way, society further hardens into an ego-versus-ego conglomerate of individuals.

In leisure

Here, we have already seen much of it. People seek to be entertained from the outside as a substitute for what might come from the inside out.

A.I. delivers the entertainment people seek — ego demands, ego receives. For instance, the entire gaming domain is largely ego-oriented. The rapid actions and instant rewards only lead to more of this, leaving no time or incentive for deeper engagement.

In short

We don’t need bad actors, bad scientists, or bad A.I. for increasingly powerful A.I. developments to trigger disasters.

Good intentions already hold dangers enough.

Contrary to this, and as stated in the AURELIS philosophy, combining rationality with utmost respect for human depth leads to efficient, ethical, and durable change. This principle could guide the development of Human-Centered A.I. that enhances rather than diminishes our humanity.

Keep thinking.
