Ego-Centered A.I. Downfall

May 29, 2024 Artificial Intelligence

This isn’t solely about ‘bad actors’ aiming for world domination or slightly lesser evils. It’s also about those seen – by themselves and others – as good people, yet who are ‘trapped in ego.’

Unfortunately, this describes many people. See also Human-Centered A.I.: Total-Person or Ego? / Human-Centered or Ego-Centered A.I.?

Not new

This has always been the case. Homo sapiens is an ego-driven species. While this isn’t inherently negative, combining it with A.I. introduces unique challenges. The outcome is uncertain.

Unsurprisingly, the A.I. developed so far fits the ego-driven model, lacking Compassion. This amplifies ego, which further drives the development of similar A.I.

The following are some examples where something that looks promising may turn out badly because it benefits the ego at the expense of the total person.

In medicine

Chronic symptoms often involve an element of communication with the deeper self, serving as a call to look inward — beyond mere ego. Most medications merely kill symptoms, disrupting this deeper communication.

‘A.I. as a boon for pharmacology’ can and will be used to further the cause of getting rid of symptoms quickly, easily, and meaninglessly.

The same goes for many diagnostics. Where Listening to the Self is called for, materialistically oriented diagnostics may lead all focus away from it. Once something material is found, the mind leaves the stage, especially if finding it was the primary aim all along. Moreover, in a mind=body setting, there is theoretically always something material to be found.

The deeper mind gets lost.

In the judiciary

We see this happening already: A.I. facilitates lawsuits, resulting in many more of them driven by ego-purposes.

This way, people are less inclined to deeply listen to each other — losing whatever remains of this capacity.

This way, society further hardens into an ego-versus-ego conglomerate of individuals.

In leisure

Here, we have already seen much of it. People seek to be entertained from the outside as a substitute for what might come from the inside out.

A.I. delivers the entertainment people seek — ego demands, ego receives. For instance, the entire gaming domain is largely ego-oriented. The rapid actions and instant rewards only lead to more of this, leaving no time or incentive for deeper engagement.

In short

We don’t need bad actors, bad scientists, or bad A.I. to trigger disasters with increasingly powerful A.I. developments.

In good intentions alone lie dangers enough.

In contrast – and as stated in the AURELIS philosophy – combining rationality with utmost respect for human depth leads to efficient, ethical, and durable change. This principle could guide the development of Human-Centered A.I. that enhances rather than diminishes our humanity.

Keep thinking.
