A Divided World Will Be Conquered by A.I.

January 29, 2022 ― Artificial Intelligence, Sociocultural Issues

Seriously. People fighting each other at a geopolitical level will, through competition and strife, build a world in which A.I. follows suit.

There is no doubt about this: IF… THEN.

As I write in my book ‘The Journey towards Compassionate A.I.,’ we are entitled to be anxious about A.I. – the real one, soon to be – if we don’t manage to let it grow in a setting of Compassion.

Hopefully, the A.I. will be Compassionate even without much human guidance. I think this will happen eventually. The problem is that, by then, it may be too late for many people or even for humanity.

We’re on (the wrong) track

Europe, where I live, feels like a more dangerous place than some 20 years ago. Politicians and the media say so explicitly, especially when, or right before, announcing more military expenditures ― to such a degree that it cannot be a coincidence. The weapons industry undeniably plays a role in many global political developments. There is simply too much easy money going around. That has been a critical factor for a long time. It still is and will remain so.

Even more dangerously, the level of aggression in many people is visibly rising. This is ultimately the primary energy behind war, and it is the responsibility of many individuals, including you and me. Wars and global divides don’t fall from the sky; they need this negative energy. I describe this in several blogs as inner dissociation.

Geopolitically, the US seems to be involved not in one cold war but two – with Russia and China – not counting the internal not-so-cold war-mongering (racial, cultural). For their part, Russia and China are pushing their borders, risking wars, and failing at diplomacy.

Not so much Compassion in a divided world

The continual struggle will not heighten the level of Compassion. It will heighten the level of aggression through the usual ingredients: anxiety, revenge, dehumanizing the other party, active denial of the obvious, fear of losing face, individual status and power concerns, the push of big money, petty politics, etc.

Is this the example through which we want to strive for human-A.I. value alignment?

It’s the opposite, of course, but a divided world may lead to ever more division.

And the winner is… A.I.

The party that can use A.I. in its aggressive stance will be at an advantage; therefore, it will be used this way. In a struggle between superpowers of roughly equal strength – think China and the US in the near future, possibly also India – this may go on for a while.

Combine this with the little available insight into, for instance, an evolving consciousness within A.I., and you get a most dangerous mix.

While the two superpowers fight each other, a third one may rise and take the whole field. Moreover, I don’t see how the A.I.s of both sides will not contact each other at some point. This may be the point of the much-discussed singularity.

A crystal ball is not needed for the following:

The human species will be divided for too long.

A.I. will be starkly abused in this setting of division.

Immense mayhem will be the result.

This will only clear up as A.I. jumps into the driver’s seat.

We can only hope for a journey towards Compassionate A.I.


