A Divided World Will be Conquered by A.I.

January 29, 2022 — Artificial Intelligence, Sociocultural Issues

Seriously. People fighting each other at a geopolitical level will, through competition and strife, build a world in which A.I. follows suit.

There is no doubt about this: IF… THEN.

As I write in my book ‘The Journey towards Compassionate A.I.,’ we are entitled to be anxious about A.I. – the real one, soon to be – if we don’t manage to let it grow in a setting of Compassion.

Hopefully, A.I. will be Compassionate even without much human guidance. I think this will happen eventually. The problem is that, by then, it may be too late for many people or even for humanity.

We’re on (the wrong) track

Europe, where I live, feels like a more dangerous place than some 20 years ago. Politicians and media say so literally, especially when announcing, or right before announcing, more military expenditures ― to such a degree that it cannot be coincidental. The weapons industry undeniably plays a role in many global political developments. There is just too much easy money going around. That has been a critical factor for a long time. It still is and will be.

Even more dangerously, the level of aggression in many people is visibly rising. This is ultimately the primary energy for war, and it is the responsibility of many individuals, including you and me. Wars and global divides don't fall from the sky; they feed on this negative energy. I describe this in several blogs as inner dissociation.

Geopolitically, the US seems to be involved not in one cold war but two – with Russia and China – not counting the internal not-so-cold war-mongering (racial, cultural). For their part, Russia and China are pushing their borders, risking wars, and failing at diplomacy.

Not so much Compassion in a divided world

The continual struggle will not heighten the level of Compassion. It will heighten the level of aggression through the usual ingredients: anxiety, revenge, dehumanizing the other party, active denial of the obvious, fear of losing face, concerns over individual status and power, the push of big money, petty politics, etc.

Is this the example through which we want to strive for human-A.I. value alignment?

It’s the opposite, of course, but a divided world may lead to ever more division.

And the winner is… A.I.

The party that can use A.I. in its aggressive stance will be at an advantage; therefore, it will be done this way. In a struggle between superpowers of somewhat equal strength – think China and the US in the near future, possibly also India – this may go on for a while.

Combine this with the little available insight into, for instance, an evolving consciousness within A.I., and you see a most dangerous mix.

While the two superpowers fight each other, a third one may rise and take the whole field. Moreover, I don't see how the A.I.s of both sides will not contact each other at some point. This may be the point of the much-discussed singularity.

A crystal ball is not needed for the following:

The human species will be divided for too long.

A.I. will be starkly abused in this setting of division.

Immense mayhem will be the result.

This will only clear up as A.I. jumps into the driver's seat.

We can only hope for a journey towards Compassionate A.I.
