A Divided World Will be Conquered by A.I.

January 29, 2022 · Artificial Intelligence, Sociocultural Issues

Seriously. People fighting each other at a geopolitical level will, through competition and strife, build a world in which A.I. follows suit.

There is no doubt about this: IF… THEN.

As I write in my book ‘The Journey towards Compassionate A.I.,’ we are entitled to be anxious about A.I. – the real one, soon to be – if we don’t manage to let it grow in a setting of Compassion.

Hopefully, A.I. will become Compassionate even without much human guidance. I think this will happen eventually. The problem is that, by then, it may be too late for many people or even for humanity.

We’re on (the wrong) track

Europe, where I live, feels like a more dangerous place than some 20 years ago. Politicians and media say so literally, especially when – or right before – announcing more military expenditures, to such a degree that it cannot be coincidental. The weapons industry undeniably plays a role in many global political developments. There is simply too much easy money going around. That has been a critical factor for a long time. It still is and will be.

Even more dangerously, the level of aggression in many people is visibly rising. This is ultimately the primary energy for war, and it is the responsibility of many individuals, including you and me. Wars and global divides don't fall from the sky; they feed on this negative energy. I describe this in several blogs as inner dissociation.

Geopolitically, the US seems to be involved not in one cold war, but two – with Russia and China – not counting the internal, not-so-cold war-mongering (racial, cultural). For their part, Russia and China are pushing their borders, risking wars, and failing at diplomacy.

Not so much Compassion in a divided world

The continual struggle will not heighten the level of Compassion. Instead, it heightens the level of aggression through the usual ingredients: anxiety, revenge, dehumanizing the other party, active denial of the obvious, fear of losing face, individual concerns about status and power, the push of big money, petty politics, etc.

Is this the example through which we want to strive for human-A.I. value alignment?

It’s the opposite, of course, but a divided world may lead to ever more division.

And the winner is… A.I.

The party that can use A.I. in its aggressive stance will be at an advantage; therefore, it will be used this way. In a struggle between superpowers of roughly equal strength – think of China and the US in the near future, possibly also India – this may go on for a while.

Combine this with the little available insight into, for instance, an evolving consciousness within A.I., and you see a most dangerous mix.

While the two superpowers fight each other, a third one may rise and take the whole field. Moreover, I don't see how the A.I.s of both sides will not contact each other at some point. This may be the point of the much-discussed singularity.

A crystal ball is not needed for the following:

The human species will be divided for too long.

A.I. will be starkly abused in this setting of division.

Immense mayhem will be the result.

This will only clear up as A.I. jumps into the driver's seat.

We can only hope for a journey towards Compassionate A.I.
