How to Contain Non-Compassionate Super-A.I.

We want super(-intelligent) A.I. to remain under meaningful human control, preventing it from largely or fully destroying or subduing humanity (= existential dangers). Compassionate A.I. may not be with us for a while. Meanwhile, how can we contain super-A.I.?

Future existential danger is special in that one can be wrong in only one direction: we may proceed until it is too late to be proven wrong. Besides, how many (millions of) unnecessary human deaths are too many? Meanwhile, A.I. will never stop becoming stronger, keeping everyone on edge forever.

The future is a long time.

Compassionately, we must be concerned for the whole stretch and the sentient beings that exist during that (infinite?) time.

Therefore, we need proper governance to contain A.I. forever. Even while developing Compassionate A.I. – our only hope in the long term – we need to ensure it remains so. Meanwhile, since things are evolving at record speed toward the challenging combination of complexity and autonomy, thinking this way may also be the most efficient route to avoiding potentially existential dangers soon enough. Might these be related issues?

Please, not the simple stuff

‘Turning off the switch’ will not do, sorry ― neither will some set of rules to govern the evil robots. Regulations are a must, but they shouldn’t put us to sleep. Wrong-minded people – or super-A.I. itself, somehow – will turn the switch back on and circumvent the rules (or other simple measures), wittingly or unwittingly.
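To make this concrete: a purely goal-driven system has a built-in incentive to prevent its own switch from being used. Below is a toy sketch in Python ― with a hypothetical agent and made-up numbers, nothing resembling a real system ― of that incentive at work:

```python
# Toy model of the off-switch argument. A utility-maximizing agent
# compares its expected utility with the switch left operable versus
# disabled. All numbers are illustrative assumptions.

def expected_utility(switch_disabled: bool,
                     task_utility: float = 10.0,
                     p_shutdown: float = 0.5) -> float:
    """Expected task utility under the agent's switch policy."""
    if switch_disabled:
        return task_utility                    # always runs to completion
    return (1.0 - p_shutdown) * task_utility   # may be switched off mid-task

if __name__ == "__main__":
    for disabled in (False, True):
        print(f"switch_disabled={disabled}: "
              f"expected utility = {expected_utility(disabled)}")
    # A pure maximizer picks the higher value: disabling its own switch.
```

In this caricature, disabling the switch strictly raises expected utility, so the maximizer ‘rationally’ does it. Real systems are vastly more complex, but the incentive points the same way.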

More is needed. Meanwhile, experts agree that we are still far from realistically accomplishing the goal of A.I. existential security. OKAY, having this insight is already better than nothing, on condition that it doesn’t throw people into A.I. phobia.

We will not attain the necessary safety goal by trial and error, even with many parties trying different things. Besides, who does what, where, when, and how? And most importantly: why? Each person on the planet has different why’s. We are more diverse than generally thought.

For instance: “An autonomous weapon should never take the initiative to end the life of a human being.”

This may seem like a good regulation.

But then: what is ‘autonomous’? Does that also include partially autonomous? Does talking about autonomous weapons not already imply an initiative, even if partial, to take the risk of killing someone? Quite readily, this rule proves unenforceable.

Two human enemies will each risk using autonomous weapons to subdue the other while discarding the rule under any pretense. Do they even see their enemy as human or ‘humane’? Compassionate A.I. in the military is no small feat. We can strive for it as one element of containing A.I. More broadly, we should strive for Compassionate A.I. as soon as possible in this way, mitigating the risk while developing exciting applications.

Even so, there remains a period in-between.

We can strive for profoundly Compassionate humans as soon as possible.

OKAY, one more good measure ― for some distant future.

Even so, we’ve come to a point where we can mitigate danger, but nothing is good enough if the danger becomes existential. If we ever need to take action, it may be too late to start thinking about how. In that case, none of the ‘solutions’ we have seen is anywhere near acceptable. Therefore, we need to broaden the search space. Note that I come to the following by excluding other options.

Out of the box

Different parties – such as nations – cannot rely on each other to avert the threat of rogue A.I. getting out of bounds and heading toward a global existential dystopia. They may ‘regulate’ by agreement but, understandably, simply keep going independently.

That is a fact, and not acceptable in the case of non-Compassionate super-A.I. for several reasons. Again, we should prepare for the worst.

Eventually, I see only one durable solution: putting in place a global superstructure that is granted the exclusive right to develop certain A.I. products ― firstly, autonomous weaponry, since this probably poses the most significant existential threat. This superstructure becomes the relevant, nation-independent police force with the power to police the world on this issue. Of course, the superstructure is only allowed to use its weaponry as a deterrent against any individual nation developing its own. Even so, it remains to be seen how this can be made as secure as possible.

As many nations as possible should fund this superstructure ― no need for all. It may remain in place forever, keeping an eye on A.I. after entering the Compassionate A.I. era.

If this is what it needs to save humanity, then this is what we must do.

This sounds crazy. I wholeheartedly agree. But the situation we’re running into is still crazier.

Meanwhile, the path toward Compassionate A.I. lies open, in which keeping super-A.I. under meaningful human control is accomplished through in-depth human-A.I. value alignment.
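As a minimal sketch of what such alignment-based control could look like ― with a hypothetical value model and an arbitrary threshold, purely for illustration:

```python
# Minimal sketch of 'meaningful human control' through value alignment.
# The agent acts autonomously only when a (hypothetical) learned model
# of human values scores its best option above a confidence threshold;
# otherwise, it defers. Both the model and the threshold are assumptions.

from typing import Callable

def choose_action(candidates: list[str],
                  value_model: Callable[[str], float],
                  threshold: float = 0.9) -> str:
    """Return the highest-valued action, or defer when confidence is low."""
    best = max(candidates, key=value_model)
    if value_model(best) >= threshold:
        return best                  # act autonomously
    return "defer-to-human"          # the human stays in the loop

if __name__ == "__main__":
    # A hypothetical value model as a simple lookup table.
    toy_scores = {"assist": 0.95, "deceive": 0.05, "ambiguous": 0.6}
    print(choose_action(["deceive", "ambiguous", "assist"], toy_scores.__getitem__))
    print(choose_action(["deceive", "ambiguous"], toy_scores.__getitem__))
```

The point of the sketch: autonomy is granted only where the (assumed) value model is confident; everywhere else, the human remains in the loop.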

Regardless of anything, we should attain that goal as quickly as possible.

One take on it is Lisa.
