How to Contain Non-Compassionate Super-A.I.

July 15, 2023

We want super(-intelligent) A.I. to remain under meaningful human control so that it cannot largely or fully destroy or subdue humanity (= existential dangers). Compassionate A.I. may not be with us for a while. Meanwhile, how can we contain super-A.I.?

Future existential danger is special in that one can only be wrong in one direction: we may proceed until it’s too late to be proven wrong. Besides, how many (millions of) unnecessary human deaths are too many? Meanwhile, A.I. will never stop becoming stronger, keeping everyone on edge forever.

The future is a long time.

Compassionately, we must be concerned for the whole stretch and the sentient beings that exist during that (infinite?) time.

Therefore, we need proper governance to contain A.I. forever. Even while developing Compassionate A.I. – our only hope in the long term – we need to ensure it remains Compassionate. Meanwhile, since things are evolving at record speed toward the challenging combination of complexity and autonomy, thinking this way may also be the most efficient way to avoid potentially existential dangers soon enough. Might these be related issues?

Please, not the simple stuff

‘Turning off the switch’ will not do, sorry ― neither will some set of rules to govern the evil robots. Regulations are a must, but they shouldn’t put us to sleep. Wrong-minded people – or super-A.I. itself, somehow – will turn the switch back on and circumvent the rules (or other simple measures) willingly or unwillingly.

More is needed. Meanwhile, experts agree that we are still far from realistically accomplishing the goal of A.I. existential security. OKAY, having this insight is already better than nothing, provided it doesn’t throw people into A.I. phobia.

We will not attain the necessary safety goal by trial and error, even with many parties trying different things. Besides, who does what, where, when, and how? And most importantly: why? Each person on the planet has different why’s. We are more diverse than generally thought.

For instance: “An autonomous weapon should never take the initiative to end the life of a human being.”

This may seem like a good regulation.

But then: what is ‘autonomous’? Does that also include partially autonomous? Does talking about autonomous weapons not already involve an initiative, even if partial, to take the risk of killing someone? Quite readily, the rule proves unattainable.

Two human enemies will each risk using autonomous weapons to subdue the other while discarding the rule under any pretense. Do they even see their enemy as human or ‘humane’? Compassionate A.I. in the military is no small feat. Still, we can strive for it as one element of containing A.I. More broadly, we should strive for Compassionate A.I. as soon as possible this way, mitigating the risk while developing exciting applications.

Even so, there remains a period in-between.

We can strive for profoundly Compassionate humans as soon as possible.

OKAY, one more good measure ― for some distant future.

Even so, we’ve come to the point where we can mitigate danger, but nothing is good enough if the danger becomes existential. If we ever need to take action, it may be too late to start thinking about how. In that case, no ‘solution’ we have seen comes anywhere near acceptable. Therefore, we need to broaden the search space. Note that I arrive at the following by exclusion of other options.

Out of the box

Different parties – such as nations – cannot rely on each other to avoid the threat of rogue A.I. getting out of bounds toward a global existential dystopia. They may ‘regulate’ by agreement but, understandably, just keep going independently.

That is a fact, and not acceptable in the case of non-Compassionate super-A.I. for several reasons. Again, we should prepare for the worst.

Eventually, I see only one durable solution: to put into place a global superstructure that gets the sole right to develop certain A.I. products — firstly, autonomous weaponry, since this probably poses the most significant existential threat. This superstructure becomes the relevant, nation-independent police force with the power to police the world on this issue. Of course, the superstructure is only allowed to use its weaponry as a deterrent against individual nations developing any. Even so, it remains to be seen how this can be made as secure as possible.

As many nations as possible should fund this superstructure ― no need for all. It may remain in place forever, keeping an eye on A.I. even after we enter the Compassionate A.I. era.

If this is what it takes to save humanity, then this is what we must do.

This sounds crazy. I wholeheartedly agree. But the situation we’re running into is still crazier.

Meanwhile, the striving for Compassionate A.I. lies open. In that endeavor, keeping super-A.I. under meaningful human control is accomplished through in-depth human-A.I. value alignment.

Regardless of anything, we should attain that goal as quickly as possible.

One take on it is Lisa.
