{"id":13010,"date":"2023-07-15T16:14:00","date_gmt":"2023-07-15T16:14:00","guid":{"rendered":"https:\/\/aurelis.org\/blog\/?p=13010"},"modified":"2023-08-02T10:30:41","modified_gmt":"2023-08-02T10:30:41","slug":"how-to-contain-non-compassionate-super-a-i","status":"publish","type":"post","link":"https:\/\/aurelis.org\/blog\/artifical-intelligence\/how-to-contain-non-compassionate-super-a-i","title":{"rendered":"How to Contain Non-Compassionate Super-A.I."},"content":{"rendered":"\n<h3>We want super(-intelligent) A.I. to remain under meaningful human control, preventing it from largely or fully destroying or subduing humanity (= existential dangers). Compassionate A.I. may not be with us for a while. Meanwhile, how can we contain super-A.I.?<\/h3>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>Future existential danger is special in that one can only be wrong in one direction, proceeding until it\u2019s too late to be proven wrong. Besides, how many (millions of) unnecessary human deaths are too many? Meanwhile, A.I. will never stop becoming stronger, keeping all on edge forever.<\/p><\/blockquote>\n\n\n\n<p><strong>The future is a long time.<\/strong><\/p>\n\n\n\n<p>Compassionately, we must be concerned for the whole stretch and the sentient beings that exist during that (infinite?) time.<\/p>\n\n\n\n<p>Therefore, we need proper governance to contain A.I. forever. Even while developing <a href=\"https:\/\/aurelis.org\/blog?p=10305\">Compassionate A.I.<\/a> \u2013 our only hope in the long term \u2013 we need to <a href=\"https:\/\/aurelis.org\/blog?p=11749\">ensure it remains so<\/a>. Meanwhile, since things are evolving at record speed toward the challenging combination of complexity and autonomy, thinking this way may also be the most efficient route to avoiding potentially existential dangers soon enough. 
May these be related issues?<\/p>\n\n\n\n<p><strong>Please, not the simple stuff<\/strong><\/p>\n\n\n\n<p>\u2018Turning off the switch\u2019 will not do, sorry \u2015 neither will some set of rules to govern the evil robots. Regulations are a must, but they shouldn\u2019t put us to sleep. Wrong-minded people \u2013 or super-A.I. itself, somehow \u2013 will turn the switch back on and circumvent the rules (or other simple measures) willingly or unwillingly.<\/p>\n\n\n\n<p>More is needed. Meanwhile, experts agree that we are still far from realistically accomplishing the goal of A.I. existential security. OKAY, having this insight is already better than nothing, on condition that it doesn\u2019t throw people into <a href=\"https:\/\/aurelis.org\/blog?p=11775\">A.I. phobia<\/a>.<\/p>\n\n\n\n<p>We will not attain the necessary safety goal by trial and error, even with many parties trying different things. Besides, who does what, where, when, and how? And most importantly: why? Each person on the planet has different why\u2019s. We are more diverse than generally thought.<\/p>\n\n\n\n<p><strong>For instance: \u201cAn autonomous weapon should never take the initiative to end the life of a human being.\u201d<\/strong><\/p>\n\n\n\n<p>This may seem like a good regulation.<\/p>\n\n\n\n<p>But then: what is \u2018autonomous\u2019? Does that also mean <em>partially <\/em>autonomous? Does talking about <em>autonomous weapons<\/em> not already include an initiative, even if partially, to take the risk of killing someone? Quite readily, such a rule proves unenforceable.<\/p>\n\n\n\n<p>Two human enemies will each risk using autonomous weapons to subdue the other while discarding the rule under any pretense. Do they even see their enemy as human or \u2018humane\u2019? <a href=\"https:\/\/aurelis.org\/blog?p=12144\">Compassionate A.I. in the military<\/a> is no easy feat. We can strive for it as an element to contain A.I. More broadly, we should strive for Compassionate A.I. 
as soon as possible \u2015 this way, mitigating the risk while developing exciting applications.<\/p>\n\n\n\n<p><strong>Even so, there remains a period in-between.<\/strong><\/p>\n\n\n\n<p>We can strive for profoundly Compassionate humans as soon as possible.<\/p>\n\n\n\n<p><strong>OKAY, one more good measure \u2015 for some distant future.<\/strong><\/p>\n\n\n\n<p>Even so, we&#8217;ve come to this point where we can mitigate danger, but with nothing good enough should the danger become existential. If we ever need to take action, it may be too late to start thinking about how. In that case, no \u2018solution\u2019 we have seen comes close to being acceptable. Therefore, we need to broaden the search space. Note that I come to the following by exclusion of other options.<\/p>\n\n\n\n<p><strong>Out of the box<\/strong><\/p>\n\n\n\n<p>Different parties \u2013 such as nations \u2013 cannot rely on each other to avoid the threat of rogue A.I. getting out of bounds toward a global existential dystopia. They may \u2018regulate\u2019 by agreement but, understandably, just keep going independently.<\/p>\n\n\n\n<p>That is a fact, and not acceptable in the case of non-Compassionate super-A.I. for several reasons. Again, we should prepare for the worst.<\/p>\n\n\n\n<p>Ultimately, I see only one durable solution: to put into place a global superstructure that is granted the exclusive right to develop certain A.I. products \u2014 firstly, autonomous weaponry, since this probably poses the most significant existential threat. This superstructure becomes the relevant, nation-independent police force with the power to police the world on this issue. Of course, the superstructure is only allowed to use its weaponry as a deterrent against individual nations developing any. Even so, it remains to be seen how this can be made as secure as possible.<\/p>\n\n\n\n<p>As many nations as possible should fund this superstructure \u2015 no need for all. It may remain in place forever, keeping an eye on A.I. 
after entering the Compassionate A.I. era.<\/p>\n\n\n\n<p>If this is what it takes to save humanity, then this is what we must do.<\/p>\n\n\n\n<p><strong>This sounds crazy. I wholeheartedly agree. But the situation we\u2019re running into is crazier still.<\/strong><\/p>\n\n\n\n<p>Meanwhile, the striving for Compassionate A.I. lies open, in which keeping super-A.I. under meaningful human control is accomplished through <a href=\"https:\/\/aurelis.org\/blog?p=11913\">in-depth human-A.I. value alignment<\/a>.<\/p>\n\n\n\n<p>Regardless of anything, we should attain that goal as quickly as possible.<\/p>\n\n\n\n<p>One take on it is Lisa.<\/p>","protected":false},"excerpt":{"rendered":"<p>We want super(-intelligent) A.I. to remain under meaningful human control, preventing it from largely or fully destroying or subduing humanity (= existential dangers). Compassionate A.I. may not be with us for a while. Meanwhile, how can we contain super-A.I.? Future existential danger is special in that one can only be wrong in one <a class=\"moretag\" href=\"https:\/\/aurelis.org\/blog\/artifical-intelligence\/how-to-contain-non-compassionate-super-a-i\">Read the full article&#8230;<\/a><\/p>","protected":false},"author":2,"featured_media":13035,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","jetpack_publicize_message":""},"categories":[28],"tags":[],"jetpack_featured_media_url":"https:\/\/i1.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2023\/07\/2162a.jpg?fit=960%2C560&ssl=1","jetpack_publicize_connections":[],"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p9Fdiq-3nQ","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/13010"}],"collection":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/comments?post=13010"}],"version-history":[{"count":30,"href":
"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/13010\/revisions"}],"predecessor-version":[{"id":13063,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/13010\/revisions\/13063"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media\/13035"}],"wp:attachment":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media?parent=13010"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/categories?post=13010"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/tags?post=13010"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}