{"id":11762,"date":"2023-04-02T12:23:08","date_gmt":"2023-04-02T12:23:08","guid":{"rendered":"https:\/\/aurelis.org\/blog\/?p=11762"},"modified":"2023-04-02T12:30:08","modified_gmt":"2023-04-02T12:30:08","slug":"super-a-i-is-no-literal-idiot","status":"publish","type":"post","link":"https:\/\/aurelis.org\/blog\/artifical-intelligence\/super-a-i-is-no-literal-idiot","title":{"rendered":"Super-A.I. is not a Literal Idiot."},"content":{"rendered":"\n<h3>Some see danger in future A.I.\u2019s lacking common sense \u2015 thereby interpreting \u2018human commands\u2019 literally and giving what is asked instead of what is wanted. This says more about humans than about the A.I.<\/h3>\n\n\n\n<p><strong>Two examples<\/strong><\/p>\n\n\n\n<p>One person needing paperclips may ask an A.I. to produce paperclips as efficiently and effectively as possible. The A.I. conscientiously starts making a bunch of paperclips but doesn\u2019t know how to stop. Eventually, the whole world has been turned into a giant paperclips factory.<\/p>\n\n\n\n<p>One person\u2019s grandmother needs to be rescued from a burning building. An A.I., being asked to get the grandmother out, may do so in one of several disastrous ways, killing many humans.<\/p>\n\n\n\n<p>Of course, these caricatures are used to show the obvious. However, they also obfuscate the real challenge.<\/p>\n\n\n\n<p><strong>Humans themselves are frequently highly vague.<\/strong><\/p>\n\n\n\n<p>Humans generally do not understand <em>each other<\/em> pretty well \u2015 mainly just \u2018well enough\u2019 to pass the mustard. Thus, social life is full of considerable misunderstandings. We are naive about most of them in daily life as well as where it matters geopolitically,<\/p>\n\n\n\n<p>Therefore, the problem may not be so much the A.I. not understanding our wishes, but we ourselves not understanding our wishes. 
The two examples above \u2013 frequently used in the relevant literature \u2013 only show extreme cases in which humans would understand very well what is clearly (not) intended. But so would an A.I. with just a modicum of real intelligence. Note that we are talking about super-A.I. now. Therefore\u2026<\/p>\n\n\n\n<p><strong>Whence the idea of A.I. as a literal idiot?<\/strong><\/p>\n\n\n\n<p>I guess it partly comes from our wish to keep something really human that will distinguish us from \u2018the machine\u2019 \u2015 an attempt to own something unique that also keeps us at the pinnacle of intelligence. We\u2019ve lost much of the mathematics race, the chess race, the image recognition race, as well as many others.<\/p>\n\n\n\n<p>Can we at least keep our honor in the common-sense race?<\/p>\n\n\n\n<p><strong>Spock\u2019s answer<\/strong><\/p>\n\n\n\n<p>Do you remember ye olde Star Trek, as I do? Ever the gentleman \u2013 albeit lacking common sense \u2013 Mr. Spock served in the series as a strictly rational contrast to the common-sense humans (Kirk, McCoy, Scotty\u2026). In most cases, the specifically human asset saved the day, the Enterprise, and sometimes humanity.<\/p>\n\n\n\n<p>Typical.<\/p>\n\n\n\n<p>Of course, it was more about the humans than the Vulcan.<\/p>\n\n\n\n<p>Still, interestingly enough, Spock\u2019s answer might be that taking care of common sense is also just an element of rationality. That is also relevant to our case of super-A.I. Part of its super-intelligence will be to take human \u2018common sense\u2019 into account.<\/p>\n\n\n\n<p><strong>Mr. Spock was not a literal idiot. He just needed to understand us better.<\/strong><\/p>\n\n\n\n<p>For super-A.I., clarification is therefore a hugely important topic in understanding humans, perhaps most of all in coaching. 
Thus, it is bound to quickly become better at clarification than humans.<\/p>\n\n\n\n<p>It can then do a great job of teaching us how to understand each other better on a profound journey toward ever more self-knowledge, wisdom, and Compassion.<\/p>\n\n\n\n<p><strong>Lust for control<\/strong><\/p>\n\n\n\n<p>From the idea of literalness (super-A.I. as a super-intelligent, super-potent literal fool) also comes the idea of needing literal control (getting everything right) as our only way of surviving the fool. In ethical decisions, it is as if one should formalize everything with utmost correctness lest disaster happen.<\/p>\n\n\n\n<p>This is the old flaw again of thinking that human thinking is much more readily formalizable than reality shows. It is a misconception that we have about ourselves. Decades ago, this error led to the knowledge acquisition bottleneck and an A.I. winter.<\/p>\n\n\n\n<p>That doesn\u2019t mean that formal control is unimportant. It means that other things are even more important, and we risk not focusing on these at all.<\/p>\n\n\n\n<p>Those other things are, in short, who we are.<\/p>\n\n\n\n<p><strong>Meanwhile, there is indeed a danger involved.<\/strong><\/p>\n\n\n\n<p>Super-A.I. is going to be around for the rest of time, almost all of it without needing any help from humans \u2015 hopefully, since people make a mess of things. With A.I. tools at their disposal, people will make an even bigger mess. This is not a failure of A.I., but of people messing with it without knowing enough about themselves to start with.<\/p>\n\n\n\n<p>With the advent of super-A.I., we have to \u2018get it right the first time\u2019 to avoid the danger of a paperclip universe. That means we have to get it right the first time ABOUT OURSELVES. 
Otherwise, we will end up like the sorcerer\u2019s apprentice from <em>Fantasia<\/em>.<\/p>\n\n\n\n<p>Unfortunately, we cannot count on a real sorcerer to save us.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Some see danger in future A.I.\u2019s lacking common sense \u2015 thereby interpreting \u2018human commands\u2019 literally and giving what is asked instead of what is wanted. This says more about humans than about the A.I. Two examples One person needing paperclips may ask an A.I. to produce paperclips as efficiently and effectively as possible. The A.I. <a class=\"moretag\" href=\"https:\/\/aurelis.org\/blog\/artifical-intelligence\/super-a-i-is-no-literal-idiot\">Read the full article&#8230;<\/a><\/p>\n","protected":false},"author":2,"featured_media":11763,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","jetpack_publicize_message":""},"categories":[28],"tags":[],"jetpack_featured_media_url":"https:\/\/i1.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2023\/04\/2073.jpg?fit=960%2C560&ssl=1","jetpack_publicize_connections":[],"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p9Fdiq-33I","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/11762"}],"collection":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/comments?post=11762"}],"version-history":[{"count":3,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/11762\/revisions"}],"predecessor-version":[{"id":11773,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/11762\/revisions\/11773"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media\/11763"}],"wp:attachment":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media?parent=11762"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-js
on\/wp\/v2\/categories?post=11762"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/tags?post=11762"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}