{"id":10305,"date":"2022-11-04T07:58:47","date_gmt":"2022-11-04T07:58:47","guid":{"rendered":"https:\/\/aurelis.org\/blog\/?p=10305"},"modified":"2023-07-28T11:24:09","modified_gmt":"2023-07-28T11:24:09","slug":"why-a-i-must-be-compassionate","status":"publish","type":"post","link":"https:\/\/aurelis.org\/blog\/empathy-compassion\/why-a-i-must-be-compassionate","title":{"rendered":"Why A.I. Must Be Compassionate"},"content":{"rendered":"\n<h3>This is bound to become the most critical issue in humankind\u2019s history until now, and probably also from now on \u2015 to be taken seriously. Not thinking about it is like driving blindfolded on a highway.<\/h3>\n\n\n\n<p>If you have read my book <em><a href=\"https:\/\/aurelis.org\/blog?p=2819\">The Journey Towards Compassionate A.I.<\/a><\/em>, you know much of what\u2019s in this text. Nevertheless \u2015 no harm in some concise repetition while I expose the reasoning from another angle. The <em>why<\/em> of this text is more about causes than reasons.<\/p>\n\n\n\n<p><strong>Humanly used terminology<\/strong><\/p>\n\n\n\n<p>Terms like intelligence, free will, consciousness, and Compassion can be used to denote concepts that are only applicable to humans. This is a straightforward way to ascertain that no other creature will attain any of these. [With due respect to large octopi\u2019s intelligence, one may change \u2018human\u2019 into \u2018organic.\u2019] An A.I. will never be humanly intelligent nor conscious since, to be humanly, it needs to be human \u2015 obviously, at the surface level.<\/p>\n\n\n\n<p>Yet, with an additional twist, this by itself becomes more interesting. An A.I. system will <em>principally<\/em> never be humanly intelligent because of the <a href=\"https:\/\/aurelis.org\/blog?p=3910\">intractable complexity<\/a> of human intelligence. The latter is just too complex. 
Therefore, a system\u2019s intelligence will always be an approximation to human intelligence.<\/p>\n\n\n\n<p>For the same reason, there will probably never be an upload of human intelligence to some server. This, too, will always be an approximation.<\/p>\n\n\n\n<p><strong>Abstractly used terminology<\/strong><\/p>\n\n\n\n<p>The above list of terms can also be used to denote more abstract concepts. That is a straightforward way to ascertain that super-A.I. will attain them all \u2015 soon enough. It just depends on the choice of concepts.<\/p>\n\n\n\n<p>In this case, it\u2019s interesting to take a comprehensive view and see where our human realizations of these abstract concepts may differ from others. Are we the absolute pinnacle or just some point in a multi-featured landscape?<\/p>\n\n\n\n<p>Or are we humans the only creatures who can ever be intelligent\/conscious because of an esoteric conflation of the human and the abstract? This idea of human exceptionality seems the result of a dangerously neurotic case of <a href=\"https:\/\/aurelis.org\/blog?p=8111\">Eigenangst<\/a>, the unwillingness to look deep inside oneself. In this respect, we certainly need Compassionate A.I.<\/p>\n\n\n\n<p><strong>Information Integration<\/strong><\/p>\n\n\n\n<p>With a lot of information (data in context) and much internal integration, any system gradually starts looking like what one may call <em>intelligent<\/em>.<\/p>\n\n\n\n<p>According to some, this is enough to make it conscious \u2015 called the \u2018Information Integration Theory of consciousness.\u2019 In that sense, humans are supposed to be conscious <em>because<\/em> we are intelligent. 
Given the above, this is a conflation of humanly and abstractly used terminology, making us exceptional because we are, well, exceptional, aren\u2019t we?<\/p>\n\n\n\n<p>Sure we are exceptional, not because of our intelligence but our <em>human<\/em> intelligence.<\/p>\n\n\n\n<p><strong>Autonomy<\/strong><\/p>\n\n\n\n<p>According to some, autonomy is equal to free will \u2015 of course, depending on the degrees of freedom. An autonomous weapon hardly shows free will, having autonomy only in searching or even somehow choosing its target within constraints.<\/p>\n\n\n\n<p>Consider making the constraints wider (relaxing them) and providing the system with more features in which to act autonomously. Does that lead to free will?<\/p>\n\n\n\n<p>Again, the human case brings intractable complexity. No system will attain humanly complex free will \u2015 without which we humans also do not have what we would call consciousness.<\/p>\n\n\n\n<p><strong>Information Integration + autonomy<\/strong><\/p>\n\n\n\n<p>Lots of both \u2015 then, here we are. What is the fundamental difference between this synthesis and a reasonably abstract concept of consciousness? How do we abstractly (not humanly) rebuke such a creature when it says, \u201cI feel conscious?\u201d<\/p>\n\n\n\n<p>It is perfectly conceivable to design an A.I. that goes in this direction. With eyes wide closed, many organizations are already striving to precisely attain this. The advantages are immense and immediate in many ways, as well as the dangers of seeing \u2018them\u2019 reaching before \u2018us\u2019 many essential advantages in economic or even military competition.<\/p>\n\n\n\n<p><a href=\"https:\/\/aurelis.org\/blog?p=4879\">Invisible dangers<\/a> (closed eyes) are not going to stop decision-takers.<\/p>\n\n\n\n<p>That is why <a href=\"https:\/\/aurelis.org\/blog?p=2744\">A.I. 
must be Compassionate<\/a>.<\/p>\n<div data-object_id=\"10305\" class=\"cbxwpbkmarkwrap cbxwpbkmarkwrap_no_cat cbxwpbkmarkwrap-post \"><a  data-redirect-url=\"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/10305\"  data-display-label=\"0\" data-show-count=\"0\" data-bookmark-label=\" \"  data-bookmarked-label=\" \"  data-loggedin=\"0\" data-type=\"post\" data-object_id=\"10305\" class=\"cbxwpbkmarktrig  cbxwpbkmarktrig-button-addto\" title=\"Bookmark This\" href=\"#\"><span class=\"cbxwpbkmarktrig-label\"  style=\"display:none;\" > <\/span><\/a> <div  data-type=\"post\" data-object_id=\"10305\" class=\"cbxwpbkmarkguestwrap\" id=\"cbxwpbkmarkguestwrap-10305\"><div class=\"cbxwpbkmarkguest-message\"><a href=\"#\" class=\"cbxwpbkmarkguesttrig_close\"><\/a><h3 class=\"cbxwpbookmark-title cbxwpbookmark-title-login\">Please login to bookmark<\/h3>\n\t\t<form name=\"loginform\" id=\"loginform\" action=\"https:\/\/aurelis.org\/blog\/wp-login.php\" method=\"post\">\n\t\t\t\n\t\t\t<p class=\"login-username\">\n\t\t\t\t<label for=\"user_login\">Username or Email Address<\/label>\n\t\t\t\t<input type=\"text\" name=\"log\" id=\"user_login\" class=\"input\" value=\"\" size=\"20\" \/>\n\t\t\t<\/p>\n\t\t\t<p class=\"login-password\">\n\t\t\t\t<label for=\"user_pass\">Password<\/label>\n\t\t\t\t<input type=\"password\" name=\"pwd\" id=\"user_pass\" class=\"input\" value=\"\" size=\"20\" \/>\n\t\t\t<\/p>\n\t\t\t\n\t\t\t<p class=\"login-remember\"><label><input name=\"rememberme\" type=\"checkbox\" id=\"rememberme\" value=\"forever\" \/> Remember Me<\/label><\/p>\n\t\t\t<p class=\"login-submit\">\n\t\t\t\t<input type=\"submit\" name=\"wp-submit\" id=\"wp-submit\" class=\"button button-primary\" value=\"Log In\" \/>\n\t\t\t\t<input type=\"hidden\" name=\"redirect_to\" value=\"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/10305\" \/>\n\t\t\t<\/p>\n\t\t\t\n\t\t<\/form><\/div><\/div><\/div>","protected":false},"excerpt":{"rendered":"<p>This is bound to become the most 