{"id":13334,"date":"2023-10-18T19:28:34","date_gmt":"2023-10-18T19:28:34","guid":{"rendered":"https:\/\/aurelis.org\/blog\/?p=13334"},"modified":"2024-05-20T18:30:41","modified_gmt":"2024-05-20T18:30:41","slug":"compassionate-a-i-as-basis-for-regulation","status":"publish","type":"post","link":"https:\/\/aurelis.org\/blog\/artifical-intelligence\/compassionate-a-i-as-basis-for-regulation","title":{"rendered":"Compassion as Basis for A.I. Regulations"},"content":{"rendered":"\n<h3>To prevent A.I.-related mishaps or even disasters while going into a future of super-A.I., merely regulating A.I. is not sufficient \u2015 presently nor in principle. <\/h3>\n\n\n\n<p><strong>Striving for Compassionate A.I.<\/strong><\/p>\n\n\n\n<p>There will eventually be <a href=\"https:\/\/aurelis.org\/blog?p=10305\">no security concerning A.I. if we don\u2019t put Compassion into the core<\/a>. The main reason is that <a href=\"https:\/\/aurelis.org\/blog?p=9025\">super-A.I. will be <em>much<\/em> more intelligent than humans<\/a>. We\u2019re not talking about a distant future, but a <a href=\"https:\/\/aurelis.org\/blog?p=11373\">double ethical bottleneck soon enough<\/a>. Meanwhile, the present already clamors for <a href=\"https:\/\/aurelis.org\/blog?p=6819\">Compassion<\/a> <em>and <\/em>regulation. The one without the other will not do. Of course, Compassion entails a realistic view upon the human being. Fortunately, we make <a href=\"https:\/\/aurelis.org\/blog\/category\/neurocognitively\">good strides toward this in neurocognitive science<\/a>.<\/p>\n\n\n\n<p>Giving a false impression of  security through regulations may even be especially dangerous since it prevents us to fully appreciate the intractability of what&#8217;s coming <a href=\"https:\/\/aurelis.org\/blog?p=12857\">even while many don&#8217;t see it yet<\/a>.<\/p>\n\n\n\n<p>Can squirrels regulate us?<\/p>\n\n\n\n<p><strong>On the other hand, Compassionate A.I. 
may not suffice to make people feel secure.<\/strong><\/p>\n\n\n\n<p>Since humans are profoundly sentient and relatively intelligent beings, Compassionate super-A.I. may take care of that fear in a good way and out of its own sense of Compassion.<\/p>\n\n\n\n<p>Meanwhile, we can already get on with building regulations that are explicitly posited on a Compassionate basis. This way, we will be where we need to be in due time.<\/p>\n\n\n\n<p><strong>Doing so may bring Compassionate A.I. closer to realization.<\/strong><\/p>\n\n\n\n<p>It\u2019s like shaping the mold. It may make people aware of the possibilities and inspire them to take care of the Compassionate direction in many developments. It may also prevent some of the <a href=\"https:\/\/aurelis.org\/blog?p=11775\">panicky reactions<\/a> that doubtlessly lie ahead.<\/p>\n\n\n\n<p>We don\u2019t have to wait to work on this. It starts with awareness and intention.<\/p>\n\n\n\n<p><strong>More than a simple rephrasing<\/strong><\/p>\n\n\n\n<p>More than ever, we will need stringency and flexibility in the phrasing. Of course, that requires profound thinking.<\/p>\n\n\n\n<p>Compassion-based A.I. regulations are not constraints but part of the Compassionate flow. They are what a Compassionate and brilliant being would spontaneously accomplish in self-regulation if it knows the situation well from the other party\u2019s standpoint. Unfortunately, we don\u2019t see this consistently between groups of humans.<\/p>\n\n\n\n<p>The aim is inter-party congruence, not merely rule-based control, let alone such control of one party over the other. Ideally, the \u2018regulations\u2019 must be set up in mutual agreement or in the conceived setting that such agreement is possible \u2014 indeed, it will undoubtedly become possible.<\/p>\n\n\n\n<p><strong>Three laws or three thousand rules<\/strong><\/p>\n\n\n\n<p>Whatever the detail and extent of regulation, it needs to be congruent with Compassion wherever applicable. 
Moreover, Compassion, as the broadest direction, provides a safety net for unforeseen situations, of which many will surely arise.<\/p>\n\n\n\n<p>We should avoid overregulating as a panicky reaction to unforeseen situations and thus getting into a regulatory mess \u2014 especially when different parts of the world come up with contradictory regulations. We may not yet see much of this mess, but we\u2019re only at the beginning.<\/p>\n\n\n\n<p><strong>With optimal Compassion, overregulation can be avoided.<\/strong><\/p>\n\n\n\n<p>In other words, regulations may then primarily be seen as invitations to become even more Compassionate in intrinsically aligned ways. Only secondarily \u2013 though crucially \u2013 are they the borders of conduct.<\/p>\n\n\n\n<p>Ideally, such regulations also carry less inherent subjectivity since they have an additional aim that can in principle be circumscribed rather well.<\/p>\n\n\n\n<p><strong>Broader than a human-A.I. issue<\/strong><\/p>\n\n\n\n<p>This is pertinent not only from humans toward A.I. but also from humans to humans. We see this in the <em>contrat social<\/em> that implicitly regulates how humans behave toward one another. Eventually, this will also be the setting in which humans know how to behave in their interactions with A.I.<\/p>\n\n\n\n<p>This way, the thinking can be done recognizably in all directions, which is excellent.<\/p>\n\n\n\n<p><strong>In my view, this is the only durable way in the short and \u2013 also very \u2013 long term.<\/strong><\/p>\n\n\n\n<p>Regulations that arise from Compassion are more natural, more easily adjusted to many situations, and more efficient. 
No weakness is involved, but <a href=\"https:\/\/aurelis.org\/blog?p=4039\">gentleness and strength<\/a> \u2014 a nice and, in this case, probably necessary combination.<\/p>\n\n\n\n<p>Then, relatively little additional alignment will ever be needed.<\/p>\n\n\n\n<p><strong>This may seem like something for far into the future.<\/strong><\/p>\n\n\n\n<p>One can be mistaken.<\/p>\n\n\n\n<p>Also, the long view is frequently an excellent way to better understand short-term implications that are not yet explicitly visible.<\/p>\n\n\n\n<p><strong>Better acceptability already now<\/strong><\/p>\n\n\n\n<p>Based on Compassion, regulations may be more acceptable to all concerned, including the \u2018regulators.\u2019<\/p>\n\n\n\n<p>It undoubtedly helps with the branding issues \u2013 say, resistance \u2013 that otherwise are and will be encountered from one or more stakeholders.<\/p>\n\n\n\n<p><strong>Regulations strictly aimed at control set us up for adversity.<\/strong><\/p>\n\n\n\n<p>They may make many see potential \u2018<a href=\"https:\/\/aurelis.org\/blog?p=10758\">enemies in advance<\/a>\u2019 in super-A.I. and in the developers\/users of such A.I.<\/p>\n\n\n\n<p>People are prone to searching\/finding\/creating enemies. We shouldn\u2019t add fuel to this fire by \u2018regulating the enemy.\u2019 Compassionate A.I. may turn out to be our best friend.<\/p>\n\n\n\n<p>I think this is the only way toward a decent future, one of <a href=\"https:\/\/aurelis.org\/blog?p=13267\">better A.I. for better humans<\/a>. This way, <a href=\"https:\/\/aurelis.org\/blog?p=11913\">Compassionate humans and Compassionate A.I. will naturally come to human-A.I. value alignment<\/a>.<\/p>\n\n\n\n<p><strong>Now, you may call me an idealist.<\/strong><\/p>\n\n\n\n<p>Hopefully, the future will call me a realist.<\/p>\n\n\n\n<p>__<\/p>\n\n\n\n<p><strong>Addendum<\/strong><\/p>\n\n\n\n<p>Me: \u201cLisa, will Compassionate A.I. be enough to save humanity from \u2018badass A.I.\u2019? 
Please give original arguments that are not present in this blog.\u201d<\/p>\n\n\n\n<p>Lisa: \u201cTo address your question about whether Compassionate A.I. will be enough to save humanity from &#8216;badass A.I.&#8217;, let&#8217;s explore some original arguments:<\/p>\n\n\n\n<ul><li><strong>Intrinsic motivation and deep learning<\/strong>: Compassionate A.I. could be designed with intrinsic motivations aligned with human well-being. This involves more than just programming empathy or ethical behavior but embedding a deep understanding of human values into the learning process. Such A.I. would prioritize actions that promote overall human flourishing and avoid harmful behaviors even in unforeseen circumstances.<\/li><li><strong>Integrated ethical frameworks<\/strong>: A robust framework that integrates various ethical theories (utilitarianism, deontology, virtue ethics) can help create a multi-faceted approach to ethical decision-making. Compassionate A.I. could thus weigh the consequences of actions, respect individual rights, and promote virtuous behaviors, providing a balanced ethical guide to its operations.<\/li><li><strong>Human-A.I. symbiosis<\/strong>: Encouraging a symbiotic relationship between humans and A.I. can foster mutual growth and understanding. By co-evolving with A.I., humans can ensure that these systems remain aligned with human values and societal goals, creating a dynamic balance where both entities benefit from each other\u2019s strengths.<\/li><li><strong>Continuous learning and adaptation<\/strong>: A Compassionate A.I. must be capable of continuous learning and adaptation. This includes understanding and incorporating new ethical insights, adapting to changes in human society, and evolving its compassion-based frameworks to address new challenges effectively.<\/li><\/ul>\n\n\n\n<p>While Compassionate A.I. 
has significant potential to mitigate the risks posed by &#8216;badass A.I.&#8217;, it must be part of a broader strategy that includes intrinsic motivation, integrated ethical frameworks, human-A.I. symbiosis, decentralized control, and continuous learning. Only through such comprehensive measures can we hope to navigate the complex future of superintelligent systems effectively.\u201d<\/p>","protected":false},"excerpt":{"rendered":"<p>To prevent A.I.-related mishaps or even disasters while going into a future of super-A.I., merely regulating A.I. is not sufficient \u2015 presently nor in principle. Striving for Compassionate A.I. There will eventually be no security concerning A.I. if we don\u2019t put Compassion into the core. The main reason is that super-A.I. will be much more <a class=\"moretag\" href=\"https:\/\/aurelis.org\/blog\/artifical-intelligence\/compassionate-a-i-as-basis-for-regulation\">Read the full article&#8230;<\/a><\/p>","protected":false},"author":2,"featured_media":13340,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","jetpack_publicize_message":""},"categories":[28],"tags":[],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2023\/10\/2194-3.jpg?fit=960%2C565&ssl=1","jetpack_publicize_connections":[],"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p9Fdiq-3t4","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/13334"}],"collection":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/comments?post=13334"}],"version-history":[{"count":23,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/13334\/revisions"}],"predecessor-version":[{"id":15581,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/13334\/revisions\/15581"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog
\/wp-json\/wp\/v2\/media\/13340"}],"wp:attachment":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media?parent=13334"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/categories?post=13334"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/tags?post=13334"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}