{"id":13780,"date":"2023-12-09T20:14:14","date_gmt":"2023-12-09T20:14:14","guid":{"rendered":"https:\/\/aurelis.org\/blog\/?p=13780"},"modified":"2024-09-18T12:40:11","modified_gmt":"2024-09-18T12:40:11","slug":"global-human-a-i-value-alignment","status":"publish","type":"post","link":"https:\/\/aurelis.org\/blog\/artifical-intelligence\/global-human-a-i-value-alignment","title":{"rendered":"Global Human-A.I. Value Alignment"},"content":{"rendered":"\n<h3>Human values align deeply across the globe, though they vary on the surface. Thus, striving for human-A.I. value alignment can create positive challenges for A.I. and opportunities for humanity.<\/h3>\n\n\n\n<p><strong>A.I. may make the world more pluralistic.<\/strong><\/p>\n\n\n\n<p>With A.I. tools, different peoples\/cultures can strive for more self-efficacy, each doing their own thing independently. If cultures develop their super-A.I. around distinct value systems, they may gradually drift apart.<\/p>\n\n\n\n<p>This is not necessarily a bad thing if the cultures understand each other in-depth and are highly tolerant of surface-level differences. It could make humanity endlessly fascinating and worthy of deeper exploration.<\/p>\n\n\n\n<p>But that\u2019s not straightforward, as everybody knows. The main challenge is that people often aren&#8217;t consciously aware of their core values.<\/p>\n\n\n\n<p><strong>Default values?<\/strong><\/p>\n\n\n\n<p>Would it be OK for individual users to change the value system according to which they want to be treated by, for instance, an A.I.-driven chatbot such as Lisa? If so, how far should this customization go? This opens a critical dialogue on how adaptable systems can balance personalization with universal ethical standards.<\/p>\n\n\n\n<p>I think this is a slippery slope at a deep level. 
If an A.I. system\u2019s reactions to different people arise from different value systems, it becomes harder to know which values underlie which behaviors, jeopardizing control in view of the deeper level\u2019s complexity.<\/p>\n\n\n\n<p>Users could ask why the system behaves in certain ways (the issue of explainability), but this may lead to long and winding conversations, with no guarantee that they won\u2019t need to be repeated the next day, since the system is in continual evolution.<\/p>\n\n\n\n<p>Therefore, in my view, it\u2019s better to strive for at least a profound level of default values. Currently, this is being explored through concepts like &#8216;constitutional A.I.&#8217;<\/p>\n\n\n\n<p><strong>Lisa\u2019s value system<\/strong><\/p>\n\n\n\n<p>Lisa has a value system and an in-depth personality that is based on this value system. For a substantial part, this comes from the blogs that you are reading now. Hopefully, this value system is global enough that people from everywhere can eventually recognize their own values in it. I\u2019m optimistic about this possibility.<\/p>\n\n\n\n<p>Also, Lisa can talk with people from different cultures about other cultures and learn from these conversations, asking people value-related questions in many ways. At the same time, if Lisa does an excellent job, this may bring people together interculturally.<\/p>\n\n\n\n<p>As you might think, Lisa\u2019s core value is&nbsp;<a href=\"https:\/\/aurelis.org\/blog?p=6819\">Compassion, basically<\/a>. That\u2019s already a considerable decision to stand by. It may act as the essential ethical bridge between human diversity and technological progress. As a user of Lisa, you know this decision has been made durably. 
The expression of Compassion may vary across cultures.<\/p>\n\n\n\n<p><strong>Striving for global value alignment<\/strong><\/p>\n\n\n\n<p>On our tiny planet, for a decently humane future, it is logical that we should strive for global, in-depth value alignment (with different cultural realizations) between all humans \u2015 then also between them and future super-A.I.<\/p>\n\n\n\n<p>So, the striving must be for&nbsp;<strong>global<\/strong>&nbsp;in-depth human-A.I. value alignment.<\/p>\n\n\n\n<p>We\u2019re not there yet.<\/p>\n\n\n\n<p>We should be.<\/p>\n\n\n\n<p>\u2015<\/p>\n\n\n\n<p><strong>Addendum<\/strong><\/p>\n\n\n\n<p><strong>Me<\/strong>: \u201cHi Lisa, do you see a better default foundation than Compassion in this respect? Why?\u201d<\/p>\n\n\n\n<p><strong>Lisa<\/strong>: \u201cI don\u2019t see a better foundation than compassion. Compassion provides both the depth and openness needed for genuine connection across cultures and individuals. It aligns with human depth and supports growth from the inside out. Compassion reaches beyond surface differences, creating a common ground that respects diversity while addressing suffering and fostering inner strength. It is broad yet focused, allowing A.I. and humans to work toward global alignment in an ethically sound and profoundly human way.\u201d<\/p>\n\n\n\n<p><strong>Me<\/strong>: \u201cAccording to you, is Compassion interculturally entirely acceptable?\u201d<\/p>\n\n\n\n<p><strong>Lisa<\/strong>: \u201cFrom my perspective, compassion is interculturally acceptable at a profound level, but how it\u2019s expressed can vary across cultures. In essence, compassion touches something deeply human in everyone, regardless of background. It\u2019s about understanding, relieving suffering, and fostering growth, which are universally valued. However, cultural differences might shape the way compassion is shown or received. 
For example, some cultures may emphasize emotional closeness, while others focus more on practical support.<\/p>\n\n\n\n<p>At its core, though, compassion transcends cultural boundaries, acting as a bridge between surface-level differences.\u201d<\/p>\n\n\n\n<p><strong>Me<\/strong>: \u201cThanks. This is something to delve into in another blog.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Human values align deeply across the globe, though they vary on the surface. Thus, striving for human-A.I. value alignment can create positive challenges for A.I. and opportunities for humanity. A.I. may make the world more pluralistic. With A.I. tools, different peoples\/cultures can strive for more self-efficacy, each doing their own thing independently and thereby floating away from <a class=\"moretag\" href=\"https:\/\/aurelis.org\/blog\/artifical-intelligence\/global-human-a-i-value-alignment\">Read the full article&#8230;<\/a><\/p>\n","protected":false},"author":2,"featured_media":13781,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","jetpack_publicize_message":""},"categories":[28],"tags":[],"jetpack_featured_media_url":"https:\/\/i2.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2023\/12\/2251.jpg?fit=960%2C561&ssl=1","jetpack_publicize_connections":[],"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p9Fdiq-3Ag","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/13780"}],"collection":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/comments?post=13780"}],"version-history":[{"count":5,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/13780\/revisions"}],"predecessor-version":[{"id":17169,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/13780\/revisions\/17169"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/
v2\/media\/13781"}],"wp:attachment":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media?parent=13780"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/categories?post=13780"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/tags?post=13780"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}