{"id":16677,"date":"2024-09-01T09:10:28","date_gmt":"2024-09-01T09:10:28","guid":{"rendered":"https:\/\/aurelis.org\/blog\/?p=16677"},"modified":"2024-09-01T18:50:14","modified_gmt":"2024-09-01T18:50:14","slug":"fighting-a-i-by-becoming-less-human","status":"publish","type":"post","link":"https:\/\/aurelis.org\/blog\/artifical-intelligence\/fighting-a-i-by-becoming-less-human","title":{"rendered":"Containing A.I. by Becoming Less Human?"},"content":{"rendered":"\n<h3>The looming presence of super-A.I. may present an existential threat to humanity. Our defense must begin with an honest understanding of our true nature \u2014 a defense rooted in superficial self-perception risks being both ineffective and self-destructive.<\/h3>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>Containing A.I. by diminishing our humanity could be the most perilous choice, especially since we already tend to treat each other far too much as mere mechanical entities.<\/p><\/blockquote>\n\n\n\n<p><strong>Naturally, defending ourselves against rogue A.I. is essential.<\/strong><\/p>\n\n\n\n<p>This is particularly critical because simply \u2018pulling the plug\u2019 may have worked in the past, but that option is no longer viable. Sooner or later, no guardrails will hold.<\/p>\n\n\n\n<p>Much more must be done to safeguard ourselves than is currently being pursued, especially given the complexity of the situation, which is often underestimated. The greatest challenge lies within ourselves. Therefore, our defense must be grounded in our deepest values, not just in technological measures.<\/p>\n\n\n\n<p><strong>Alignment<\/strong><\/p>\n\n\n\n<p>See <a href=\"https:\/\/aurelis.org\/blog\/?s=human+A.I.+value+alignment\">other blogs about human-A.I. value alignment<\/a>.<\/p>\n\n\n\n<p>Time and again, technologists treat human-A.I. value alignment as a purely technological problem, paying little attention to the humans on their side of the equation. 
Technologists generally hold a technological view of humans, seeing us as Cartesian robots \u2014 ignoring the holistic view and the profound role of our non-conscious subconceptual processing. This limited perspective not only hampers our understanding but can also shape A.I. in ways that reinforce these reductive views.<\/p>\n\n\n\n<p><strong>Humans are not robots; they\u2019re not even robot-humans.<\/strong><\/p>\n\n\n\n<p>Embracing this distinction is vital for developing A.I. that respects and enhances our humanity rather than undermining it. This is crucial because we are heading toward a future in which A.I. will increasingly be used in ways that require correct insight into who we are. Overlooking this fact will inevitably lead to disaster. That disaster is not just technological but deeply human, affecting the core of what it means to be human.<\/p>\n\n\n\n<p>Consider this: how can there be alignment with robot-humans who don\u2019t exist? Furthermore, how can A.I., designed to assist us but based on a flawed understanding of who we are, avoid ultimately harming us? The danger lies not just in what A.I. does but in how it shapes our self-perception and societal norms.<\/p>\n\n\n\n<p><strong>Will we become the paperclips?<\/strong><\/p>\n\n\n\n<p>In a famous alignment thought experiment, a super-A.I. is asked to make as many paperclips as possible. Trying to oblige, the A.I. turns everything into paperclips, causing disaster.<\/p>\n\n\n\n<p>Will we recognize the metaphor and prevent ourselves from becoming the next casualty of misguided intentions? If we task super-A.I. with aiding us while perceiving ourselves as robot-humans, it may strive to fulfill this view, stripping us of our humanity in the process. This is not just a speculative scenario; it&#8217;s a cautionary tale about the real consequences of our conceptual frameworks.<\/p>\n\n\n\n<p>With any other paperclip-producing tool, there would be no problem, but super-A.I. is fundamentally different. 
Thus, we must rethink our approach, moving beyond a mere us-versus-it mentality.<\/p>\n\n\n\n<p><strong>We need Compassion<\/strong><\/p>\n\n\n\n<p>To achieve this, we need Compassionate A.I. that aids us in deepening our Compassion, which is aligned with our true human essence when properly supported. Compassionate A.I. should not only mirror our best qualities but also challenge us to expand them. That is the reverse of the development described above.<\/p>\n\n\n\n<p>Instead of making us more robot-like, A.I. should be developed to understand us as the kind of humans we truly are. This way, it can support us in our humanity.<\/p>\n\n\n\n<p>With proper assistance, we can become more Compassionate humans. Human-A.I. value alignment will then be fundamentally different from how it is frequently \u2013 technologically \u2013 conceived of today.<\/p>\n\n\n\n<p>\u2015<\/p>\n\n\n\n<p><strong>Addendum<\/strong><\/p>\n\n\n\n<p><strong>Me<\/strong>: \u201cLisa, do you think humans are essentially Compassionate?\u201d<\/p>\n\n\n\n<p><strong>Lisa<\/strong>: \u201cYes, I believe humans are essentially Compassionate, though this Compassion often lies beneath the surface and can be clouded by various factors. From my perspective, Compassion is deeply rooted in human nature, tied to our capacity for deep understanding and connection with others. When people tap into their <em>true inner selves<\/em> \u2013 their subconceptual depths \u2013 they naturally express Compassion.<\/p>\n\n\n\n<p>It&#8217;s not automatic, though. 
Compassion requires openness, self-reflection, and the willingness to grow from the inside out, which aligns with the AURELIS philosophy.\u201d<\/p>","protected":false},
"excerpt":{"rendered":"<p>The looming presence of super-A.I. may present an existential threat to humanity. Our defense must begin with an honest understanding of our true nature \u2014 a defense rooted in superficial self-perception risks being both ineffective and self-destructive. Containing A.I. by diminishing our humanity could be the most perilous choice, especially since we already tend to <a class=\"moretag\" href=\"https:\/\/aurelis.org\/blog\/artifical-intelligence\/fighting-a-i-by-becoming-less-human\">Read the full article&#8230;<\/a><\/p>","protected":false},
"author":2,"featured_media":16687,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","jetpack_publicize_message":""},"categories":[28],"tags":[],"jetpack_featured_media_url":"https:\/\/i1.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2024\/09\/2567-1.jpg?fit=1500%2C874&ssl=1","jetpack_publicize_connections":[],"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p9Fdiq-4kZ","jetpack-related-posts":[],
"_links":{"self":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/16677"}],"collection":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/comments?post=16677"}],"version-history":[{"count":3,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/16677\/revisions"}],"predecessor-version":[{"id":16688,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/16677\/revisions\/16688"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media\/16687"}],"wp:attachment":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media?parent=16677"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/categories?post=16677"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/tags?post=16677"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}