{"id":12857,"date":"2023-07-15T12:30:40","date_gmt":"2023-07-15T12:30:40","guid":{"rendered":"https:\/\/aurelis.org\/blog\/?p=12857"},"modified":"2023-07-16T13:59:51","modified_gmt":"2023-07-16T13:59:51","slug":"why-we-dont-see-whats-around-the-corner","status":"publish","type":"post","link":"https:\/\/aurelis.org\/blog\/artifical-intelligence\/why-we-dont-see-whats-around-the-corner","title":{"rendered":"Why We don\u2019t See What\u2019s Around the Corner"},"content":{"rendered":"\n<h3>The main why (of not seeing pending super-A.I.) is all about us. We need a next phase in self-understanding, but this is not getting realized yet. If we don\u2019t see this, we don\u2019t see that.<\/h3>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>This is an excerpt \u2013 in slightly different format \u2013 from my book <a href=\"https:\/\/aurelis.org\/blog?p=2819\">The Journey Towards Compassionate A.I.<\/a><\/p><\/blockquote>\n\n\n\n<p><strong>Complex machinery<\/strong><\/p>\n\n\n\n<p>When looking at \u2018the machine\u2019 as only a machine that ultimately still needs to be programmed by humans, we don\u2019t appreciate that machines can be not only very <a href=\"https:\/\/aurelis.org\/blog?p=3591\">complicated but also open-complex<\/a>. <a href=\"https:\/\/aurelis.org\/blog?p=3910\">Every living organism is an example of an open-complex machine<\/a>, including us. We are proof of their existence. We are not proof that there is only one single way, namely ours. We are merely proof that the potential exists and may be reached in many ways. Especially in a medium different from organic molecules, we cannot foresee which ways.<\/p>\n\n\n\n<p>As far as we know, there is no example of any of that at present. A future example will be super-A.I. 
Given the many possibilities, what we can anticipate is that we will be surprised.<\/p>\n\n\n\n<p><strong>We tend to view intelligence from an anthropocentric perspective.<\/strong><\/p>\n\n\n\n<p>This may lead us to underestimate the potential of many small improvements in several domains towards the full Monty of super-A.I. <em>in a relatively short time<\/em>. But this anthropocentric perspective doesn\u2019t hold even in our own case. A popular view on human intelligence still sees it as one solid entity. An Intelligence Quotient test is supposed to measure this one thing. Fortunately, this view has been superseded, in theory at least. Yet the tendency to wrongly think of mental capacities as solid entities persists in popular culture and among many professionals alike.<\/p>\n\n\n\n<p>Not only in the domain of intelligence. Another popular example, memory, has not fared any better. There are several kinds of human memory, located in different parts of the brain. Moreover, the relation between memory and intelligence is, in the human case, much more complex than previously thought.<\/p>\n\n\n\n<p>As explained elsewhere in <em>The Journey<\/em>, complexity creates potential, so nature wasn\u2019t just busy messing things up in open-complexity. One might say that from a certain viewpoint, it\u2019s on purpose.<\/p>\n\n\n\n<p><strong>Many parts of the human brain are like modules that have each evolved simultaneously in relative isolation <em>and<\/em> in interplay with the other parts.<\/strong><\/p>\n\n\n\n<p>This way, they could incrementally become more performant without plunging the whole into chaos. Simultaneously, several substantial advantages came from their cooperation inside and outside the skull, leading to our own \u2018human singularity\u2019: an unprecedented explosion of intelligence that has brought us to where we are now, with the potential to pass the baton of most intelligent entity on earth to super-A.I. 
So familiar, and yet so different, the same principles may once more save the day. In the case of super-A.I., as in our own, myriad incremental improvements may eventually create the basis for take-off. These improvements will be in domains such as:<\/p>\n\n\n\n<ul><li>more powerful data representation schemes<\/li><li>better search algorithms<\/li><li>better information filtering algorithms<\/li><li>more autonomy for software agents (bots, modules)<\/li><li>more efficient protocols governing the interfaces between such agents<\/li><li>more profound integration of conceptual and subconceptual processing<\/li><\/ul>\n\n\n\n<p>\u2026 and so much more, with continuous progress everywhere.<\/p>\n\n\n\n<p>Of course, incremental improvements may be part of and\/or combined with significant breakthroughs. As John Brockman writes in <em>Possible Minds<\/em>:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p><em>Technological prediction is particularly chancy, given that technologies progress by a series of refinements, halted by obstacles and overcome by innovation\u2026 I typically find that some of the technological steps I expect to be easy turn out to be impossible, whereas some of the tasks I imagine to be impossible turn out to be easy. You don\u2019t know until you try.<\/em> [Brockman, 2019]<\/p><\/blockquote>\n\n\n\n<p><strong>Intelligence overhang<\/strong><\/p>\n\n\n\n<p>One of the mechanisms that may lead to the \u201cSurprise, I\u2019m here!\u201d phenomenon comes from \u2018intelligence overhang\u2019 (my term).<\/p>\n\n\n\n<p>Some new development in hardware, software, or plain insight may use vast resources that are already available. With some enhancements, the result of combining it with what lies waiting may become much more efficient, much faster than one could predict by looking only at the new development. For instance, when an A.I. 
understands language with just enough sophistication, it can read the Internet, upgrade itself, and get to singularity. New algorithms are like instruments with which new discoveries can be made with old big data. The old big data themselves can be seen as a kind of intelligence overhang.<\/p>\n\n\n\n<p>In a way, intelligence overhang is how people \u2013 every individual anew \u2013 get smart through being educated on the knowledge of many centuries in combination with the individual\u2019s relatively small mental processing power. For A.I., a virtually unlimited number of potential performance gains lies waiting as intelligence overhang, exploitable far more efficiently than by a human child in a classroom.<\/p>\n\n\n\n<p><strong>The best example of intelligence overhang is related to intelligence itself in the non-human-organic \u2192 human progression.<\/strong><\/p>\n\n\n\n<p>Some stretch is needed to make this fit, but to me, it is essentially the same phenomenon. Let me explain in a few sentences. The drive-to-thrive has been present from the beginnings of life. What we associate with intelligence, especially the conceptual kind, came much more recently to this planet. But when it happened, it joined with the already present drive-to-thrive into consciousness. Strictly speaking, the overhang was present in the drive, but that also depends on the viewpoint.<\/p>\n\n\n\n<p>Anyway, the result was <a href=\"https:\/\/aurelis.org\/blog?p=12480\">consciousness<\/a>, appearing relatively suddenly through the splashing together of one thing already waiting a long time and another coming along. Nobody saw it coming until it was quite suddenly there.<\/p>\n\n\n\n<p><strong>Do we see it coming now in the case of A.I.?<\/strong><\/p>\n\n\n\n<p>Especially once A.I. 
gains a modicum of true autonomy in combination with a potential for self-improvement, any intelligence overhang multiplies its development pace. That is, its potential grows exponentially. This is the level Nick Bostrom calls the \u2018crossover,\u2019 a<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p><em>point beyond which the system\u2019s further improvement is mainly driven by the system\u2019s own actions rather than by work performed upon it by others.<\/em> [Bostrom, 2014]<\/p><\/blockquote>\n\n\n\n<p>After the crossover, he sees exponential growth because any increase immediately translates into increased optimization power over the system\u2019s own capabilities and thus further improvement in growth itself. It\u2019s a self-reinforcing spiral.<\/p>\n\n\n\n<p><strong><a href=\"https:\/\/aurelis.org\/blog?p=11739\">Is this a reason to pull the plug?<\/a> On the contrary!<\/strong><\/p>\n\n\n\n<p>The promises of A.I. are immense, as Stuart Russell describes:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p><em>Humanity could proceed with their development and reap the almost unimaginable benefits that would flow from the ability to wield far greater intelligence in advancing our civilization. We would be released from millennia of servitude as agricultural, industrial, and clerical robots and we would be free to make the best of life\u2019s potential.<\/em> [Russell, 2019]<\/p><\/blockquote>\n\n\n\n<p>Beyond all potential advantages, BT Hyacinth points out:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p><em>Opposition to automation, robotics and AI is about as futile as it would have been in the 20th century to oppose electricity.<\/em> [Hyacinth, 2019]<\/p><\/blockquote>\n\n\n\n<p>However, the promises as well as the dangers are definitely good reasons why <a href=\"https:\/\/aurelis.org\/blog?p=6819\">Compassion<\/a> should take the most prominent place within A.I. developments worldwide.<\/p>\n\n\n\n<p>Now.<\/p>\n\n\n\n<p><strong>Finally, I hope \u2013 as you already know \u2013 that \u2018around the corner\u2019 also lies a journey towards a more Compassionate humanity.<\/strong><\/p>\n\n\n\n<p>There is work to be done here. Will A.I. control us, or will we control A.I.? Hmm. Will A.I. help us, and will we help A.I. to become more Compassionate? I prefer the second question. It may alleviate the first and hopefully make it unnecessary.<\/p>\n\n\n\n<p>Being careful is always an essential part of true Compassion. 
Being careful indeed, I want to envision a self-perpetuating positive spiral of Compassion.<\/p>\n\n\n\n<p>This is an existential issue.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The main why (of not seeing pending super-A.I.) is all about us. We need a next phase in self-understanding, but this is not getting realized yet. If we don\u2019t see this, we don\u2019t see that. This is an excerpt \u2013 in slightly different format \u2013 from my book The Journey Towards Compassionate A.I. Complex machinery <a class=\"moretag\" href=\"https:\/\/aurelis.org\/blog\/artifical-intelligence\/why-we-dont-see-whats-around-the-corner\">Read the full article&#8230;<\/a><\/p>\n","protected":false},"author":2,"featured_media":12858,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","jetpack_publicize_message":""},"categories":[28],"tags":[],"jetpack_featured_media_url":"https:\/\/i2.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2023\/07\/2150.jpg?fit=963%2C559&ssl=1","jetpack_publicize_connections":[],"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p9Fdiq-3ln","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/12857"}],"collection":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/comments?post=12857"}],"version-history":[{"count":3,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/12857\/revisions"}],"predecessor-version":[{"id":12861,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/12857\/revisions\/12861"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media\/12858"}],"wp:attachment":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media?parent=12857"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/categories?post=12857"}
,{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/tags?post=12857"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}