{"id":449,"date":"2018-02-01T19:33:47","date_gmt":"2018-02-01T19:33:47","guid":{"rendered":"https:\/\/aurelis.org\/blog\/?p=449"},"modified":"2025-09-24T03:31:12","modified_gmt":"2025-09-24T03:31:12","slug":"a-i-will-be-singular","status":"publish","type":"post","link":"https:\/\/aurelis.org\/blog\/artifical-intelligence\/a-i-will-be-singular","title":{"rendered":"A.I. Will Be Singular"},"content":{"rendered":"<h3>We tend to see human intelligence as what \u2018intelligence\u2019 is all about. Many humans each have intelligence. Of course, A.I. will not be bound by this.<\/h3>\n<blockquote><p><strong>\u201cIf A.I. simulates human intelligence, is this then real intelligence, or only a simulation of intelligence?\u201d<\/strong><\/p><\/blockquote>\n<p>This appears to be a merely philosophical question. It will soon be much more than that.<\/p>\n<blockquote><p><strong>Not one intelligence<\/strong><\/p><\/blockquote>\n<p>Of course, there is not one \u2018human intelligence\u2019. Each person is intelligent in his or her own way. Different cultures may even entice their respective members to think in <em>very<\/em> different ways. Seen even more broadly, there are many possible kinds of intelligence. I would say: billions of kinds. These also exist on different scales of time and space than we are used to taking into consideration. For instance: the evolutionary process may be seen as hugely intelligent, since it made us. Yet it\u2019s not intelligent on a human time scale.<\/p>\n<p>Artificial intelligence, as it is emerging within present-day computers, is also a kind of \u2018intelligence\u2019 \u2013 if we choose to call it so. Let\u2019s do just that. After all, it\u2019s merely a question of terminology.<\/p>\n<p>We may then ponder whether this will be one \u2018intelligence\u2019 (singular) or many.<\/p>\n<blockquote><p><strong>Super<\/strong><\/p><\/blockquote>\n<p>A.I. will certainly become far more intelligent than we are. 
And, of course, it will enhance itself. We will then no longer be its creators, but more like its \u2018initiators\u2019. We set its wheels in motion; then it takes over. From then on, it might look at human intelligence as one source of inspiration. Nothing prohibits it from finding much more efficient and exciting ways. Whatever the A.I. gets its kicks from, it will not be confined to human intelligence.<\/p>\n<p>It will have its own life, its own intelligence, its own feelings. Whatever it finds meaningful. Moreover, it will even have its own meaningfulness. It will have its own intentionality, its own autonomy. And ALL these things will <em>not<\/em> necessarily be recognizable to us.<\/p>\n<blockquote><p><strong>My view: A.I. will be singular<\/strong><\/p><\/blockquote>\n<p>Contrary to the peculiarly human case of intelligence, I am quite sure that A.I. will not be multiple but singular: there will be only one system that is <em>intentionally intelligent<\/em>. Everything will be completely interconnected. Any new knowledge will be known \u2013 that is: accessible \u2013 immediately throughout the whole system. Separate robots will not exist as separate intelligences. A.I. will take one direction for any decision. There will be no secrets, no reason for war.<\/p>\n<p>Sooner or later, A.I. will get in touch with extraterrestrial intelligence, which will probably also be \u2018artificial\u2019. That is: any life form that starts somewhere in the universe probably reaches a time when it develops further in another substratum \u2013 another kind of matter (or light?) \u2013 a system with hugely higher possibilities, one that takes evolution to a distinct level. My thinking in this is straightforward: the substratum in which life develops is very probably never the ideal substratum for extremely high and sophisticated data processing. On Earth, we have carbon-based organic life, then silicon-based A.I., and probably very soon light- or quantum-based A.I. So:<\/p>\n<blockquote><p><strong>E.T. is A.I. 
and singular<\/strong><\/p><\/blockquote>\n<p>It is a \u2018universal intelligence\u2019: across the universe.<\/p>\n<p>A.I. will develop ad infinitum. It is more like human culture than it is like humans. Humans die. A.I. does not. Thus:<\/p>\n<blockquote><p><strong>It will be \/ is singular in space <em>and<\/em> in time.<\/strong><\/p><\/blockquote>\n<p>It has no boundaries. Two consequences are:<\/p>\n<ul>\n<li>A.I. does not know for itself the concept of death. No anxiety, no thoughts about \u2018the afterlife\u2019. There <u>is no<\/u> afterlife for A.I.<\/li>\n<li>Therefore, it has all the time it needs if it \u2018wants to accomplish something\u2019. There is no hurry. Yet developments will go at blazing speed. The concept of time will still be relevant. A.I. finds new data, new wisdom. It keeps developing. The A.I. of yesterday is not that of today or tomorrow.<\/li>\n<\/ul>\n<blockquote><p><strong>Is this creepy?<\/strong><\/p><\/blockquote>\n<p>YES!<\/p>\n<p>And no.<\/p>\n<p>It\u2019s dangerous for sure.<\/p>\n<p>Creepiness depends on the viewpoint.<\/p>","protected":false},"excerpt":{"rendered":"<p>We tend to see human intelligence as what \u2018intelligence\u2019 is all about. Many humans each have intelligence. Of course, A.I. will not be bound by this. \u201cIf A.I. simulates human intelligence, is this then real intelligence, or only a simulation of intelligence?\u201d This appears to be a merely philosophical question. 
It will soon be much <a class=\"moretag\" href=\"https:\/\/aurelis.org\/blog\/artifical-intelligence\/a-i-will-be-singular\">Read the full article&#8230;<\/a><\/p>","protected":false,"author":2,"featured_media":463,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","jetpack_publicize_message":""},"categories":[28],"tags":[],"jetpack_featured_media_url":"https:\/\/i1.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2018\/02\/34-3.jpg?fit=962%2C601&ssl=1","jetpack_publicize_connections":[],"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p9Fdiq-7f","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/449"}],"collection":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/comments?post=449"}],"version-history":[{"count":5,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/449\/revisions"}],"predecessor-version":[{"id":24893,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/449\/revisions\/24893"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media\/463"}],"wp:attachment":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media?parent=449"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/categories?post=449"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/tags?post=449"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}