{"id":4755,"date":"2021-03-14T21:13:05","date_gmt":"2021-03-14T21:13:05","guid":{"rendered":"https:\/\/aurelis.org\/blog\/?p=4755"},"modified":"2021-03-16T10:29:18","modified_gmt":"2021-03-16T10:29:18","slug":"the-next-breakthrough-in-a-i","status":"publish","type":"post","link":"https:\/\/aurelis.org\/blog\/artifical-intelligence\/the-next-breakthrough-in-a-i","title":{"rendered":"The Next Breakthrough in A.I."},"content":{"rendered":"\n<h3>will not be technological, but philosophical.<\/h3>\n\n\n\n<p>Of course, technology will be necessary to realize the philosophical. It will not be one more technological breakthrough, but rather a combination of new and old technologies.<\/p>\n\n\n\n<p><strong>&#8220;Present-day A.I. = sophisticated perception&#8221;<\/strong><\/p>\n\n\n\n<p>These are the words of Yann LeCun, a leading A.I. scientist, founding father of <em>convolutional nets,<\/em> which at present play a major role in many deep learning applications.<\/p>\n\n\n\n<p>Yann sees DNNs (Deep Neural Networks) as quite straightforward devices in which basically a linear input vector goes through a non-linear function towards output in weights, over and over again.<\/p>\n\n\n\n<p>And amazingly, as counter-intuitively as anything gets, it works.<\/p>\n\n\n\n<p><strong>Why \u2018amazingly\u2019?<\/strong><\/p>\n\n\n\n<p>The problem before the breakthrough was that artificial neural networks were deemed to get quickly into a rut, caught in some local minimum from which they would never escape. However, with many layers (in DNNs), in (mathematically) multidimensional space, such local minima are easily avoided, especially using gradient descent.<\/p>\n\n\n\n<p>Dear reader, never mind if you don\u2019t fully get this, but you certainly get a feel of it.<\/p>\n\n\n\n<p>The same Yann points out that this many-layers construct works because reality is compositional: it can be broken down into pieces, and each piece can be processed by itself. A car has wheels and doors. 
A wheel has rubber parts and metal parts.</p>

<p><strong>This is where, at first sight, Yann and Jean-Luc (your humble writer) have different opinions.</strong></p>

<p>In my view, the compositionality of reality is mainly a construct of the one perceiving that reality. It fits better when the perceiver is of the same kind as the builder of that (piece of) reality. A car is an excellent example, being a mechanical construct made by human beings. Its compositionality is part of the construction itself. This is also why it can readily be mass-produced.</p>

<p>Meanwhile, we live in such a human-made world that compositionality is intrinsically dominant everywhere,</p>

<p><strong>except in nature,</strong></p>

<p>where – I perfectly agree, of course – it is also present, but to a lesser degree and much more in synthesis with non-compositionality. Another term for the latter is complexity. [see: "<a href="https://aurelis.org/blog?p=3910">Complexity of Complexity</a>"]</p>

<p>We have arms and legs, softer parts and harder parts. We also have a brain/mind and, within that, a lot of complexity. Especially in the cerebrum (as opposed to the cerebellum, for instance), our brain/mind is composed of a multitude of elements that act together in such ways that the result cannot be calculated from the sum of the parts. In other words: it's a complex melting pot. If Nature were an engineer, she too would be surprised that it works. Moreover, the more melting, the better it works!</p>

<p>Back to Yann. He sees two major obstacles to the further development of A.I. based on present-day technologies:</p>

<ul><li>There is no reasoning (nor planning/prediction) involved.</li><li>Learning world models appears to be very difficult, if not impossible.</li></ul>

<p>This makes present-day A.I. 
nothing short of unintelligent. The term A.I. is, therefore, a misnomer, as is 'machine learning.' The machine does not actually 'learn' anything in the way we humans would call 'learning' as applied to ourselves. It is closer to a book getting a few more pages; one still needs an intelligent interpreter to make anything of it. 'Deep learning' in A.I. has so far come down mainly to additional segmentation using an immense amount of human-labeled data: supervised learning in DNNs.</p>

<p>I find this quite scary when applied to humanistic domains. It is highly control-based. In my view, it will not lead to ethically welcome results. Fortunately, it will probably lead to hardly any sustainable results either.</p>

<p><strong>The next revolution will be in real intelligence,</strong></p>

<p>whether human-simulated or starting from a more abstract level.</p>

<p>Thus, is real A.I. 'in the size' (more of the same) or in a qualitatively different endeavor? Will it be enabled through sheer brute force, breaking through with some simple procedure(s), as Yann described for the recent 'DNN revolution' (largely engendered by himself and a few others)?</p>

<p>Or?</p>

<p>It will probably be something radically different. As you know by now, my view on this is a philosophical one – not merely an armchair philosophy but a very pragmatic one. Technologically, there will be a search for the best combination of old and new technologies to make this philosophy optimally efficient. Much of this has already been done by me and others.</p>

<p><strong>This next revolution is not in the box.</strong></p>

<p>It will be out of the box.</p>

<p>Here, again, I'm evolving parallel to Yann: one of the ways to view the core will be energy-based self-supervised learning. 
Yann also calls this 'unsupervised.' If supervised learning is the sugar coating on a cake, he sees the latter as the cake itself.</p>

<p>An intriguing question is how much prior structure is required. A fact is that humans learn world models pretty quickly. They do so mainly by learning to predict, gradually filling in the gaps and dealing with uncertainty on the fly.</p>

<p><strong>PLUS a lot of flexibility within a transformer (again, kind of compositional) architecture.</strong></p>

<p>You may visualize this as a nicely burning fire within a circle of stones ― an ancient image with a futuristic implication.</p>

<p>Within a coaching setting, here comes Lisa. [see: "<a href="https://aurelis.org/blog?p=2486">Lisa</a>"] I think this is also the only way towards human-A.I. value alignment, on a journey towards Compassionate A.I. [see: "<a href="https://aurelis.org/blog?p=2819">The Journey Towards Compassionate A.I.</a>"]</p>

<p>A species-existential problem is that I don't see many evolving on this path.</p>
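<p>Earlier, present-day deep nets were described as "an input vector through a linear transformation and a non-linear function, over and over again." The following is a minimal sketch of just that pattern, with made-up illustrative weights rather than any trained model:</p>

```python
# Minimal sketch of a deep net as LeCun describes it:
# a linear map, then a non-linearity, repeated layer after layer.
# The weights below are arbitrary illustrative numbers, not a trained model.

def relu(v):
    """A common non-linearity: clip negatives to zero, element-wise."""
    return [max(0.0, x) for x in v]

def linear(W, b, v):
    """One layer's linear part: matrix-vector product plus bias."""
    return [sum(w * x for w, x in zip(row, v)) + bi
            for row, bi in zip(W, b)]

def forward(layers, v):
    """Apply (linear -> non-linear) once per layer, 'over and over.'"""
    for W, b in layers:
        v = relu(linear(W, b, v))
    return v

layers = [
    ([[0.5, -0.2], [0.1, 0.3]], [0.0, 0.1]),  # layer 1: 2 inputs -> 2 units
    ([[1.0, 1.0]], [-0.1]),                   # layer 2: 2 units -> 1 output
]
out = forward(layers, [1.0, 2.0])             # a single number near 0.8
```

<p>Depth is nothing but repetition of the same simple (linear, then non-linear) step; everything interesting lies in what the weights are trained to become.</p>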
id=\"loginform\" action=\"https:\/\/aurelis.org\/blog\/wp-login.php\" method=\"post\">\n\t\t\t\n\t\t\t<p class=\"login-username\">\n\t\t\t\t<label for=\"user_login\">Username or Email Address<\/label>\n\t\t\t\t<input type=\"text\" name=\"log\" id=\"user_login\" class=\"input\" value=\"\" size=\"20\" \/>\n\t\t\t<\/p>\n\t\t\t<p class=\"login-password\">\n\t\t\t\t<label for=\"user_pass\">Password<\/label>\n\t\t\t\t<input type=\"password\" name=\"pwd\" id=\"user_pass\" class=\"input\" value=\"\" size=\"20\" \/>\n\t\t\t<\/p>\n\t\t\t\n\t\t\t<p class=\"login-remember\"><label><input name=\"rememberme\" type=\"checkbox\" id=\"rememberme\" value=\"forever\" \/> Remember Me<\/label><\/p>\n\t\t\t<p class=\"login-submit\">\n\t\t\t\t<input type=\"submit\" name=\"wp-submit\" id=\"wp-submit\" class=\"button button-primary\" value=\"Log In\" \/>\n\t\t\t\t<input type=\"hidden\" name=\"redirect_to\" value=\"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/4755\" \/>\n\t\t\t<\/p>\n\t\t\t\n\t\t<\/form><\/div><\/div><\/div>","protected":false},"excerpt":{"rendered":"<p>will not be technological, but philosophical. Of course, technology will be necessary to realize the philosophical. It will not be one more technological breakthrough, but rather a combination of new and old technologies. &#8220;Present-day A.I. = sophisticated perception&#8221; These are the words of Yann LeCun, a leading A.I. 
scientist, founding father of convolutional nets, which <a class=\"moretag\" href=\"https:\/\/aurelis.org\/blog\/artifical-intelligence\/the-next-breakthrough-in-a-i\">Read the full article&#8230;<\/a><\/p>\n<div data-object_id=\"4755\" class=\"cbxwpbkmarkwrap cbxwpbkmarkwrap_no_cat cbxwpbkmarkwrap-post \"><a  data-redirect-url=\"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/4755\"  data-display-label=\"0\" data-show-count=\"0\" data-bookmark-label=\" \"  data-bookmarked-label=\" \"  data-loggedin=\"0\" data-type=\"post\" data-object_id=\"4755\" class=\"cbxwpbkmarktrig  cbxwpbkmarktrig-button-addto\" title=\"Bookmark This\" href=\"#\"><span class=\"cbxwpbkmarktrig-label\"  style=\"display:none;\" > <\/span><\/a> <div  data-type=\"post\" data-object_id=\"4755\" class=\"cbxwpbkmarkguestwrap\" id=\"cbxwpbkmarkguestwrap-4755\"><div class=\"cbxwpbkmarkguest-message\"><a href=\"#\" class=\"cbxwpbkmarkguesttrig_close\"><\/a><h3 class=\"cbxwpbookmark-title cbxwpbookmark-title-login\">Please login to bookmark<\/h3>\n\t\t<form name=\"loginform\" id=\"loginform\" action=\"https:\/\/aurelis.org\/blog\/wp-login.php\" method=\"post\">\n\t\t\t\n\t\t\t<p class=\"login-username\">\n\t\t\t\t<label for=\"user_login\">Username or Email Address<\/label>\n\t\t\t\t<input type=\"text\" name=\"log\" id=\"user_login\" class=\"input\" value=\"\" size=\"20\" \/>\n\t\t\t<\/p>\n\t\t\t<p class=\"login-password\">\n\t\t\t\t<label for=\"user_pass\">Password<\/label>\n\t\t\t\t<input type=\"password\" name=\"pwd\" id=\"user_pass\" class=\"input\" value=\"\" size=\"20\" \/>\n\t\t\t<\/p>\n\t\t\t\n\t\t\t<p class=\"login-remember\"><label><input name=\"rememberme\" type=\"checkbox\" id=\"rememberme\" value=\"forever\" \/> Remember Me<\/label><\/p>\n\t\t\t<p class=\"login-submit\">\n\t\t\t\t<input type=\"submit\" name=\"wp-submit\" id=\"wp-submit\" class=\"button button-primary\" value=\"Log In\" \/>\n\t\t\t\t<input type=\"hidden\" name=\"redirect_to\" 
value=\"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/4755\" \/>\n\t\t\t<\/p>\n\t\t\t\n\t\t<\/form><\/div><\/div><\/div>","protected":false},"author":2,"featured_media":4756,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","jetpack_publicize_message":""},"categories":[28],"tags":[],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2021\/03\/762.jpg?fit=960%2C557&ssl=1","jetpack_publicize_connections":[],"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p9Fdiq-1eH","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/4755"}],"collection":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/comments?post=4755"}],"version-history":[{"count":17,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/4755\/revisions"}],"predecessor-version":[{"id":4844,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/4755\/revisions\/4844"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media\/4756"}],"wp:attachment":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media?parent=4755"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/categories?post=4755"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/tags?post=4755"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}