{"id":11971,"date":"2023-04-19T15:06:25","date_gmt":"2023-04-19T15:06:25","guid":{"rendered":"https:\/\/aurelis.org\/blog\/?p=11971"},"modified":"2023-04-19T20:02:35","modified_gmt":"2023-04-19T20:02:35","slug":"forward-forward-neuronal-networks","status":"publish","type":"post","link":"https:\/\/aurelis.org\/blog\/artifical-intelligence\/forward-forward-neuronal-networks","title":{"rendered":"Forward-Forward Neur(on)al Networks"},"content":{"rendered":"\n<h3>Rest assured, I don\u2019t stuff technical details into this blog. Nevertheless, this new framework lies closer to how the brain works, which is interesting enough to go somewhat into it.<\/h3>\n\n\n\n<p><strong>Backprop<\/strong><\/p>\n\n\n\n<p>In Artificial Neural Networks (ANN) \u2013 the subfield that sways a big scepter in A.I. nowadays \u2013 backpropagation (backprop) is one of the main buzzwords and is used ubiquitously. Present-day A.I. would be nowhere without it. However, that may change.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" src=\"https:\/\/i1.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2023\/04\/neural-network-3816319.png?resize=256%2C287&#038;ssl=1\" alt=\"\" class=\"wp-image-11991\" width=\"256\" height=\"287\" srcset=\"https:\/\/i1.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2023\/04\/neural-network-3816319.png?resize=912%2C1024&amp;ssl=1 912w, https:\/\/i1.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2023\/04\/neural-network-3816319.png?resize=267%2C300&amp;ssl=1 267w, https:\/\/i1.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2023\/04\/neural-network-3816319.png?resize=768%2C862&amp;ssl=1 768w, https:\/\/i1.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2023\/04\/neural-network-3816319.png?resize=1368%2C1536&amp;ssl=1 1368w, https:\/\/i1.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2023\/04\/neural-network-3816319.png?w=1710&amp;ssl=1 1710w\" sizes=\"(max-width: 256px) 100vw, 256px\" 
data-recalc-dims=\"1\" \/><figcaption>ANN schema with several layers from left to right<\/figcaption><\/figure><\/div>\n\n\n\n<p>In short, backprop works as follows:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>A multilayered ANN-in-training tries to accurately categorize, for instance, images of cats and dogs. But it makes many errors. After each forward pass through the network, from entry layers to exit layers, an error estimate is fed into the network at the output side and, from there, propagated back to earlier layers. So, signals go forward; error estimates go backward through the same network. ANNs work well this way for some problems, not for others.<\/p><\/blockquote>\n\n\n\n<p><strong>In any case, the human brain hardly works this way.<\/strong><\/p>\n\n\n\n<p>Firstly, backprop involves a lot of mathematics, while no calculator exists in the brain. There are only neurons \u2013 and other cells \u2013 and groups of them. The brain may simulate a calculator, functionally approaching one to a small degree. But there is no way the brain could perform the many complex calculations that backprop requires.<\/p>\n\n\n\n<p>Secondly, backprop would require, at the cellular level, a kind of memory that definitely isn\u2019t there.<\/p>\n\n\n\n<p>Thus, although low-level information in brainy neuronal networks can travel in many directions, a backprop direction can hardly be one of them. Moreover, there is no empirical evidence for it. Backprop is especially (extremely) implausible for information that evolves over time.<\/p>\n\n\n\n<p><strong>The brain goes forward.<\/strong><\/p>\n\n\n\n<p>Neurons in the human brain are unidirectional. 
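To make the backprop loop described under "Backprop" concrete, here is a minimal sketch (a toy two-layer network on invented data; the task, layer sizes, and learning rate are illustrative assumptions, not from this article):

```python
import numpy as np

# Toy illustration of the backprop loop described above: signals go
# forward through the layers; an error estimate is computed at the
# output and propagated backward through the same network.
# Network, data, and learning rate are invented for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))                          # toy inputs
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # toy labels

W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(200):
    # forward pass, entry layer to exit layer
    h = np.tanh(X @ W1)
    p = sigmoid(h @ W2)
    losses.append(float(np.mean((p - y) ** 2)))
    # backward pass: the output error flows back to earlier layers
    dp = 2.0 * (p - y) * p * (1.0 - p) / len(X)
    dW2 = h.T @ dp
    dh = (dp @ W2.T) * (1.0 - h ** 2)
    dW1 = X.T @ dh
    W1 -= 0.5 * dW1
    W2 -= 0.5 * dW2
```

Note how the backward half of the loop needs the exact derivative of every layer it passed through: precisely the kind of cell-level calculation and bookkeeping that, as argued above, the brain does not have.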
As a general flow, information goes from one side (the dendrites) to the cell body and from there along the axon, which connects with other cells\u2019 dendrites.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" src=\"https:\/\/i1.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2023\/04\/neuron-296581.png?resize=268%2C134&#038;ssl=1\" alt=\"\" class=\"wp-image-11992\" width=\"268\" height=\"134\" srcset=\"https:\/\/i1.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2023\/04\/neuron-296581.png?resize=1024%2C512&amp;ssl=1 1024w, https:\/\/i1.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2023\/04\/neuron-296581.png?resize=300%2C150&amp;ssl=1 300w, https:\/\/i1.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2023\/04\/neuron-296581.png?resize=768%2C384&amp;ssl=1 768w, https:\/\/i1.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2023\/04\/neuron-296581.png?resize=1536%2C768&amp;ssl=1 1536w, https:\/\/i1.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2023\/04\/neuron-296581.png?w=1920&amp;ssl=1 1920w\" sizes=\"(max-width: 268px) 100vw, 268px\" data-recalc-dims=\"1\" \/><figcaption>Neuron with flow from left to right<\/figcaption><\/figure><\/div>\n\n\n\n<p>Thus, all in all, and taking many loops into account, forward processing is what the brain does, even when working in the backward direction. For instance, much information travels from the brain to the eye, influencing how the eye itself reacts to what it \u2018sees.\u2019 This is not backprop but a forward feed in the backward (or \u2018top-down\u2019) direction.<\/p>\n\n\n\n<p><strong>A few months ago, <a href=\"https:\/\/arxiv.org\/abs\/2212.13345\">Geoffrey Hinton proposed the Forward-Forward algorithm<\/a> (FF) in neural networks.<\/strong><\/p>\n\n\n\n<p>In short, suppose an error is made (cat, not dog). FF handles this network experience in a forward fashion \u2015 erroneous and non-erroneous passes alike. 
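Both passes can live in one small, layer-local training loop. Here is a rough sketch of the idea for a single layer (the sum-of-squared-activations "goodness" and the threshold follow Hinton's paper; the toy data and all hyperparameters are invented for illustration):

```python
import numpy as np

# One forward-forward layer: real ("positive") and corrupted
# ("negative") data both go forward through the same layer, and the
# weights are nudged so the layer's goodness -- the sum of squared
# activations -- rises above a threshold for positive data and falls
# below it for negative data. No backward pass through other layers.
rng = np.random.default_rng(1)
pos = rng.normal(loc=+1.0, size=(64, 10))  # stand-in for real examples
neg = rng.normal(loc=-1.0, size=(64, 10))  # stand-in for corrupted ones

W = rng.normal(scale=0.1, size=(10, 16))
theta = 2.0                                # goodness threshold (assumed)

def goodness(X, W):
    h = np.maximum(X @ W, 0.0)             # ReLU activations
    return (h ** 2).sum(axis=1)            # per-example goodness

for _ in range(200):
    for X, positive in ((pos, True), (neg, False)):
        h = np.maximum(X @ W, 0.0)
        p = 1.0 / (1.0 + np.exp(theta - goodness(X, W)))  # P("real")
        # layer-local update: push p up for positive data, down for
        # negative data, using only this layer's own forward quantities
        coef = (1.0 - p) if positive else -p
        W += 0.03 * X.T @ (h * coef[:, None]) / len(X)
```

After training, positive examples score markedly higher goodness than negative ones, and nothing outside the layer was ever consulted.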
The forward trails of both (thus \u2018forward-forward\u2019) are compared, and the system learns.<\/p>\n\n\n\n<p>This is more aligned with how the brain works \u2015 even to such a degree that both cases (natural and artificial) can be described in pretty much the same way. I just did so and will do some more.<\/p>\n\n\n\n<p>Given good and bad examples, a neur(on)al network can proceed in two ways:<\/p>\n\n\n\n<ul><li>It can process each one largely apart from the other. Hinton proposes this may be the main reason why humans sleep \u2015 a period in which the brain can generate its own erroneous \u2018input.\u2019<\/li><li>Alternatively, a system can compare good and bad examples as overlays and automatically find out where the overlays mainly differ. This way, the differences are found subconceptually. The conceptual interpretation results from emergence.<\/li><\/ul>\n\n\n\n<p>Are you following? For the sake of comprehension, imagine two transparent sheets. On one is drawn a cat; on the other, a dog that overlaps as much as possible with the cat. You can look at this and immediately notice the differences without working anything out. It happens automatically.<\/p>\n\n\n\n<p><strong>The brain intrinsically likes automatic stuff.<\/strong><\/p>\n\n\n\n<p>Things that happen automatically don\u2019t require much energy or other resources. This makes the brain very power-efficient <em>and<\/em> responsive, as can also become the case with FF neural networks. One promise of the latter is that no massive amounts of memory or distinct processing are needed \u2015 contrary to what is generally the case with backprop. 
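That memory promise can be made concrete with a small sketch (an illustrative count of stored activations, not a benchmark; the layer sizes are invented):

```python
import numpy as np

# Backprop must keep every layer's activations alive until the backward
# pass arrives; a purely forward, layer-local scheme can discard each
# activation as soon as the next layer has consumed it.
rng = np.random.default_rng(2)
layers = [rng.normal(scale=0.1, size=(32, 32)) for _ in range(10)]
x = rng.normal(size=(8, 32))

# Backprop-style bookkeeping: cache all intermediate activations.
cache, h = [], x
for W in layers:
    h = np.maximum(h @ W, 0.0)
    cache.append(h)                  # held until the backward pass
floats_backprop = sum(a.size for a in cache)

# Forward-only bookkeeping: at most one layer's activations are alive.
peak, h = 0, x
for W in layers:
    h = np.maximum(h @ W, 0.0)
    peak = max(peak, h.size)

assert peak < floats_backprop        # 256 vs 2560 floats here
```

The gap grows linearly with depth, which is one reason forward-only training is attractive on low-power hardware.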
In short, purely forward processing \u2013 using a suitable architecture \u2013 can do with <em>far<\/em> less energy and memory.<\/p>\n\n\n\n<p>Just as it is easier to drop an object on your toe than to calculate how much that may hurt.<\/p>\n\n\n\n<p>It is also interesting that no magic is involved: we can explain more and more of the brain\u2019s inner secrets in relatively simple terms that can be experimentally investigated in an artificial medium.<\/p>\n\n\n\n<p><strong>Society of FF-networks<\/strong><\/p>\n\n\n\n<p>This is a personal addition to FF. At least, I haven\u2019t encountered it yet, but it follows logically. Such a society cannot but be close to the brain\u2019s working as a network of networks (of networks).<\/p>\n\n\n\n<p>One macro function (for instance, dancing the tango, or even one single move) is accomplished by many small brainy networks working together, each performing a part of the macro task. One network\u2019s forward pass can relatively easily be combined with others in many parallel and serial ways \u2015 which is what we see happening in the brain to a huge degree. Of course, nature\u2019s playing field has been immense in this regard.<\/p>\n\n\n\n<p>In such a society, many conglomerates of networks can work in different ways while accomplishing the same macro-functional result. In the human case, this is a well-investigated phenomenon. The same principle may be brought to the artificial world, creating many additional possibilities for efficiency and effectiveness.<\/p>\n\n\n\n<p><strong>(Only) two (of many possible) lessons<\/strong><\/p>\n\n\n\n<ol type=\"1\"><li>We can learn much from the brain when developing genuinely artificial intelligence. The natural case (we humans) gives us much inspiration.<\/li><li>We can learn from the above abstract thinking how humans are concocted \u2015 including the strengths and the flaws. 
Significantly, such insights may explain why our thinking does an excellent job while simultaneously being such a bunch of biases. It shows how hard it is to keep the intelligence while reducing bias. <a href=\"https:\/\/aurelis.org\/blog?p=9143\">The biases are part of the intelligence<\/a>. This explains why many people frequently act stupidly in so many interesting ways.<\/li><\/ol>\n\n\n\n<p><strong>It also shows how we can ameliorate the situation.<\/strong><\/p>\n\n\n\n<p>In my view, we will direly need this more than ever soon enough.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Rest assured, I don\u2019t stuff technical details into this blog. Nevertheless, this new framework lies closer to how the brain works, which is interesting enough to go somewhat into it. Backprop In Artificial Neural Networks (ANN) \u2013 the subfield that sways a big scepter in A.I. nowadays \u2013 backpropagation (backprop) is one of the main <a class=\"moretag\" href=\"https:\/\/aurelis.org\/blog\/artifical-intelligence\/forward-forward-neuronal-networks\">Read the full article&#8230;<\/a><\/p>\n","protected":false},"author":2,"featured_media":11972,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","jetpack_publicize_message":""},"categories":[28,30],"tags":[],"jetpack_featured_media_url":"https:\/\/i1.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2023\/04\/2097.jpg?fit=960%2C560&ssl=1","jetpack_publicize_connections":[],"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p9Fdiq-375","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/11971"}],"collection":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/comments?post=11971"}],"version-history":[{"count":21,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/11971\/revisions"}],"predecessor-version":[{"id":11999,"href":"https:\/\/a
urelis.org\/blog\/wp-json\/wp\/v2\/posts\/11971\/revisions\/11999"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media\/11972"}],"wp:attachment":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media?parent=11971"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/categories?post=11971"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/tags?post=11971"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}