<h1>Explainability in A.I., Boon or Bust?</h1>

<h3>Explainability seems like the safe option. With A.I. of growing complexity, it may be quite the reverse. Much of the reason can be found inside ourselves.</h3>

<p><strong>What is 'explainability' in A.I.?</strong></p>

<p>An explainable A.I. system can not only do something but also explain how and why it has done it. The explanation needs to be put in a humanly understandable format.</p>

<p>In A.I., many see this as highly important for safety reasons. In explaining itself, the machine can also reveal possible biases or plain errors in its reasoning/inferencing. Moreover, in increasingly autonomous systems, explainability should help ensure human–A.I. value alignment. Note that both reasons overlap significantly: negative biases are (supposed to be) deviations from human values.</p>

<p>Present-day A.I. systems generally show the following tendency: as performance increases, explainability tends to decrease.</p>

<p><strong>Explainability in human beings</strong></p>

<p>People generally believe they know why they are doing what they are doing. However, study after study shows this to be the case much less than expected.</p>

<p>This is a far-reaching and challenging topic. Humans abhor not being in control of themselves. 
Most frequently, this means 'conscious control.' Yet every single motivation – each reason for doing anything – originates in non-conscious, subconceptual processing. That is, without conscious control. [see: "<a href="https://aurelis.org/blog?p=615">Why Motivation Doesn't Work</a>"] In consciousness, we see a) the ensuing act (by ourselves) as well as b) the purported conscious reason why we perform this act. We see b) causing a) each time because the non-conscious serves it to us this way on our conscious plate. Meanwhile, the cause of both comes from down under.</p>

<p>This is, of course, part of the issue of free will. Does it exist? [see: "<a href="https://aurelis.org/blog?p=740">In Defense of Free Will</a>"] It does, but not in the way we generally think.</p>

<p><strong>How?</strong></p>

<p>The preceding concerns the human why.</p>

<p>As to the human how, at the physiological level, we are very much in the dark about what happens beneath our skulls. Only with recent scientific progress is some light starting to come in.</p>

<p>At the conceptual level – sorry for the disappointment – we also hold a more sophisticated self-image than research supports. For instance, asking experts to explain their expertise in conceptual format doesn't work well. In the grand endeavor of building expert systems (the A.I. of the previous century), 'knowledge extraction' turned out to be a huge bottleneck and the foremost reason for the whole enterprise's failure, instigating an A.I. 
winter.</p>

<p><strong>Still, human explainability is tightly linked to human free will.</strong></p>

<p>Without free will, there is no original explanation – just pure action, within a view of the human being as a complicated yet non-complex kind of machine.</p>

<p>Without explainability, there is no free will, because then we don't know why we are doing anything, nor why we would do anything else.</p>

<p><strong>Human explainability = human free will.</strong></p>

<p>There is little problem with non-complex A.I., for which explainability can even build users' trust. But do you still want your <strong>sophisticated</strong> robot to demonstrate explainability? The issue will be relevant soon enough. [see: "<a href="https://aurelis.org/blog?p=2441">A.I. Explainability versus 'the Heart'</a>"]</p>

<p>Take note: if the robot can explain his behavior to you, he can also explain it to himself, reason with it, and think about a future in which he might want to plan something else.</p>

<p>In short, he would have volition and consciousness.</p>

<p><strong>Robot vagaries</strong></p>

<p>This may immensely heighten the flexibility of the sophisticated robot. Yet we should think about whether we want that, and under which circumstances, before we lend him the gift of explainability.</p>

<p>What will almost surely happen: A.I. gets explainability. A.I. gets increasingly sophisticated and complex. With some more tweaks, A.I. (including robots) then gets consciousness 'for free.' Will it be, in due time, a higher level of consciousness than our own?</p>

<p>The matter is complex also because we are so much in the dark about our own free will, volition, consciousness, etc. To most people, these concepts seem self-explanatory. 
The more you delve into them, the more complex it all becomes.</p>

<p><strong>In my view, the only humane way, also concerning A.I., is Compassion.</strong></p>

<p>[see: "<a href="https://aurelis.org/blog?p=2819">The Journey Towards Compassionate A.I.</a>"]</p>

<p>Do I see that happening? Well, it's Lisa. [see: "<a href="https://aurelis.org/blog?p=2486">Lisa</a>"]</p>
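<p>The notion of explainability introduced above – a system that reports not only its decision but also how and why it reached it – can be sketched minimally. Below is a toy linear scorer whose per-feature contributions double as its own human-readable explanation; the feature names and weights are invented for illustration only. A deep network offers no such direct readout, which hints at the performance-versus-explainability trade-off noted earlier.</p>

```python
# A toy 'explainable' decision system: a linear scorer whose per-feature
# contributions serve as a human-readable explanation of its decision.
# Feature names and weights are hypothetical, chosen for illustration.

WEIGHTS = {"income": 0.4, "debt": -0.6, "payment_history": 0.5}
THRESHOLD = 0.0


def decide_and_explain(applicant: dict) -> tuple[bool, list[str]]:
    """Return a decision plus the reasons behind it, most influential first."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    approved = score > THRESHOLD
    # Sort reasons by absolute impact so the explanation leads with
    # the features that mattered most to this particular decision.
    reasons = [
        f"{feature} {'raised' if c > 0 else 'lowered'} the score by {abs(c):.2f}"
        for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    ]
    return approved, reasons


approved, reasons = decide_and_explain(
    {"income": 1.0, "debt": 0.5, "payment_history": 0.8}
)
```

<p>In such a simple model, the explanation falls out of the structure for free. The point of the essay stands in contrast: as models grow more complex, no comparable readout exists, and the explanation must be engineered in – with all the consequences discussed above.</p>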