The brain computes, although not in the way a present-day computer does. As a computing device, it is general-purpose.
Scientists have found that the neocortex – the part of the brain where much of human intelligence happens – is much the same over its whole surface. Any neocortical patch can develop in a variety of functional directions, depending on which sensory modality is plugged into it. If an auditory organ (your ear) is wired into it, it becomes an auditory patch.
This is the case not only at birth but lifelong, although some flexibility is lost with old age. In animal experiments dating back to the early 1990s, patches of neocortex were cut out from, for example, the auditory cortex and implanted in the visual cortex. After a while, such a transplanted patch functions just fine, like the rest of the visual cortex. Even more strikingly, when the optic nerve is rerouted to the auditory cortex, the latter becomes, over time, visual cortex.
Thus, much of the neocortex is, indeed, general-purpose. This carries some profoundly intriguing implications.
The general-purpose characteristic comes in handy for the natural evolution of peripheral devices (ears, eyes, etc.). That is, nature can tinker with a new device – sensory or motor – and the brain takes the new peripheral in plug-and-play mode, making the best of it.
This way, a peripheral device such as the human eye could evolve from something basic into the wonderfully complex instrument with which you are reading this text. Nature could try out thousands of visual options throughout evolution, relying on the brain to follow suit spontaneously – no problem. This trick has made evolution immensely powerful.
By the way, this counters the idea that the eye's complexity is by itself proof of intelligent design. Nature has been able to evolve the eye in many small steps, each of which was viable in the setting of its time.
In the future, we will be able to follow nature's path further, toward more input devices into the human brain. Nothing in the brain says that we are necessarily bound to the peripherals that nature has endowed us with until now. If this sounds scary, it is, but more and more people already have brain implants for various health-related reasons. The Rubicon has been crossed.
In principle, there are no bounds to the possibilities. For instance, your grandchildren may have brain implants connecting them directly to the Internet, or a device that gives them echolocation or magnetic-field perception. Other implants may provide access to super-limbs on the moon. The science here seems to know fewer bounds than science fiction. It will all be possible – in principle.
Personally, this makes me very apprehensive.
Another implication of the flexibility described above is that the brain can also change continually. It is a dynamic system, constantly altering its circuitry to match ever-changing demands. Unlike a computer with fixed hardware, the brain is, as one may call it, 'liveware.'
These changes are visible to the naked eye (of a brain surgeon), for instance after years of playing the piano, or even just some months of studying hard for university exams. The speed with which this happens – anatomically and functionally – has surprised scientists in recent years.
Looking more closely, one can see changes much more quickly. Ultimately, every experience you have changes something in your brain – materially. The changes accumulated over the years add up to who you are.
This is not surprising in view of body-mind unity; rather, it is further evidence of that unity. From here, the implications for health and well-being are exciting. As you may know, I have explored several of these.
Toward intelligent design ― of A.I.
For the next stages of A.I., instead of engineering everything from A to Z at the designer's table, it may be wiser to engineer only the basics of a system that can evolve by itself. As in the human brain, some things can be fixed from the start; others can evolve according to what gives the most desired result in any unforeseen environment.
For this, a system needs a notion of 'relevance.' Present-day A.I., however, doesn't work with relevance. It learns indiscriminately whatever it is fed. It lacks the motivation to learn. That makes it computationally powerful but functionally incomparable to us. What the relevance leap requires is an A.I. system that explores. It chooses its input modalities, predicts the concrete input, and notices where the prediction and the actual input don't overlap. This mismatch then becomes a reason to adjust the next forecast. The prediction itself works through pattern recognition and completion. The future of A.I. probably lies in mastering the intricacies of such a prediction process.
Abstractly seen, this can describe how the brain works as well as how future A.I. may work.
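The explore-predict-compare-adjust loop described above can be sketched in a few lines of code. This is only a toy illustration, not anyone's actual system: the class name, the scalar input stream, and the single learning-rate parameter are all illustrative assumptions. The agent forecasts its next input, measures where the forecast and the actual input don't overlap (the prediction error), and uses that mismatch both to adjust the next forecast and as a crude 'relevance' signal.

```python
class PredictiveAgent:
    """Toy sketch (illustrative, not a real A.I. system): an agent that
    predicts its next input, compares the prediction to the actual
    input, and treats the mismatch as both a learning signal and a
    measure of relevance."""

    def __init__(self, learning_rate=0.2):
        self.learning_rate = learning_rate
        self.prediction = 0.0  # the agent's current forecast of its input

    def step(self, actual):
        # Where the prediction and the actual input don't overlap:
        error = actual - self.prediction
        # The mismatch becomes a reason to adjust the next forecast:
        self.prediction += self.learning_rate * error
        # The size of the surprise serves as a relevance signal:
        return abs(error)

agent = PredictiveAgent()
# A constant input stream: at first it is surprising, then, as the
# forecast converges, the prediction error (and so the relevance of
# each new sample) shrinks toward zero.
errors = [agent.step(1.0) for _ in range(30)]
print(round(errors[0], 3), round(errors[-1], 5))
```

Nothing here is 'motivated' yet, of course; the point of the sketch is only the structure of the loop, in which surprising inputs stand out and fully predicted inputs fade into irrelevance.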
Autonomy for free
Of course, if this sounds scary, it is. It lends a lot of autonomy to the A.I., which may be given some abstract goal to strive for, with further autonomy as to how to attain it. This quickly becomes a deontological issue.
In my view, this evolution cannot be stopped, but it can be led in proper directions. Explorative growth and Compassion then become extremely important.