How A.I. Will Change Us
At present, how A.I. changes us still depends mainly on human decisions. It is up to us, if only we take that responsibility, and take it now.
Heidegger
According to the German philosopher Martin Heidegger (1889–1976), technology is not neutral. It changes how humans think and behave. It even defines what we see as 'reality,' insofar as we are able to perceive it. Note that this is not about ultimate reality, which we may never know, and of which we may never even know whether it exists.
Thus, according to Heidegger, the technology of the time reveals the perceived reality of the time, which therefore develops through the ages. Within the boundaries of what is ultimately knowable, we may know reality only insofar, and in such ways, as our technology enables us. For example, the invention of the telescope fundamentally changed our view of the universe and of ourselves. This helped usher in the era of Western Enlightenment.
Super-A.I. will change us more than anything before.
Over the past few centuries, technological change has brought us automation, in which humans stay in full control, using machines to enhance human efficiency. With A.I., however, we are entering an era of autonomization. The machine is no longer merely used. Increasingly, it takes control of the process in ever more flexible and intelligent ways.
In doing so, it moves ever closer to us, in a setting of cooperation. Being closer to us, it also gains more ways to change us significantly. Those who embrace that cooperative setting may enjoy a growing advantage and thereby accelerate the change.
Essential changes will transform us most deeply.
These are changes with wide repercussions, stemming from some deeper level. As noted, many technological changes have already worked this way in the past. For instance, engines changed our mobility and thus also the way many people view their 'touristic needs,' fostering tourism industries in many countries.
A.I. will change our mental mobility. This may foster, or even create, new and unforeseen mental capabilities. We will see broader and subtler patterns where at present we see nothing, or only blobs. Surely, this will change the mind-body picture in many healthcare domains, creatively disrupting the present status and changing our view on many societal issues, if not the issues themselves.
Many boundaries will come down. Many domains will merge at their borders, if not in their essence. We must make sure this works to people's advantage.
What about human-A.I. value alignment?
Changing us profoundly also means changing our values through our cooperation with A.I. This is better done explicitly than implicitly (that is, largely outside our conscious awareness).
Thus, we must not only ask which human values we want A.I. to align with, but also which values we want humans to evolve toward through that coming evolution. At least partly, it is a circular process, for better or for worse. Whichever way it goes, we are bootstrapping ourselves toward it. To some degree, we influence the values on which future humanity (and future human-A.I. cooperation) will base further decisions. We do so perhaps most consequentially through the ways we develop A.I. right now.
To me, that is one more reason to prefer developments in the most Compassionate way possible. Fortunately, we can still choose Compassionate A.I.