How A.I. will Change Us

January 6, 2024 · Artificial Intelligence

How this unfolds still depends mainly on human decisions. It's up to us, if only we take that responsibility now.

Heidegger

According to the German philosopher Martin Heidegger (died 1976), technology is not neutral. It changes how humans think and behave. It even defines what we see as 'reality,' insofar as we are able to perceive it. Note that this is not about ultimate reality, which we may never know, and whose very existence we may never even be able to confirm.

Thus, the technology of the time – according to Heidegger – reveals the perceived reality of the time, which therefore develops through the ages. Within the boundaries of the ultimately knowable, we may know reality only insofar, and in such ways, as our technology enables us to. For example, the invention of the telescope fundamentally changed our view of the universe and ourselves. This helped usher in the era of the Western Enlightenment.

Super-A.I. will change us more than anything before.

Over the past few centuries, technological change has brought us automation, whereby humans stay in full control, using machines to enhance human efficiency. With A.I., however, we are entering an era of autonomization. The machine isn't just used anymore. Increasingly, it takes control of the process in ever more flexible and intelligent ways.

In doing so, it comes to stand ever closer to us, in a setting of cooperation. By being closer, it also gains more ways to change us significantly. Those who embrace that setting may be at a growing advantage, which further accelerates the change.

Essential changes are the ones that will change us most.

These are changes with wide repercussions stemming from some deeper level. As noted, many technological changes have already done so in the past. For instance, engines changed our mobility and thus also the way many view their 'touristic needs,' fostering tourism industries in many countries.

A.I. will change our mental mobility. This may foster or even create new and unforeseen mental capabilities. We will see broader and subtler patterns where we now see nothing or only blobs. Surely, this will change the mind-body picture in many healthcare domains, creatively disrupting the present status and changing our view on many societal issues, if not the issues themselves.

Many boundaries will come down. Many domains will merge at their borders, if not in their essence. We must make sure this happens to people's advantage.

What about human-A.I. value alignment?

Changing us profoundly also means changing our values through our cooperation with A.I. This is better done explicitly than implicitly (that is, largely outside our conscious awareness).

Thus, we do not just have to consider which human values we want A.I. to align with, but also which values we want humans to evolve toward through that pending evolution. At least partly, it's a self-reinforcing circle, for better or for worse. Either way, we are bootstrapping ourselves toward it. To some degree, we influence the values upon which future humanity (and human-A.I. cooperation) will take further decisions. We do so perhaps most consequentially through the ways we develop A.I. right now.

To me, that is one more reason to prefer developments in the most Compassionate way possible. Fortunately, we can still choose Compassionate A.I.
