Is A.I. Becoming More Philosophy than Technology?

October 19, 2023 · Artificial Intelligence

This question has already been relevant for years. It's only becoming worse (or better). Of course, technology remains important, but it's more like the bricks than the building.

Many technologically oriented people may not like this idea. The ones who do are probably shaping the future.

Some history

Historically, the development of A.I. has had and still has a dual purpose: 1) to emulate and thereby better understand the human mind, and 2) to use what we know about ourselves in order to develop more performant support for our human endeavors. Over the decades, this has indeed been realized more or less ― in two ways:

  • GOFAI (Good Old-Fashioned A.I., built on knowledge bases, heuristic rules, and expert systems) has mainly shown how human experts – and non-experts alike, for that matter – do not generally think. The ensuing knowledge acquisition gap led to an A.I. winter with few commercial accomplishments.
  • But even then, researchers kept posing the same questions and trying to solve them. Many insights that see broad daylight now were developed decades ago, including ones far removed from GOFAI. Some tweaking, along with much more data and computing power, now makes them shine.

At the same time, other successful A.I. developments appear to roam away from the human example. Notoriously, generative transformer technology is one of them. But even the old backprop of supervised learning had little in common with human thinking. The trend away from human thinking has only grown.

How should we understand this?

Philosophy of A.I., philosophy of mind

Looking at human intelligence, consciousness, and wisdom as the only real ones answers the question of whether non-human alternatives can exist in a circular way ― it puts the answer inside the definition. For instance, ‘consciousness’ is, as a matter of fact, possible nowhere but in a human being if it is defined as equal to ‘human consciousness.’

Underlying this is a conceptual choice. Namely, we can use the same terms for more abstract concepts. This way, we can talk about them even apart from their human realizations, as I have done extensively in my book The Journey Towards Compassionate A.I. This makes the above dual purpose even more interesting by lending more freedom to creative thinking. More abstract concepts are better instruments for investigating more profound directions and, eventually, more performant realizations. For instance, one may ask what the purpose has always been ― reaching toward functional abstraction. If this functionality gets realized in two quite different ways, it's interesting to see where they differ. That may tell us more about both.

This way, the philosophy of A.I. and the philosophy of mind increasingly overlap ― for instance, in questions of knowledge acquisition, representation (ontology), justification (epistemology), and experience (phenomenology), not to speak of many profoundly ethical questions. Many of these issues have been around for a long time, but A.I. is making them much more pertinent.

Therefore, up toward more philosophy?

Philosophy can be seen as the tendency to start from more abstract concepts, in contrast to technology, which starts from more concrete concepts and realizations.

Of course, we need both. However, A.I. technology is advancing at such a pace that ‘Anything is possible’ is gradually becoming true. If something is not possible now, it may already be in a few years. This makes technology less of a constraint and more purely an enhancer.

This by itself puts more emphasis on philosophy ― the one(s) of the last few millennia, but even more so the one that starts from present-day scientific insights into the brain, mind, A.I., and more. A lot is going on in all these domains.

Moreover, the complexity of A.I. brings into view the full human complexity it needs to serve. This also heightens the importance of profound philosophical thinking. Otherwise, unknowingly and even with good intentions, one may do more harm than good. For instance, human-centered A.I. should not be human-ego-centered A.I. ― in many cases, rather the opposite. Because of this, a proper emphasis on Compassionate A.I. is frequently essential.

This is also crucial to concrete realizations.

If one sticks to specific technologies as the most important elements, one risks becoming obsolete in a few years, with the peril that the choices made weigh heavily on further developments. The project may fail even after initially promising developments.

Excellent philosophy is much more durable. It provides a properly human-centered framework in which to incorporate technologies as they come and go ― superseded by even more powerful ones at an ever-increasing pace. This requires modular thinking to an unusual degree and far beyond modular mechanics. It requires depth. Intriguingly, this can be found especially in Compassion.

The least required is a hazy philosopher ― although such thinking may be a source of inspiration.

The most required is someone who can roam through several worlds without sticking to any of them. This may be the ‘real philosopher’ Plato already talked about 2,500 years ago.

I presently don’t see many of these.
