Is A.I. Becoming More Philosophy than Technology?

October 19, 2023 | Artificial Intelligence

This question has been relevant for years already. It is only becoming more pressing (for worse or for better). Of course, technology remains important, but it's more like the bricks than the building.

Many technologically oriented people may not like this idea. Those who do are probably shaping the future.

Some history

Historically, the development of A.I. has had, and still has, a dual purpose: 1) to emulate and thereby better understand the human mind, and 2) to use what we know about ourselves to develop more performant support for our human endeavors. Over the decades, this has indeed been realized, more or less, in two ways:

  • GOFAI (the Good Old-Fashioned A.I. of knowledge bases, heuristic rules, and expert systems) mainly showed how human experts, and non-experts alike for that matter, do not generally think. The ensuing knowledge-acquisition gap led to an A.I. winter with few commercial accomplishments.
  • But even then, researchers kept posing the same questions and trying to solve them. Many insights that now see broad daylight were developed decades ago, including ones far removed from GOFAI. Some tweaking, together with much more data and computing power, now makes them shine.

At the same time, other successful A.I. developments appear to roam away from the human example. Notoriously, generative transformer technology is one of them. But even the old backpropagation of supervised learning had little in common with human thinking. The trend away from human thinking has only grown.

How should we understand this?

Philosophy of A.I., philosophy of mind

Regarding human intelligence, consciousness, and wisdom as the only real ones answers the question of whether there can be non-human alternatives in a circular way: it puts the answer inside the definitions. For instance, 'consciousness' is possible nowhere but in a human being if it is defined as equal to 'human consciousness.'

Underlying this is a conceptual choice. Namely, we can use the same terms for more abstract concepts. This way, we can talk about them even apart from their human realizations, as I have done extensively in my book The Journey Towards Compassionate A.I. This makes the above dual purpose even more interesting by lending more freedom to creative thinking. More abstract concepts are better instruments for investigating more profound directions and, eventually, more performant realizations. For instance, one may ask what the purpose of a capacity has always been, reaching toward a functional abstraction. If this functionality gets realized in two quite different ways, it is interesting to see where the realizations differ. That may tell us more about both.

This way, the philosophy of A.I. and the philosophy of mind increasingly overlap: for instance, in questions of knowledge acquisition, representation (ontology), justification (epistemology), and experience (phenomenology), not to mention many profoundly ethical questions. Many of these issues have been around for a long time, but A.I. is making them much more pertinent.

Therefore, up toward more philosophy?

Philosophy can be seen as the tendency to start from more abstract concepts, in contrast to technology, which starts from more concrete concepts and realizations.

Of course, we need both. However, A.I. technology is advancing at such a pace that 'anything is possible' is gradually becoming true. If something is not possible now, it may already be possible in a few years. This makes technology less of a constraint and more purely an enhancer.

This by itself puts more emphasis on philosophy: the one(s) of the last few millennia, but even more so the one that starts from present-day scientific insights into the brain, mind, A.I., and more. A lot is going on in all these domains.

The complexity of A.I. furthermore brings into view the full human complexity it needs to serve. This, too, heightens the importance of profound philosophical thinking. Otherwise, unknowingly and even with good intentions, one may do more harm than good. For instance, human-centered A.I. should not be human-ego-centered A.I.; in many cases, rather the opposite. Because of this, a proper emphasis on Compassionate A.I. is frequently essential.

This is also crucial to concrete realizations.

If one sticks to specific technologies as the most important elements, one risks becoming obsolete in a few years, with the peril that the choices made weigh heavily on further development. The project may then fail despite initially promising developments.

Excellent philosophy is much more durable. It provides a properly human-centered framework in which to incorporate technologies as they come and go, superseded by ever more powerful ones at an increasing pace. This requires modular thinking to an unusual degree, far beyond modular mechanics. It requires depth. Intriguingly, this depth can be found especially in Compassion.

What is least required is a hazy philosopher, although such a thinker may be a source of inspiration.

What is most required is someone who can roam through several worlds without sticking to any of them. This may be the 'real philosopher' Plato already spoke of 2,500 years ago.

I presently don’t see many of these.
