Is A.I. Becoming More Philosophy than Technology?

October 19, 2023

This question has been relevant for years already, and it's only becoming more so (for worse or for better). Of course, technology remains important, but it's more like the bricks than the building.

Many technologically oriented people may not like this idea. Those who do are probably shaping the future.

Some history

Historically, the development of A.I. has had ― and still has ― a dual purpose: 1) to emulate and thereby better understand the human mind, and 2) to use what we know about ourselves to develop more performant support for our human endeavors. Over the decades, this has indeed been realized more or less ― in two ways:

  • GOFAI (Good Old-Fashioned A.I. of knowledge bases, heuristic rules, and expert systems) has mainly shown how human experts – and non-experts alike, for that matter – do not generally think. The ensuing knowledge acquisition bottleneck led to an A.I. winter with few commercial accomplishments.
  • But even then, researchers kept posing the same questions and trying to solve them. Many insights that see broad daylight now were developed decades ago, including ones far removed from GOFAI. Some tweaking, along with much more data and computing power, now makes them shine.

At the same time, other successful A.I. developments appear to roam away from the human example. Generative transformer technology is notoriously one of them. But even the old backpropagation of supervised learning had little in common with human thinking. The trend away from human thinking has only grown.

How should we understand this?

Philosophy of A.I., philosophy of mind

Regarding human intelligence, consciousness, and wisdom as the only real ones answers the question of whether there can be non-human alternatives in a circular way ― it puts the answers inside the definitions. For instance, 'consciousness' is by definition possible nowhere but in a human being if it is defined as equal to 'human consciousness.'

Underlying this is a conceptual choice. Namely, we can use the same terms for more abstract concepts. This way, we can talk about them even apart from their human realizations, as I have done extensively in my book The Journey Towards Compassionate A.I. This makes the above dual purpose even more interesting by lending more freedom to creative thinking. More abstract concepts are better instruments for investigating more profound directions and, eventually, more performant realizations. For instance, one may ask what has always been the purpose ― reaching toward a functional abstraction. If this functionality gets realized in two quite different ways, it is interesting to see where they differ. That may tell us more about both.

This way, the philosophy of A.I. and the philosophy of mind increasingly overlap ― for instance, in questions of knowledge acquisition, representation (ontology), justification (epistemology), and experience (phenomenology), not to speak of many profoundly ethical questions. Many of these issues have been around for a long time, but A.I. is making them much more pertinent.

Therefore, onward toward more philosophy?

Philosophy can be seen as the tendency to start from more abstract concepts, in contrast to technology, which starts from more concrete concepts and realizations.

Of course, we need both. However, A.I. technology is advancing at such a pace that 'anything is possible' is gradually becoming the reality. If something is not possible now, it may already be in a few years. This makes technology less of a constraint and more purely an enhancer.

This by itself puts more emphasis on philosophy ― the one(s) of the last few millennia, but even more so the one that starts from present-day scientific insights into the brain, mind, A.I., and more. A lot is going on in all these domains.

The complexity of A.I. furthermore brings into view the full human complexity it needs to serve. This, too, heightens the importance of profound philosophical thinking. Otherwise, unknowingly and even with good intentions, one may do more harm than good. For instance, human-centered A.I. should not be human-ego-centered A.I. ― in many cases, rather the opposite. Because of this, a proper emphasis on Compassionate A.I. is frequently essential.

This is also crucial to concrete realizations.

If one sticks to specific technologies as the most important elements, one risks obsolescence within a few years, with the peril that earlier choices weigh heavily on further development. The project may then fail despite initially promising results.

Excellent philosophy is much more durable. It provides a properly human-centered framework in which to incorporate technologies as they come and go ― superseded by even more powerful ones, as it happens, increasingly quickly. This requires modular thinking to an unusual degree and far beyond modular mechanics. It requires depth. Intriguingly, this can be found especially in Compassion.

Least required is a hazy philosopher ― although his thinking may be a source of inspiration.

Most required is someone who can roam through several worlds without sticking to any of them. This may be the 'real philosopher' Plato already talked about some 2,400 years ago.

I presently don’t see many of these.
