Levels of Abstraction in Humans and A.I.

April 28, 2023 Artificial Intelligence, Cognitive Insights

Humans are masters of abstraction. We do it spontaneously, creating an efficient mental environment for ourselves, for others, and for our culture. The challenge now is to bring this to A.I.

Abstraction = generalization

Humans (and other animals) perform spontaneous generalization. From a number of example objects, we generalize to a concept. A concept is already an abstract mental construct. One cannot take, give, or build a house with concepts as one can with concrete bricks, for instance.

Taking concepts together, one can form a higher-level abstraction.

Higher-level = fewer features

Theoretically, a concept can be defined by its features. For instance, the concept <dog> has many features. Higher-level concepts have fewer features; thus, they are said to ‘subsume’ lower-level concepts. For instance, <animal> has fewer features than <dog>. A dog is an animal with specific additional features that make it a dog rather than, say, a cat. One can go up to <living entity> or down to <spaniel> and further to <cocker spaniel>.

Formally: Little Pluto is a <cocker spaniel> IS-A <spaniel> IS-A <dog> IS-A <animal> IS-A <living entity> IS-A <entity>. Therefore, little Pluto is an <entity>.

Naturally, many concepts form part of multiple hierarchies. This way, any feature can be the basis of another category. For instance, <dog> HAS <tail> can be seen as <dog> IS-A <creature with tail>. In other words, the subsumption principle is enough to describe the universe ― and what happens, one way or another, in the mind/brain. We are living abstraction machines.
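
To make the principle concrete, here is a minimal sketch in Python, with concepts modeled as feature sets and subsumption as set inclusion. All concept names and feature lists below are invented for illustration, not taken from any formal ontology.

```python
# Minimal sketch: concepts as feature sets, subsumption as set inclusion.
# All concept names and feature lists are illustrative assumptions.

CONCEPTS = {
    "entity":         {"exists"},
    "living entity":  {"exists", "alive"},
    "animal":         {"exists", "alive", "mobile"},
    "dog":            {"exists", "alive", "mobile", "barks", "has_tail"},
    "spaniel":        {"exists", "alive", "mobile", "barks", "has_tail", "long_ears"},
    "cocker spaniel": {"exists", "alive", "mobile", "barks", "has_tail",
                       "long_ears", "cocker_build"},
}

def is_a(lower: str, higher: str) -> bool:
    """Subsumption: the higher-level concept has fewer features,
    all of which the lower-level concept shares."""
    return CONCEPTS[higher] <= CONCEPTS[lower]

# Little Pluto's IS-A chain, including the transitive conclusion:
assert is_a("cocker spaniel", "spaniel")
assert is_a("spaniel", "dog")
assert is_a("dog", "animal")
assert is_a("cocker spaniel", "entity")  # therefore, Pluto is an <entity>

# Any feature can seed another category: <dog> HAS <tail>
# becomes <dog> IS-A <creature with tail>.
CONCEPTS["creature with tail"] = {"exists", "has_tail"}
assert is_a("dog", "creature with tail")
```

Note how the concept with the fewest features (<entity>) subsumes everything else: higher-level = fewer features, exactly as above.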

Such layered abstraction immensely heightens efficiency in our thinking.

The more one can correctly abstract, the more control one gains over the environment.

Through <dog>, one can treat all kinds of dogs together where appropriate. Also, when encountering an individual dog, one already knows much about the creature — even more so if one is a dog expert.

The same with <heart attack>, <car>, <gun>, etc. Levels of abstraction make our world an immensely more efficiently controllable place. Thus, the ecological niche of Homo sapiens is mainly formed by abstraction.

But we abstract messily by necessity.

Two interrelated reasons are that 1) reality is messy, and 2) we are part of reality.

For instance, birds can fly, but chickens only barely and ostriches not at all. Conversely, fish-like creatures are generally fish, but whales and dolphins are not (being mammals, of course).

There are seven formal definitions of <heart attack> (in medicalese, ‘myocardial infarction’). Yet even to many physicians, there appears to be only one, with a single set of causes and treatments.

These are not exceptions but the general rule of worldly messiness. It becomes even messier when we go deeper inside ourselves. Categorizing human feelings quickly runs up against the reality of ‘feelings without feelings.’

The messiness of our abstract (conceptual) thinking itself

This is generally much messier than we care to acknowledge.

We are more commonsense thinkers than Platonic (purely conceptual) ones. Actually, we have NO room for Platonic concepts in our mind/brain, although we can work with simulations of them, as in mathematics. In short, we have a subconceptual tool (the mind/brain) with which we continually construct a socio-mental environment that looks more conceptual than it really is. This subconceptual tool enables us to exhibit several concept-level features, such as spontaneous generalization. It also makes us, by definition, much more biased thinkers than we generally appreciate.

Relevance to A.I.

A few decades ago, knowledge systems (the A.I. products of that era) failed to gain traction. The main reason was the knowledge-acquisition bottleneck, stemming from the erroneous idea that human experts mainly hold conceptual expertise that is easily transferable to an artificial system.

For several years now, subconceptually based systems (neural networks and, more recently, GPT) have led to major commercial successes. However, these suffer from a lack of straightforward conceptual abstraction and of knowledge transfer to other domains. They have no formal knowledge representation.

With pure concepts, we lose reality. Without concepts, we lose flexibility, transferability, and accountability. The path forward in A.I. lies in managing levels of conceptual abstraction on top of subconceptual processing, including communication between levels.
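
What such levels on top of subconceptual processing might look like can be sketched, very tentatively, as follows: hand-made vectors stand in for neural-network embeddings (the subconceptual level), a similarity threshold maps them to discrete concepts, and an explicit IS-A hierarchy reasons over the result (the conceptual level). Every vector, name, and threshold here is an illustrative assumption, not a description of any existing system.

```python
import math

# Toy "subconceptual" layer: hand-made 3-d vectors standing in for the
# learned embeddings a neural network would produce. All numbers, names,
# and the threshold below are invented for illustration.
EMBEDDINGS = {
    "dog": [0.9, 0.1, 0.3],
    "cat": [0.8, 0.2, 0.4],
    "car": [0.1, 0.9, 0.2],
}

# Conceptual layer: an explicit IS-A hierarchy (child -> parent).
IS_A = {
    "dog": "animal", "cat": "animal", "car": "vehicle",
    "animal": "living entity", "vehicle": "entity", "living entity": "entity",
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def conceptualize(vector, threshold=0.95):
    """Upward communication: map a subconceptual vector to the nearest
    known concept, or return None rather than guess."""
    best = max(EMBEDDINGS, key=lambda c: cosine(vector, EMBEDDINGS[c]))
    return best if cosine(vector, EMBEDDINGS[best]) >= threshold else None

def ancestors(concept):
    """Symbolic reasoning at the conceptual level: walk the IS-A chain."""
    chain = []
    while concept in IS_A:
        concept = IS_A[concept]
        chain.append(concept)
    return chain

# A perception close to, but not exactly, the stored <dog> vector:
observed = [0.88, 0.12, 0.28]
concept = conceptualize(observed)
print(concept, "->", ancestors(concept))
# dog -> ['animal', 'living entity', 'entity']
```

The flexibility, transferability, and accountability mentioned above live in the upper layer: once the vector is mapped to <dog>, everything known about <animal> transfers for free, and the resulting inference chain can be inspected.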

To achieve this, we can learn from the human case. I’m sure the learning will also go the other way.

Interesting? Yes.

Dangerous? Yes.

Compassionate? Absolutely necessary.


