Levels of Abstraction in Humans and A.I.

April 28, 2023 · Artificial Intelligence, Cognitive Insights

Humans are masters of abstraction. We do it spontaneously, creating an efficient mental environment for ourselves, for others, and for our culture. The challenge now is to bring this to A.I.

Abstraction = generalization

Humans (and other animals) perform spontaneous generalization. From a number of example objects, we generalize to some concept. A concept is already an abstract mental construct: one cannot pick it up, hand it over, or build a house with it as one does with concrete bricks.

Taking concepts together, one can form a higher-level abstraction.

Higher-level = fewer features

Theoretically, a concept can be defined by its features. For instance, the concept <dog> has many features. Higher-level concepts have fewer features; thus, they are said to ‘subsume’ lower-level concepts. For instance, <animal> has fewer features than <dog>. A dog is an animal with specific additional features that distinguish it from, say, a cat. One can go up to <living entity> or down to <spaniel> and further to <cocker spaniel>.

Formally: Little Pluto is a <cocker spaniel> IS-A <spaniel> IS-A <dog> IS-A <animal> IS-A <living entity> IS-A <entity>. Therefore, little Pluto is an <entity>.
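As a minimal sketch (in Python; the hierarchy is a toy assumption, not any particular knowledge base), transitive IS-A reasoning like the above can be made explicit:

```python
# A toy IS-A hierarchy: each concept maps to its direct parent.
IS_A = {
    "cocker spaniel": "spaniel",
    "spaniel": "dog",
    "dog": "animal",
    "animal": "living entity",
    "living entity": "entity",
}

def subsumes(general: str, specific: str) -> bool:
    """True if `general` subsumes `specific` along the IS-A chain."""
    current = specific
    while current is not None:
        if current == general:
            return True
        current = IS_A.get(current)  # climb one level of abstraction
    return False

# Little Pluto is a cocker spaniel; therefore, he is an entity.
assert subsumes("entity", "cocker spaniel")
assert not subsumes("dog", "animal")  # subsumption is directional
```

Each step up the chain drops features and gains generality; that is all it takes for Pluto to count as an <entity>.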

Naturally, many concepts form part of multiple hierarchies. This way, any feature can be the basis of another category. For instance, <dog> HAS <tail> can be seen as <dog> IS-A <creature with tail>. In other words, the subsumption principle is enough to describe the universe and, one way or another, what happens in the mind/brain. We are living abstraction machines.
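In the same sketchy spirit, a HAS feature can be reified into a feature-based category (concepts and features below are illustrative assumptions):

```python
# Each concept lists its features; any feature induces a category
# <creature with X> whose members are exactly the concepts having X.
FEATURES = {
    "dog": {"tail", "fur", "four legs"},
    "cat": {"tail", "fur", "four legs"},
    "ostrich": {"feathers", "two legs"},
}

def category_of(feature: str) -> set:
    """Concepts that HAVE the feature, i.e., ARE <creature with feature>."""
    return {c for c, feats in FEATURES.items() if feature in feats}

print(category_of("tail"))  # {'dog', 'cat'}: <dog> IS-A <creature with tail>
```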

All this makes our thinking immensely more efficient.

The more one can correctly abstract, the more control one gains over the environment.

Through <dog>, one can treat all kinds of dogs together where appropriate. Also, when encountering an individual dog, one already knows much about the creature — even more so if one is a dog expert.

The same goes for <heart attack>, <car>, <gun>, etc. Levels of abstraction make our world a far more controllable place. Thus, the ecological niche of Homo sapiens is mainly formed by abstraction.

But we abstract messily by necessity.

Two interrelated reasons are that 1) reality is messy, and 2) we are part of reality.

For instance, birds can fly, but chickens barely can and ostriches cannot. Conversely, fish-like creatures are fish, except for whales and dolphins (being mammals, of course).
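One classic way to handle such messiness is default inheritance with exceptions. The following is a sketch of that idea (a modeling convention, not a claim about how the brain does it):

```python
# Defaults attach to general concepts; more specific concepts may override.
IS_A = {"chicken": "bird", "ostrich": "bird", "sparrow": "bird"}
DEFAULTS = {
    ("bird", "can_fly"): True,      # birds fly, by default
    ("ostrich", "can_fly"): False,  # ...but ostriches don't
}

def lookup(concept: str, attribute: str):
    """Most specific value of `attribute`, inherited along IS-A."""
    current = concept
    while current is not None:
        if (current, attribute) in DEFAULTS:
            return DEFAULTS[(current, attribute)]
        current = IS_A.get(current)
    return None  # genuinely unknown

print(lookup("sparrow", "can_fly"))   # True: inherited from <bird>
print(lookup("ostrich", "can_fly"))   # False: the exception wins
```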

There are seven formal definitions of <heart attack> (in medicalese, ‘myocardial infarction’). Yet even to many physicians, it appears as a single entity in cause and treatment.

These are not exceptions but the general rule of worldly messiness. It becomes even messier when we go deeper inside ourselves: categorizing human feelings quickly runs up against the reality of ‘feelings without feelings.’

The messiness of our abstract (conceptual) thinking itself

This is generally much messier than we care to acknowledge.

We are more commonsense thinkers than Platonic (purely conceptual) ones. Actually, we have NO room for Platonic concepts in our mind/brain, although we can work with simulations of them, as in mathematics. Put briefly, we have a subconceptual tool (the mind/brain) with which we continually construct a socio-mental environment that looks more conceptual than it really is. This subconceptual tool enables us to exhibit several concept-level capacities, such as spontaneous generalization. It also makes us, by definition, much more biased thinkers than we generally appreciate.

Relevance to A.I.

A few decades ago, knowledge-based systems (the flagship A.I. products of that era) failed to gain lasting traction. The main reason was the knowledge acquisition bottleneck, stemming from the erroneous idea that human experts mainly possess conceptual expertise that is easily transferable to an artificial system.

For several years now, subconceptually based systems (neural networks and, more recently, GPT-like models) have led to big commercial successes. However, these suffer from a lack of straightforward conceptual abstraction and of knowledge transfer to other domains. They have no formal knowledge representation.

With pure concepts, we lose reality. Without concepts, we lose flexibility, transferability, and accountability. The path forward in A.I. lies in managing levels of conceptual abstraction on top of subconceptual processing, including communication between levels.
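What such a layered arrangement could look like, in deliberately toy form: below, the ‘subconceptual’ part is just a cosine similarity over hand-made vectors standing in for a real neural network, and the concept labels and hierarchy are assumptions for illustration, not a proposed architecture.

```python
import math

# "Subconceptual" level: hand-made vectors standing in for learned embeddings.
EMBEDDINGS = {
    "dog": [0.9, 0.8, 0.1],
    "cat": [0.85, 0.75, 0.15],
    "car": [0.1, 0.2, 0.95],
}
# Conceptual level: an explicit, inspectable hierarchy.
IS_A = {"dog": "animal", "cat": "animal", "car": "vehicle"}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def conceptualize(observation):
    """The bridge: map a raw vector to the best-matching concept label."""
    return max(EMBEDDINGS, key=lambda c: cosine(observation, EMBEDDINGS[c]))

obs = [0.88, 0.79, 0.12]       # some raw, subconceptual input
concept = conceptualize(obs)   # -> "dog"
print(concept, "IS-A", IS_A[concept])  # abstraction is available again
```

The point of the sketch is the bridge: once a subconceptual pattern is mapped to a concept label, everything conceptual (transfer, explanation, accountability) becomes available on top of it.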

To achieve this, we can learn from the human case. I’m sure the learning will also go the other way around.

Interesting? Yes.

Dangerous? Yes.

Compassionate? Absolutely necessary.
