Levels of Abstraction in Humans and A.I.

April 28, 2023 · Artificial Intelligence, Cognitive Insights

Humans are masters of abstraction. We do it spontaneously, creating an efficient mental environment for ourselves, for others, and for our culture. The challenge now is to bring this to A.I.

Abstraction = generalization

Humans (and other animals) perform spontaneous generalization. From a number of example objects, we generalize to some concept. A concept is already an abstract mental construct. One cannot pick up, hand over, or build a house with concepts as one does with concrete bricks, for instance.

Taking concepts together, one can form a higher-level abstraction.

Higher-level = fewer features

Theoretically, a concept can be defined by its features. For instance, the concept <dog> has many features. Higher-level concepts have fewer features; thus, they are said to ‘subsume’ lower-level concepts. For instance, <animal> has fewer features than <dog>. A dog is an animal with specific additional features that make it a dog rather than, say, a cat. One can go up to <living entity> or down to <spaniel> and further to <cocker spaniel>.

Formally: little Pluto is a <cocker spaniel> IS-A <spaniel> IS-A <dog> IS-A <animal> IS-A <living entity> IS-A <entity>. By transitivity, little Pluto is an <entity>.
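As a minimal sketch, this chain can be written in a few lines of Python. The names are just those of the example above; real concept hierarchies are, of course, far larger.

```python
# A dictionary encoding of the IS-A chain above (concept names are
# illustrative, taken straight from the example).
IS_A = {
    "cocker spaniel": "spaniel",
    "spaniel": "dog",
    "dog": "animal",
    "animal": "living entity",
    "living entity": "entity",
}

def ancestors(concept):
    """Walk the IS-A chain upward, yielding every higher-level concept."""
    while concept in IS_A:
        concept = IS_A[concept]
        yield concept

# IS-A is transitive, so little Pluto (a cocker spaniel) is an <entity>:
print("entity" in ancestors("cocker spaniel"))  # True
```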

Naturally, many concepts form part of multiple hierarchies. This way, any feature can be the basis of another category. For instance, <dog> HAS <tail> can be seen as <dog> IS-A <creature with tail>. In other words, the subsumption principle is enough to describe the universe and, one way or another, what happens in the mind/brain. We are living abstraction machines.
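This, too, can be sketched: a concept as a set of features, subsumption as feature inclusion, and any feature founding a new category. The feature sets below are toy assumptions, not a real ontology.

```python
# Toy feature sets (illustrative only): higher-level concepts
# have fewer features.
FEATURES = {
    "entity": set(),
    "animal": {"alive", "moves"},
    "dog": {"alive", "moves", "barks", "tail"},
    "spaniel": {"alive", "moves", "barks", "tail", "long ears"},
}

def subsumes(higher, lower):
    """<higher> subsumes <lower> if all its features occur in <lower>."""
    return FEATURES[higher] <= FEATURES[lower]

print(subsumes("animal", "dog"))  # True: <animal> subsumes <dog>
print(subsumes("dog", "animal"))  # False

# Any feature can found a new category: <dog> HAS <tail> becomes
# <dog> IS-A <creature with tail>.
creatures_with_tail = {c for c, f in FEATURES.items() if "tail" in f}
print(creatures_with_tail)  # {'dog', 'spaniel'} (set order may vary)
```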

This immensely heightens the efficiency of our thinking.

The more one can correctly abstract, the more control one gains over the environment.

Through <dog>, one can treat all kinds of dogs together where appropriate. Also, when encountering an individual dog, one already knows much about the creature — even more so if one is a dog expert.

The same goes for <heart attack>, <car>, <gun>, etc. Levels of abstraction make our world an immensely more controllable place. Thus, the ecological niche of Homo sapiens is mainly shaped by abstraction.

But we abstract messily by necessity.

Two interrelated reasons are that 1) reality is messy, and 2) we are part of reality.

For instance, birds can fly, but chickens only barely, and ostriches not at all. Likewise, fish-like creatures are fish, except for whales and dolphins (being mammals, of course).
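One way to picture such messiness in code is default inheritance with exceptions; a toy sketch, with rules that are illustrative only:

```python
# Defaults are inherited from the category; exceptions override them
# at the level of the individual concept.
DEFAULTS = {"bird": {"flies": True}, "fish": {"swims": True}}
EXCEPTIONS = {"ostrich": {"flies": False}, "chicken": {"flies": "barely"}}

def feature(concept, category, name):
    """Check the exception list first, then fall back to the default."""
    if name in EXCEPTIONS.get(concept, {}):
        return EXCEPTIONS[concept][name]
    return DEFAULTS[category].get(name)

print(feature("sparrow", "bird", "flies"))  # True (clean default)
print(feature("chicken", "bird", "flies"))  # 'barely' (so-so)
print(feature("ostrich", "bird", "flies"))  # False (exception)
```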

There are seven formal definitions of <heart attack> (in medicalese, ‘myocardial infarction’). Yet to many physicians, there appears to be only one regarding causes and treatment.

These are not exceptions but the general rule of worldly messiness. Things become even messier when we go deeper inside ourselves. Categorizing human feelings quickly goes beyond the reality of ‘feelings without feelings.’

The messiness of our abstract (conceptual) thinking itself

This is generally much messier than we care to acknowledge.

We are more commonsense thinkers than Platonic (purely) conceptual ones. In fact, we have NO room for Platonic concepts in our mind/brain, although we can work with simulations of them, as in mathematics. Put briefly, we have a subconceptual tool (the mind/brain) with which we continually construct a socio-mental environment that looks more conceptual than it really is. Our subconceptual tool enables us to exhibit several concept-level features, such as spontaneous generalization. It also makes us, by definition, much more biased thinkers than we generally appreciate.

Relevance to A.I.

A few decades ago, knowledge-based systems (the A.I. products of that era) failed to gain traction. The main reason was the knowledge acquisition bottleneck, which stemmed from the erroneous idea that human experts mainly hold conceptual expertise that is easily transferable to an artificial system.

For several years now, subconceptually based systems (neural networks and, more recently, GPT) have achieved big commercial successes. However, they suffer from a lack of straightforward conceptual abstraction and of knowledge transfer to other domains. They have no formal knowledge representation.

With pure concepts, we lose reality. Without concepts, we lose flexibility, transferability, and accountability. The path forward in A.I. lies in managing levels of conceptual abstraction on top of subconceptual processing, including communication between levels.
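What such an architecture looks like in detail is an open question. Purely as a hedged illustration, here is a toy bridge between a subconceptual layer (raw vectors) and a conceptual layer (IS-A reasoning). Every vector, prototype, and rule in it is an assumption for the sake of the sketch, not a description of any existing system.

```python
import math

# Toy subconceptual layer: 2-D "embedding" vectors with conceptual
# prototypes. All values and names are illustrative assumptions.
PROTOTYPES = {"dog": (1.0, 0.2), "cat": (0.9, 0.8)}
IS_A = {"dog": "animal", "cat": "animal", "animal": "entity"}

def nearest_concept(vec):
    """Subconceptual -> conceptual: map a raw vector to its closest prototype."""
    return min(PROTOTYPES, key=lambda c: math.dist(vec, PROTOTYPES[c]))

def abstraction_chain(concept):
    """Conceptual level: climb the IS-A hierarchy for transferable knowledge."""
    chain = [concept]
    while concept in IS_A:
        concept = IS_A[concept]
        chain.append(concept)
    return chain

observation = (1.1, 0.1)  # some subconceptual input
print(abstraction_chain(nearest_concept(observation)))
# ['dog', 'animal', 'entity']
```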

To achieve this, we can learn from the human case. I’m sure the learning will also go the other way.

Interesting? Yes.

Dangerous? Yes.

Compassionate? Absolutely necessary.
