Levels of Abstraction in Humans and A.I.

April 28, 2023 · Artificial Intelligence, Cognitive Insights

Humans are masters of abstraction. We do it spontaneously, thus creating an efficient mental environment for ourselves, for others, and for our culture. The challenge now is to bring this to A.I.

Abstraction = generalization

Humans (and other animals) perform spontaneous generalization. From a number of example objects, we generalize to a concept. A concept is already an abstract mental construct. One cannot take, give, or build a house with concepts as one does with concrete bricks, for instance.

Taking concepts together, one can form a higher-level abstraction.

Higher-level = fewer features

Theoretically, a concept can be defined by its features. For instance, the concept <dog> has many features. Higher-level concepts have fewer features; thus, they are said to ‘subsume’ lower-level concepts. For instance, <animal> has fewer features than <dog>. A dog is an animal with specific additional features that make it a dog and not, say, a cat. One can go up to <living entity> or down to <spaniel> and further to <cocker spaniel>.

Formally: little Pluto is a <cocker spaniel> IS-A <spaniel> IS-A <dog> IS-A <animal> IS-A <living entity> IS-A <entity>. Therefore, little Pluto is an <entity>.
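For the technically inclined: here is a minimal sketch of such a chain in Python. The dictionary and function names are hypothetical illustrations, not any standard library; any representation that supports transitive upward lookup would do.

```python
# Minimal sketch of the IS-A chain above, as a simple parent dictionary.
IS_A = {
    "cocker spaniel": "spaniel",
    "spaniel": "dog",
    "dog": "animal",
    "animal": "living entity",
    "living entity": "entity",
}

def subsumed_by(concept):
    """Walk the IS-A chain upward, collecting every subsuming concept."""
    chain = []
    while concept in IS_A:
        concept = IS_A[concept]
        chain.append(concept)
    return chain

print(subsumed_by("cocker spaniel"))
# -> ['spaniel', 'dog', 'animal', 'living entity', 'entity']
# Hence little Pluto, a cocker spaniel, is an <entity>.
```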

Naturally, many concepts form part of multiple hierarchies. This way, any feature can become the basis of another category. For instance, <dog> HAS <tail> can be recast as <dog> IS-A <creature with tail>. In other words, the subsumption principle is enough to describe the universe, and what happens, one way or another, in the mind/brain. We are living abstraction machines.
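The earlier sketch extends naturally to multiple hierarchies. In this equally hypothetical illustration, each concept now has a set of parents, and the feature <tail> reappears as the category <creature with tail>:

```python
# Minimal sketch of multiple hierarchies: a concept may have several
# parents, and a feature (<dog> HAS <tail>) is promoted to a category.
PARENTS = {
    "dog": {"animal", "creature with tail"},
    "animal": {"living entity"},
    "creature with tail": {"living entity"},
    "living entity": {"entity"},
}

def all_ancestors(concept):
    """Collect every concept that subsumes this one, across all hierarchies."""
    ancestors = set()
    for parent in PARENTS.get(concept, ()):
        ancestors.add(parent)
        ancestors |= all_ancestors(parent)
    return ancestors

print(all_ancestors("dog"))
# -> {'animal', 'creature with tail', 'living entity', 'entity'}
```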

This immensely heightens the efficiency of our thinking.

The more one can correctly abstract, the more control one gains over the environment.

Through <dog>, one can treat all kinds of dogs together where appropriate. Also, when encountering an individual dog, one already knows much about the creature — even more so if one is a dog expert.

The same goes for <heart attack>, <car>, <gun>, etc. Levels of abstraction make our world a far more efficiently controllable place. Thus, the ecological niche of Homo sapiens is mainly formed by abstraction.

But we abstract messily by necessity.

Two interrelated reasons are that 1) reality is messy, and 2) we are part of reality.

For instance, birds can fly, but chickens only barely and ostriches not at all. On the other hand, fish-like creatures are fish, except for whales and dolphins (being mammals, of course).

There are seven formal definitions of <heart attack> (in medicalese: ‘myocardial infarction’). Yet many physicians treat it as if there were only one, with a single set of causes and treatments.

These are not exceptions but the general rule of worldly messiness. It becomes even messier when we go deeper inside ourselves. Categorizing human feelings quickly runs up against the reality of ‘feelings without feelings.’

The messiness of our abstract (conceptual) thinking itself

This is generally much messier than we care to acknowledge.

We are more commonsense thinkers than Platonic (purely conceptual) ones. Actually, we have NO room for Platonic concepts in our mind/brain, although we can work with simulations of them, as in mathematics. Put briefly, we have a subconceptual tool (the mind/brain) with which we continually construct a socio-mental environment that looks more conceptual than it really is. This subconceptual tool enables us to exhibit several concept-level features, such as spontaneous generalization. It also makes us, by definition, much more biased thinkers than we generally appreciate.

Relevance to A.I.

A few decades ago, knowledge-based systems (the A.I. products of that era) failed to gain traction. The main reason was the knowledge acquisition bottleneck, which stemmed from the erroneous idea that human experts mainly have conceptual expertise that is easily transferable to an artificial system.

For several years now, subconceptually based systems (neural networks and, more recently, GPT) have led to big commercial successes. However, these suffer from a lack of straightforward conceptual abstraction and of knowledge transfer to other domains. They have no formal knowledge representation.

With pure concepts, we lose reality. Without concepts, we lose flexibility, transferability, and accountability. The path forward in A.I. lies in managing levels of conceptual abstraction on top of subconceptual processing, including communication between levels.
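What such layering might look like concretely is open research. Purely as a toy sketch under strong assumptions (hand-made three-dimensional ‘embeddings’ standing in for learned vectors, plus the tiny IS-A dictionary from the first sketch), one possible shape:

```python
# Toy sketch of conceptual abstraction on top of subconceptual processing.
# The "embeddings" are hypothetical stand-ins for learned vectors.
import math

EMBEDDINGS = {
    "dog": (0.9, 0.1, 0.0),
    "cat": (0.8, 0.3, 0.1),
    "car": (0.0, 0.2, 0.9),
}

IS_A = {"dog": "animal", "animal": "living entity", "living entity": "entity"}

def nearest_concept(vector):
    """Subconceptual step: pick the concept with the most similar embedding."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))
    return max(EMBEDDINGS, key=lambda c: cosine(vector, EMBEDDINGS[c]))

def subsumed_by(concept):
    """Conceptual step: reason symbolically upward, as in the first sketch."""
    chain = []
    while concept in IS_A:
        concept = IS_A[concept]
        chain.append(concept)
    return chain

concept = nearest_concept((0.85, 0.15, 0.05))   # some perceptual input
print(concept, "IS-A", subsumed_by(concept))
# -> dog IS-A ['animal', 'living entity', 'entity']
```

The point of the sketch is the hand-over between levels: the subconceptual layer delivers a concept, and from there on, reasoning can be symbolic, inspectable, and transferable.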

To achieve this, we can learn from the human case. I’m sure the learning will also go the other way.

Interesting? Yes.

Dangerous? Yes.

Compassionate? Absolutely necessary.
