Levels of Abstraction in Humans and A.I.

April 28, 2023 | Artificial Intelligence, Cognitive Insights

Humans are masters of abstraction. We do it spontaneously, creating an efficient mental environment for ourselves, for others, and for our culture. The challenge now is to bring this to A.I.

Abstraction = generalization

Humans (and other animals) perform spontaneous generalization. From a number of example objects, we generalize to a concept. A concept is already an abstract mental construct: one cannot take or give concepts, or build a house with them, as one does with concrete bricks.
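As a concrete aside for the A.I.-minded reader: one classical way to model such spontaneous generalization is prototype formation. The following is a minimal Python sketch with invented toy features and an arbitrary threshold — a simplification for illustration, not a claim about how the brain does it.

```python
import numpy as np

# Toy feature vectors (invented for this sketch): each dimension is a
# graded feature, e.g. [has_fur, barks, size, has_tail].
dog_examples = np.array([
    [1.0, 1.0, 0.5, 1.0],   # spaniel
    [1.0, 1.0, 0.9, 1.0],   # labrador
    [1.0, 0.9, 0.2, 1.0],   # chihuahua
])

# "Spontaneous generalization": the concept is the prototype, i.e. the
# average of the examples -- an abstract construct, not any concrete dog.
dog_prototype = dog_examples.mean(axis=0)

def resembles(instance, prototype, threshold=0.5):
    """Graded concept membership: distance to the prototype (toy rule)."""
    return np.linalg.norm(instance - prototype) < threshold

new_animal = np.array([1.0, 1.0, 0.6, 1.0])
print(resembles(new_animal, dog_prototype))  # True: close enough to <dog>
```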

Taking concepts together, one can form a higher-level abstraction.

Higher-level = fewer features

Theoretically, a concept can be defined by its features. For instance, the concept <dog> has many features. Higher-level concepts have fewer features; thus, they are said to ‘subsume’ lower-level concepts. For instance, <animal> has fewer features than <dog>. A dog is an animal with specific additional features that distinguish it from, say, a cat. One can go up to <living entity> or down to <spaniel> and further to <cocker spaniel>.

Formally: little Pluto IS-A <cocker spaniel> IS-A <spaniel> IS-A <dog> IS-A <animal> IS-A <living entity> IS-A <entity>. By transitivity, little Pluto is an <entity>.
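This subsumption chain is straightforward to encode. Below is a minimal Python sketch; the dictionary of IS-A links mirrors the example above, while any real ontology would of course be far larger.

```python
# Each concept maps to its direct parent (IS-A); None marks the top.
ISA = {
    "cocker spaniel": "spaniel",
    "spaniel": "dog",
    "dog": "animal",
    "animal": "living entity",
    "living entity": "entity",
    "entity": None,
}

def is_a(concept, ancestor):
    """Transitive subsumption: walk up the IS-A chain."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = ISA.get(concept)
    return False

# Little Pluto is a cocker spaniel; therefore, little Pluto is an entity.
print(is_a("cocker spaniel", "entity"))  # True
print(is_a("cocker spaniel", "cat"))     # False
```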

Naturally, many concepts form part of multiple hierarchies. This way, any feature can be the basis of another category. For instance, <dog> HAS <tail> can be seen as <dog> IS-A <creature with tail>. In other words, the subsumption principle is enough to describe the universe ― and what happens, one way or another, in the mind/brain. We are living abstraction machines.
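The same reification can be sketched in code: every HAS-feature defines a category of its own, so one concept participates in as many hierarchies as it has features. The feature sets below are toy assumptions for illustration.

```python
# Features (HAS relations) per concept -- toy data for this sketch.
HAS = {
    "dog":    {"tail", "fur", "heart"},
    "lizard": {"tail", "scales", "heart"},
    "human":  {"heart"},
}

def members_of(feature):
    """Reify a feature into a category: <creature with X>."""
    return {c for c, feats in HAS.items() if feature in feats}

print(members_of("tail"))   # dog and lizard: <creature with tail>
print(members_of("heart"))  # all three: a different, overlapping hierarchy
```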

This immensely heightens efficiency in our thinking.

The more one can correctly abstract, the more control one gains over the environment.

Through <dog>, one can treat all kinds of dogs together where appropriate. Also, when encountering an individual dog, one already knows much about the creature — even more so if one is a dog expert.

The same with <heart attack>, <car>, <gun>, etc. Levels of abstraction make our world an immensely more efficiently controllable place. Thus, the ecological niche of Homo sapiens is mainly formed by abstraction.

But we abstract messily by necessity.

Two interrelated reasons are that 1) reality is messy, and 2) we are part of reality.

For instance, birds can fly, but chickens only barely and ostriches not at all. Likewise, fish-like creatures are usually fish, but whales and dolphins are not (being mammals, of course).

There are seven formal definitions of <heart attack> (in medicalese, ‘myocardial infarction’). Yet to many physicians, it appears as a single entity in its causes and treatment.

These are not exceptions but instances of the general rule of worldly messiness. It becomes even messier when we go deeper inside ourselves. Categorizing human feelings quickly goes beyond the reality of ‘feelings without feelings.’

The messiness of our abstract (conceptual) thinking itself

This is generally much messier than we care to acknowledge.

We are more commonsense thinkers than Platonic (purely) conceptual ones. Actually, we have NO room for Platonic concepts in our mind/brain, although we can work with simulations of them, as in mathematics. In short, we have a subconceptual tool (the mind/brain) with which we continually construct a socio-mental environment that looks more conceptual than it really is. Our subconceptual tool enables us to exhibit several concept-level capacities, such as spontaneous generalization. It also makes us, by definition, much more biased thinkers than we generally appreciate.

Relevance to A.I.

A few decades ago, knowledge-based systems (the A.I. products of that era) failed to gain traction. The main reason was the knowledge acquisition bottleneck, stemming from the erroneous idea that human experts mainly hold conceptual expertise that is easily transferable to an artificial system.

For several years now, subconceptually based systems (neural networks and, more recently, GPT) have led to major commercial successes. However, these suffer from a lack of straightforward conceptual abstraction and of knowledge transfer to other domains. They have no formal knowledge representation.

With pure concepts, we lose reality. Without concepts, we lose flexibility, transferability, and accountability. The path forward in A.I. lies in managing levels of conceptual abstraction on top of subconceptual processing, including communication between levels.
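As a hedged illustration of that path (a toy sketch, not a description of any existing system), the following Python snippet puts a small conceptual level on top of a subconceptual level of vectors: clusters of vectors receive explicit concept labels, and communication runs both upward (vector to concept) and downward (concept to prototype vector). All data, labels, and the nearest-centroid rule are assumptions for the sketch.

```python
import numpy as np

# Subconceptual level: raw vectors (stand-ins for learned embeddings).
rng = np.random.default_rng(0)
dogs = rng.normal(loc=[1.0, 1.0], scale=0.1, size=(20, 2))
cars = rng.normal(loc=[-1.0, -1.0], scale=0.1, size=(20, 2))

# Conceptual level: each cluster centroid gets an explicit concept label.
# (In a real system the clustering would be learned; here it is given.)
concepts = {
    "<dog>": dogs.mean(axis=0),
    "<car>": cars.mean(axis=0),
}

def conceptualize(vector):
    """Communication upward: map a subconceptual vector to a concept."""
    return min(concepts, key=lambda c: np.linalg.norm(vector - concepts[c]))

def exemplify(concept):
    """Communication downward: a concept points back to a prototype."""
    return concepts[concept]

new_input = np.array([0.9, 1.1])   # fresh subconceptual activity
label = conceptualize(new_input)
print(label)                        # <dog>
print(exemplify(label))             # prototype vector behind <dog>
```

The point of the sketch is the two-way mapping: explicit concept labels bring transferability and accountability, while the vectors underneath keep contact with messy reality.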

To achieve this, we can learn from the human case. I’m sure the learning will also go the other way.

Interesting? Yes.

Dangerous? Yes.

Compassionate? Absolutely necessary.
