From Concrete to Abstract

April 29, 2024 – Artificial Intelligence, Cognitive Insights

In daily life, many people treat the concepts of ‘concrete’ and ‘abstract’ as opposite ends of a straightforward spectrum, often without much thought.

The same holds for how these concepts are used in inferential patterns.

Mental-neuronal patterns in humans are one example.

However, the muddy underlying reality becomes especially apparent when one tries to realize such patterns in an A.I. environment, where the distinction matters for anything related to learning and intelligence. A proper balance is mandatory:

  • Getting too concrete/detailed impedes learning.
  • Getting too abstract overrides the specifics of concrete situations, leading to an excess of bias (see the sketch after this list).
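
To make this trade-off tangible, here is a minimal sketch in Python ― purely illustrative, not AURELIS or Lisa code ― that uses polynomial degree as a stand-in for the level of concreteness: a very low degree is too abstract and misses the underlying regularity, while a very high degree clings to the details of the sample and generalizes poorly.

    # Minimal sketch: polynomial degree as a stand-in for 'level of concreteness'.
    # Degree 1 is very abstract (high bias); degree 12 is very concrete (clings to details).
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 40)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)  # noisy observations

    x_new = np.linspace(0, 1, 200)              # unseen situations
    y_true = np.sin(2 * np.pi * x_new)          # the underlying regularity

    for degree in (1, 4, 12):
        coeffs = np.polyfit(x, y, degree)       # fit at this level of concreteness
        error = np.mean((np.polyval(coeffs, x_new) - y_true) ** 2)
        print(f"degree {degree:2d}: generalization error {error:.3f}")

    # Expect the lowest error around the middle degree: too abstract misses the
    # pattern, too concrete fits the noise in the details.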

Efficiency also plays a role. Without accepting some level of bias, one can become unremittingly bogged down in an overflow of details. See also Bias is Our Thinking.

In daily life, the term ‘concrete’ is mostly used for a level somewhere in between.

It’s what captures the everyday mind most immediately. Thus, ‘concrete’ is used here as a relative concept: relative to the observer and, one can say, also to the circumstances.

To complicate matters, ‘concrete’ can in this sense denote something more abstract than the very concrete itself. This is a pragmatic issue, and being pragmatic, it’s also most realistically valuable. In any case, one needs to search for an optimal level of abstraction.
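
How such a search could look in principle ― hypothetical, continuing the polynomial sketch above, and not Lisa’s mechanism ― is to hold out part of the observations and keep the level of abstraction that generalizes best to them.

    # Hypothetical continuation of the earlier sketch: choose the 'optimal level of
    # abstraction' as the polynomial degree with the lowest held-out error.
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 1, 60)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

    x_fit, y_fit = x[:40], y[:40]               # observed details to learn from
    x_val, y_val = x[40:], y[40:]               # held-out situations to generalize to

    def heldout_error(degree):
        coeffs = np.polyfit(x_fit, y_fit, degree)
        return np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)

    best_degree = min(range(1, 13), key=heldout_error)
    print('chosen level of abstraction (degree):', best_degree)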

AURELIS

Note that AURELIS emphasizes the importance of both the conceptual (abstract) and the subconceptual (concrete) layers of experience. This dual approach allows for a more Compassionate interaction with oneself, where the individual is not just a passive recipient of information but an active participant in shaping their own mental processes.

Therefore, this is of utmost relevance to the development of Lisa.

The Lisa case

Lisa’s inferential prowess in handling abstract and concrete concepts is still the responsibility of the developers. In due time, Lisa will be able to self-learn optimal levels of abstraction, adapted to circumstances and driven by human-set rewards.
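
Purely as a hypothetical sketch of what ‘self-learning optimal levels, driven by human-set rewards’ could look like in principle ― not a description of Lisa’s actual implementation ― one can picture each abstraction level as an option whose value is updated from a human-set reward signal:

    # Hypothetical sketch only: an epsilon-greedy choice among abstraction levels,
    # with values updated from human-set rewards. Not Lisa's actual mechanism.
    import random

    levels = ['very concrete', 'in-between', 'very abstract']
    value = {lvl: 0.0 for lvl in levels}    # running estimate of reward per level
    count = {lvl: 0 for lvl in levels}

    def human_reward(level):
        # Stand-in for a human-set reward; here the in-between level is preferred.
        preference = {'very concrete': 0.3, 'in-between': 0.8, 'very abstract': 0.4}
        return preference[level] + random.gauss(0, 0.1)

    for step in range(500):
        if random.random() < 0.1:                   # explore occasionally
            choice = random.choice(levels)
        else:                                       # otherwise use the current best
            choice = max(levels, key=value.get)
        reward = human_reward(choice)
        count[choice] += 1
        value[choice] += (reward - value[choice]) / count[choice]   # incremental mean

    print('preferred level after learning:', max(levels, key=value.get))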

But there will always be a trade-off between advantages and disadvantages, even when using two levels simultaneously (which, by the way, is how the human brain seems to work).
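
One way to picture the use of two levels simultaneously ― again purely illustrative, and not a claim about how the brain or Lisa actually does it ― is to blend a coarse, abstract model with a fine-grained, concrete one:

    # Purely illustrative: combine an abstract (coarse) and a concrete (fine) view.
    import numpy as np

    rng = np.random.default_rng(2)
    x = np.linspace(0, 1, 50)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

    abstract = np.polyval(np.polyfit(x, y, 2), x)     # coarse trend, more bias
    concrete = np.polyval(np.polyfit(x, y, 10), x)    # fine detail, more variance
    blended = 0.5 * abstract + 0.5 * concrete         # both levels at once

    for name, estimate in (('abstract', abstract), ('concrete', concrete), ('blended', blended)):
        print(name, np.mean((estimate - np.sin(2 * np.pi * x)) ** 2))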

Importance of the balance

This challenge of finding the right balance between the concrete and the abstract mirrors the ongoing journey of human self-understanding and growth. It enables Lisa to serve as a companion in the user’s journey towards self-realization and growth.

Emphasizing this balance acknowledges the necessity of biases as functional shortcuts while striving to keep them in check to avoid overshadowing the richness of individual experiences.

Compassionately

This way, the Compassionate perspective values both the efficiency and the depth of learning, aiming to enrich the human experience rather than merely optimizing it.

This stance reflects a commitment to treating the user’s cognitive and emotional realms with care and respect, promoting a harmonious integration of technology into personal growth and well-being.

Broader implication: a Compassionate perspective

The dialogue between the concrete and the abstract is not just an intellectual exercise but a practical one that affects how we integrate AI technologies into our social fabric.

AURELIS advocates for a model where technology not only understands but also respects human values and complexities. This involves Lisa’s being able to adapt and respond to the nuanced needs of human emotions and psychological well-being.

This is especially needed in a world increasingly managed by digital interactions.

Ensuring that A.I. can operate across this spectrum – from concrete facts to abstract human experiences – means advocating for systems that enhance, rather than diminish, our ability to be fully human.

It’s about cultivating A.I. that understands and respects the human condition, offering insights tailored not by generic algorithms alone but by a nuanced understanding of each person’s unique mental landscape. This fosters an environment where technology and humanity coexist in supportive, sustainable ways.
