From Concrete to Abstract

April 29, 2024 | Artificial Intelligence, Cognitive Insights

In daily life, many people view 'concrete' and 'abstract' as opposite ends of a straightforward spectrum, often without giving it much thought.

This is also relevant to their use in inferential patterns.

One example is the mental-neuronal patterns in humans.

However, the muddy underlying reality becomes especially apparent when trying to realize these concepts in an A.I. environment, where they are relevant to anything related to learning and intelligence. A proper balance is mandatory:

  • Getting too concrete/detailed impedes learning.
  • Getting too abstract glosses over concrete situations, leading to an excess of bias.

Efficiency also plays a role. Without accepting some level of bias, one can become unremittingly bogged down in an overflow of details. See also Bias is Our Thinking.
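This trade-off can be made tangible with a minimal, hypothetical sketch (not AURELIS or Lisa code; the animal data and the mammal grouping are invented for illustration). Predicting at the most concrete level means memorizing cases and failing on anything new; predicting at the most abstract level means one biased answer for everything; an intermediate abstraction generalizes usefully.

```python
# Hypothetical illustration of prediction at three levels of abstraction.
# Data and categories are invented for this sketch.

train = [("cat", 4), ("dog", 4), ("spider", 8), ("ant", 6)]  # (animal, legs)

# Too concrete: memorize every case; an unseen case yields no answer at all.
memorized = dict(train)

# Too abstract: one global average; every answer carries the same bias.
global_mean = sum(v for _, v in train) / len(train)

# In between: average within a coarser concept (here: mammal vs. not).
MAMMALS = {"cat", "dog", "horse"}

def group_mean(animal):
    """Average over training cases that share the animal's coarse category."""
    group = [v for k, v in train if (k in MAMMALS) == (animal in MAMMALS)]
    return sum(group) / len(group)

print(memorized.get("horse"))   # None: too concrete, cannot generalize
print(global_mean)              # 5.5: too abstract, equally biased everywhere
print(group_mean("horse"))      # 4.0: intermediate abstraction generalizes
```

The middle predictor accepts some bias (all mammals get the same estimate) in exchange for being able to answer about cases it has never seen, which is exactly the efficiency argument above.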

In daily life, the term 'concrete' is mostly used for a level in between.

It’s what captures the daily mind most immediately. Thus, ‘concrete’ is used here as a relative concept — relative to the observer and, one can say, also to the circumstances.

To complicate matters, 'concrete' can in this sense denote something more abstract than the very concrete itself. This is a pragmatic issue. Of course, being pragmatic, it's also most realistically valuable. In any case, one needs to search for an optimal level of abstraction.

AURELIS

Note that AURELIS emphasizes the importance of both the conceptual (abstract) and the subconceptual (concrete) layers of experience. This dual approach allows for a more Compassionate interaction with oneself, where the individual is not just a passive recipient of information but an active participant in shaping their own mental processes.

Therefore, this is of utmost relevance to the development of Lisa.

The Lisa case

Lisa’s inferential prowess in handling abstract and concrete concepts is still the responsibility of the developers. In due time, Lisa will be able to self-learn optimal levels, also according to circumstances, driven by human-set rewards.
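How such reward-driven self-learning of an abstraction level might look can be sketched with a simple epsilon-greedy bandit. This is a generic, hypothetical illustration, not Lisa's actual mechanism; the three levels and their reward probabilities are assumptions chosen so that the intermediate level tends to work best.

```python
import random

random.seed(0)

LEVELS = ["very concrete", "intermediate", "very abstract"]
# Assumed (human-set) chances that a response at each level earns a reward.
TRUE_REWARD = {"very concrete": 0.3, "intermediate": 0.8, "very abstract": 0.4}

estimates = {lv: 0.0 for lv in LEVELS}  # learned value of each level
counts = {lv: 0 for lv in LEVELS}       # how often each level was tried

for step in range(2000):
    if random.random() < 0.1:
        level = random.choice(LEVELS)            # explore occasionally
    else:
        level = max(LEVELS, key=estimates.get)   # otherwise exploit the best
    reward = 1.0 if random.random() < TRUE_REWARD[level] else 0.0
    counts[level] += 1
    # Incremental running mean of observed rewards for this level.
    estimates[level] += (reward - estimates[level]) / counts[level]

print(max(LEVELS, key=estimates.get))
```

Under these assumed rewards, the learner settles on the intermediate level: the balance is not programmed in but discovered through feedback, which is the point of the paragraph above.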

But there will always be a trade-off between advantages and disadvantages — even when using two levels simultaneously (which is, by the way, the way the human brain seems to work).

Importance of the balance

This challenge of finding the right balance between the concrete and the abstract mirrors the ongoing journey of human self-understanding and growth. It enables Lisa to serve as a companion in the user’s journey towards self-realization and growth.

Emphasizing this balance acknowledges the necessity of biases as functional shortcuts while striving to keep them in check to avoid overshadowing the richness of individual experiences.

Compassionately

This way, the Compassionate perspective values both the efficiency and the depth of learning, aiming to enrich the human experience rather than merely optimizing it.

This stance reflects a commitment to treating the user’s cognitive and emotional realms with care and respect, promoting a harmonious integration of technology into personal growth and well-being.

Broader implication: a Compassionate perspective

The dialogue between the concrete and the abstract is not just an intellectual exercise but a practical one that affects how we integrate AI technologies into our social fabric.

AURELIS advocates for a model where technology not only understands but also respects human values and complexities. This involves Lisa’s being able to adapt and respond to the nuanced needs of human emotions and psychological well-being.

This is especially needed in a world increasingly managed by digital interactions.

Ensuring that A.I. can operate across this spectrum – from concrete facts to abstract human experiences – means advocating for systems that enhance, rather than diminish, our ability to be fully human.

It’s about cultivating A.I. that understands and respects the human condition, offering insights that are tailored not by generic algorithms alone but by a nuanced understanding of each person’s unique mental landscape, fostering an environment where technology and humanity coexist in supportive, sustainable ways.
