From Concrete to Abstract

April 29, 2024 Artificial Intelligence, Cognitive Insights

Many people view the concepts of ‘concrete’ and ‘abstract’ as opposite ends of a straightforward spectrum — in daily life, often without giving it much thought.

This is also relevant to their use in inferential patterns.

One example is mental-neuronal patterns in humans.

However, the muddy underlying reality becomes especially apparent when trying to realize these concepts in an A.I. environment, where the distinction is relevant to anything related to learning and intelligence. A proper balance is mandatory:

  • Getting too concrete/detailed impedes learning.
  • Getting too abstract glosses over concrete situations, leading to an excess of bias.

Efficiency also plays a role. Without accepting some level of bias, one can become unremittingly bogged down in an overflow of details. See also Bias is Our Thinking.
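
For readers who think in machine-learning terms, this balance loosely parallels the familiar underfitting/overfitting trade-off. The sketch below is a hypothetical illustration only, not part of AURELIS or Lisa: it treats the ‘abstraction level’ as a model-capacity knob (here, the maximum depth of a decision tree) and lets held-out performance suggest a workable middle level.

```python
# Hypothetical illustration: 'abstraction level' as a capacity knob.
# A very deep tree memorizes details (too concrete); a very shallow tree
# over-generalizes (too abstract, high bias). Cross-validation hints at a balance.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=200)  # noisy 'concrete' observations

scores = {}
for depth in range(1, 12):  # candidate abstraction levels
    model = DecisionTreeRegressor(max_depth=depth, random_state=0)
    scores[depth] = cross_val_score(model, X, y, cv=5).mean()

best_depth = max(scores, key=scores.get)
print(f"Abstraction level (tree depth) that generalizes best: {best_depth}")
```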

In daily life, the term ‘concrete’ is mostly used at a level in between.

It’s what captures the daily mind most immediately. Thus, ‘concrete’ is used here as a relative concept — relative to the observer and, one can say, also to the circumstances.

To complicate matters, ‘concrete’ can in this sense denote something more abstract than the very concrete itself. This is a pragmatic issue. Of course, being pragmatic, it is also what is most realistically valuable. In any case, one needs to search for an optimal level of abstraction.

AURELIS

Note that AURELIS emphasizes the importance of both the conceptual (abstract) and the subconceptual (concrete) layers of experience. This dual approach allows for a more Compassionate interaction with oneself, where the individual is not just a passive recipient of information but an active participant in shaping his mental processes.

Therefore, this is of utmost relevance to the development of Lisa.

The Lisa case

Lisa’s inferential prowess in handling abstract and concrete concepts is still the responsibility of the developers. In due time, Lisa will be able to self-learn optimal levels, also according to circumstances, driven by human-set rewards.

But there will always be a trade-off between advantages and disadvantages — even when using two levels simultaneously (which is, by the way, the way the human brain seems to work).
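
As a thought experiment only, and not a description of how Lisa is actually implemented, such reward-driven self-tuning of the abstraction level could be sketched as a simple bandit: each candidate level is tried, human-set rewards are accumulated, and the level that keeps earning the most is preferred while the others are still occasionally explored.

```python
# Minimal sketch, assuming one scalar human-set reward (0..1) per interaction.
# Hypothetical illustration; not Lisa's actual mechanism.
import random

levels = ["very concrete", "in-between", "abstract"]  # candidate abstraction levels
value = {lv: 0.0 for lv in levels}                    # running reward estimates
count = {lv: 0 for lv in levels}

def choose_level(epsilon=0.1):
    """Mostly exploit the best-known level, sometimes explore another."""
    if random.random() < epsilon:
        return random.choice(levels)
    return max(levels, key=lambda lv: value[lv])

def update(level, reward):
    """Fold one human-given reward into the running estimate for that level."""
    count[level] += 1
    value[level] += (reward - value[level]) / count[level]

# Simulated feedback: in this toy run, the in-between level is rewarded most.
for _ in range(100):
    lv = choose_level()
    reward = {"very concrete": 0.4, "in-between": 0.8, "abstract": 0.5}[lv]
    update(lv, reward)

print("Preferred abstraction level:", max(levels, key=lambda lv: value[lv]))
```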

Importance of the balance

This challenge of finding the right balance between the concrete and the abstract mirrors the ongoing journey of human self-understanding. It enables Lisa to serve as a companion on the user’s journey towards self-realization and growth.

Emphasizing this balance acknowledges the necessity of biases as functional shortcuts while striving to keep them in check to avoid overshadowing the richness of individual experiences.

Compassionately

This way, the Compassionate perspective values both the efficiency and the depth of learning, aiming to enrich the human experience rather than merely optimizing it.

This stance reflects a commitment to treating the user’s cognitive and emotional realms with care and respect, promoting a harmonious integration of technology into personal growth and well-being.

Broader implication: a Compassionate perspective

The dialogue between the concrete and the abstract is not just an intellectual exercise but a practical one that affects how we integrate AI technologies into our social fabric.

AURELIS advocates for a model where technology not only understands but also respects human values and complexities. This involves Lisa’s being able to adapt and respond to the nuanced needs of human emotions and psychological well-being.

This is especially needed in a world increasingly managed by digital interactions.

Ensuring that A.I. can operate across this spectrum – from concrete facts to abstract human experiences – means advocating for systems that enhance, rather than diminish, our ability to be fully human.

It’s about cultivating A.I. that understands and respects the human condition, offering insights that are tailored not by generic algorithms alone but by a nuanced understanding of each person’s unique mental landscape. This fosters an environment where technology and humanity coexist in supportive, sustainable ways.


