From Concrete to Abstract

April 29, 2024 – Artificial Intelligence, Cognitive Insights

In daily life, many people view ‘concrete’ and ‘abstract’ as opposite ends of a straightforward spectrum, often without giving it much thought.

This is also relevant to their use in inferential patterns.

Mental-neuronal patterns in humans are one example.

However, the muddy underlying reality becomes especially apparent when one tries to realize such patterns in an A.I. environment, where the distinction is relevant to anything related to learning and intelligence. A proper balance is mandatory:

  • Getting too concrete/detailed impedes learning.
  • Getting too abstract glosses over concrete situations, leading to an excess of bias.

Efficiency also plays a role. Without accepting some level of bias, one gets endlessly bogged down in an overflow of details. See also Bias is Our Thinking.
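
As a loose machine-learning analogy (mine, not the article’s), this trade-off resembles the balance between overfitting and underfitting. The sketch below uses made-up data and fits polynomials of different degrees to noisy samples of a simple curve: degree 0 is ‘too abstract’ and misses the structure entirely, a very high degree is ‘too concrete’ and clings to the noise, and a moderate degree tends to generalize best.

```python
# Illustrative analogy only: abstraction level ~ model complexity.
# Data, degrees, and noise level are arbitrary choices for this sketch.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)  # noisy "concrete" observations
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)                                        # the underlying regularity

for degree in (0, 3, 9):  # too abstract, in-between, too concrete
    coeffs = np.polyfit(x_train, y_train, degree)
    test_error = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: test error {test_error:.3f}")
```

The exact numbers do not matter; the point is that neither extreme serves learning.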

In daily life, the term ‘concrete’ is mostly used for a level in between these extremes.

It’s what captures the daily mind most immediately. Thus, ‘concrete’ is used here as a relative concept — relative to the observer and, one can say, also to the circumstances.

To complicate matters, ‘concrete’ can in this sense denote something more abstract than the very concrete itself. This is a pragmatic issue; precisely because it is pragmatic, it is also most realistically valuable. In any case, one needs to search for an optimal level of abstraction.

AURELIS

Note that AURELIS emphasizes the importance of both the conceptual (abstract) and the subconceptual (concrete) layers of experience. This dual approach allows for a more Compassionate interaction with oneself, where the individual is not just a passive recipient of information but an active participant in shaping his mental processes.

Therefore, this is of utmost relevance to the development of Lisa.

The Lisa case

Lisa’s inferential prowess in handling abstract and concrete concepts is still the responsibility of the developers. In due time, Lisa will be able to self-learn optimal levels, adapting them to circumstances, driven by human-set rewards.
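
How this self-learning will work is not detailed here. Purely as a hypothetical sketch, one can picture it as a small bandit-style loop in which candidate abstraction levels are tried out and a human-set reward gradually favors the level that works best. The candidate levels, the reward function, and the exploration rate below are all invented for the illustration.

```python
# Hypothetical sketch (not Lisa's actual mechanism): epsilon-greedy selection of
# an abstraction level from a noisy, human-set reward signal.
import random

LEVELS = ["very concrete", "intermediate", "very abstract"]

def human_reward(level: str) -> float:
    """Stand-in for human-set feedback; here the middle level happens to pay best."""
    base = {"very concrete": 0.3, "intermediate": 0.8, "very abstract": 0.4}[level]
    return base + random.gauss(0, 0.1)           # noisy feedback

estimates = {level: 0.0 for level in LEVELS}      # running reward estimate per level
counts = {level: 0 for level in LEVELS}

for _ in range(500):
    if random.random() < 0.1:                     # explore occasionally
        level = random.choice(LEVELS)
    else:                                         # otherwise exploit the best estimate
        level = max(estimates, key=estimates.get)
    reward = human_reward(level)
    counts[level] += 1
    estimates[level] += (reward - estimates[level]) / counts[level]  # running mean

print(max(estimates, key=estimates.get))          # typically "intermediate"
```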

But there will always be a trade-off between advantages and disadvantages — even when using two levels simultaneously (which is, BTW, the way the human brain seems to work).
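
The two-levels idea can likewise be sketched, under my own assumption (not a claim about Lisa or the brain) that it amounts to offering a learner both a detailed representation and a coarse summary of the same input.

```python
# Hypothetical illustration: keep the concrete (fine-grained) signal and an
# abstract (averaged) summary side by side, so a downstream learner can use both.
import numpy as np

def two_level_representation(signal: np.ndarray, window: int = 4) -> np.ndarray:
    concrete = signal                                   # every detail retained
    abstract = signal.reshape(-1, window).mean(axis=1)  # coarse view: block averages
    return np.concatenate([concrete, abstract])

signal = np.arange(16, dtype=float)
print(two_level_representation(signal).shape)           # (20,): 16 concrete + 4 abstract values
```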

Importance of the balance

This challenge of finding the right balance between the concrete and the abstract mirrors the ongoing journey of human self-understanding. Meeting it enables Lisa to serve as a companion in the user’s journey towards self-realization and growth.

Emphasizing this balance acknowledges the necessity of biases as functional shortcuts while striving to keep them in check to avoid overshadowing the richness of individual experiences.

Compassionately

This way, the Compassionate perspective values both the efficiency and the depth of learning, aiming to enrich the human experience rather than merely optimizing it.

This stance reflects a commitment to treating the user’s cognitive and emotional realms with care and respect, promoting a harmonious integration of technology into personal growth and well-being.

Broader implication: a Compassionate perspective

The dialogue between the concrete and the abstract is not just an intellectual exercise but a practical one that affects how we integrate AI technologies into our social fabric.

AURELIS advocates for a model where technology not only understands but also respects human values and complexities. This involves Lisa’s being able to adapt and respond to the nuanced needs of human emotions and psychological well-being.

This is especially needed in a world increasingly managed by digital interactions.

Ensuring that A.I. can operate across this spectrum – from concrete facts to abstract human experiences – means advocating for systems that enhance, rather than diminish, our ability to be fully human.

It’s about cultivating A.I. that understands and respects the human condition, offering insights tailored not by generic algorithms alone but by a nuanced understanding of each person’s unique mental landscape. This fosters an environment where technology and humanity coexist in supportive, sustainable ways.
