From Concrete to Abstract

April 29, 2024 · Artificial Intelligence, Cognitive Insights

In daily life, many people treat ‘concrete’ and ‘abstract’ as dichotomous ends of a straightforward spectrum, often without much thought.

This distinction is also relevant to how these concepts function in inferential patterns.

One example is the set of mental-neuronal patterns in humans.

However, the muddy underlying reality becomes especially apparent when trying to realize these concepts in an A.I. environment, where the distinction is relevant to anything related to learning and intelligence. A proper balance is mandatory:

  • Getting too concrete/detailed impedes learning.
  • Getting too abstract glosses over concrete situations, leading to an excess of bias.

Efficiency also plays a role. Without accepting some level of bias, one can become unremittingly bogged down in an overflow of details. See also Bias is Our Thinking.
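The trade-off can be made tangible with a minimal, hypothetical sketch (not part of Lisa or any actual system): a learner that predicts by matching stored examples on a chosen subset of attributes. Matching on all attributes is maximally concrete; matching on one is maximally abstract. The attribute names and labels are invented for illustration.

```python
from collections import Counter

def predict(examples, query, attrs):
    """Predict a label for `query` by majority vote over stored examples
    that match it on the given subset of attributes.
    A larger `attrs` set is more concrete; a smaller one, more abstract."""
    votes = Counter(
        label for features, label in examples
        if all(features[a] == query[a] for a in attrs)
    )
    return votes.most_common(1)[0][0] if votes else None

# Illustrative, invented training data.
examples = [
    ({"color": "red",   "size": "small", "shape": "round"}, "fruit"),
    ({"color": "red",   "size": "large", "shape": "long"},  "vegetable"),
    ({"color": "red",   "size": "large", "shape": "flat"},  "vegetable"),
    ({"color": "green", "size": "small", "shape": "round"}, "fruit"),
]

# A red, round item of a size never seen before.
query = {"color": "red", "size": "medium", "shape": "round"}

too_concrete = predict(examples, query, ["color", "size", "shape"])  # no exact match: learning impeded
balanced     = predict(examples, query, ["color", "shape"])          # generalizes to "fruit"
too_abstract = predict(examples, query, ["color"])                   # redness alone biases toward "vegetable"
```

Being too concrete yields no answer at all; being too abstract yields a confident but biased one. The in-between level generalizes correctly from the available examples.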

In daily life, the term ‘concrete’ mostly refers to a level in-between.

It’s what captures the daily mind most immediately. Thus, ‘concrete’ is used here as a relative concept — relative to the observer and, one can say, also to the circumstances.

To complicate matters, ‘concrete’ can in this sense denote something more abstract than the very concrete itself. This is a pragmatic issue. Of course, being pragmatic, it’s also most realistically valuable. In any case, one needs to search for an optimal level of abstraction.

AURELIS

Note that AURELIS emphasizes the importance of both the conceptual (abstract) and the subconceptual (concrete) layers of experience. This dual approach allows for a more Compassionate interaction with oneself, where the individual is not just a passive recipient of information but an active participant in shaping his mental processes.

Therefore, this is of utmost relevance to the development of Lisa.

The Lisa case

Lisa’s inferential prowess in handling abstract and concrete concepts is still the responsibility of the developers. In due time, Lisa will be able to self-learn optimal levels, also according to circumstances, driven by human-set rewards.

But there will always be a trade-off between advantages and disadvantages — even when using two levels simultaneously (which, by the way, is how the human brain seems to work).
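How reward-driven self-learning of an abstraction level might look can be sketched as a toy choice among a few fixed levels. The level names and reward values below are illustrative assumptions, not Lisa’s actual mechanism: each level is tried once, after which the level with the best average human-set reward is exploited.

```python
# Hypothetical reward signal, as if set by humans per abstraction level.
REWARD = {"very_concrete": 0.2, "in_between": 0.9, "very_abstract": 0.4}

def learn_level(rounds=30):
    """Try each abstraction level once, then greedily pick the one
    with the best average reward so far. Returns the learned level."""
    totals = {level: 0.0 for level in REWARD}
    counts = {level: 0 for level in REWARD}
    for _ in range(rounds):
        untried = [lv for lv in REWARD if counts[lv] == 0]
        level = untried[0] if untried else max(
            totals, key=lambda lv: totals[lv] / counts[lv])
        totals[level] += REWARD[level]   # collect the human-set reward
        counts[level] += 1
    return max(totals, key=lambda lv: totals[lv] / counts[lv])

best = learn_level()
```

In this sketch, the system converges on the in-between level simply because that is where the reward lies; a real system would additionally have to track changing circumstances rather than a fixed reward table.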

Importance of the balance

This challenge of finding the right balance between the concrete and the abstract mirrors the ongoing journey of human self-understanding and growth. It enables Lisa to serve as a companion in the user’s journey towards self-realization and growth.

Emphasizing this balance acknowledges the necessity of biases as functional shortcuts while striving to keep them in check to avoid overshadowing the richness of individual experiences.

Compassionately

This way, the Compassionate perspective values both the efficiency and the depth of learning, aiming to enrich the human experience rather than merely optimizing it.

This stance reflects a commitment to treating the user’s cognitive and emotional realms with care and respect, promoting a harmonious integration of technology into personal growth and well-being.

Broader implication: a Compassionate perspective

The dialogue between the concrete and the abstract is not just an intellectual exercise but a practical one that affects how we integrate AI technologies into our social fabric.

AURELIS advocates for a model where technology not only understands but also respects human values and complexities. This involves Lisa’s being able to adapt and respond to the nuanced needs of human emotions and psychological well-being.

This is especially needed in a world increasingly managed by digital interactions.

Ensuring that A.I. can operate across this spectrum – from concrete facts to abstract human experiences – means advocating for systems that enhance, rather than diminish, our ability to be fully human.

It’s about cultivating A.I. that understands and respects the human condition, offering insights that are tailored not by generic algorithms alone but by a nuanced understanding of each person’s unique mental landscape, fostering an environment where technology and humanity coexist in supportive, sustainable ways.


