The Meaning Barrier between Humans (and A.I.)

July 17, 2023 Artificial Intelligence, Cognitive Insights

Open a book. Look at some meaningful words. Almost every one of these words means something at least slightly different to you than it does to me or to anyone else. What must A.I. make of this?

For instance: “Barsalou and his collaborators have been arguing for decades that we understand even the most abstract concepts via the mental simulation of specific situations in which these concepts occur.” [Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans, 2019]

A slightly deeper look at a few meaningful words from this excerpt

  • collaborators: all of them or just a few ― at some point or always ― only close ones or also not-so-close…
  • arguing: discussing or disputing ― striving for deeper understanding or staying at the surface…
  • understand: at first sight or to the core ― at the terminological level or truly ‘conceptually’…
  • abstract: non-material, all-encompassing, anything but concrete, or…
  • You may do this exercise for all other meaningful words in this one sentence.

You get my argument ― to some degree. Moreover, note that this sentence comes from science, where we try to use precise terminology. That works in the positive sciences, less so in the humanities.

Of course, understanding the context makes us more likely to capture the correct meaning of any word used. Still, a substantial degree of fluidity remains.
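This fluidity can be made somewhat tangible with present-day language models. The following is a minimal Python sketch, not anything from the book quoted above: it assumes the transformers and torch libraries with the bert-base-uncased model, plus the fact that the probed word maps to a single token in that model's vocabulary. It compares the contextual vector of one word in two sentences; the vectors differ considerably, much as human readings do.

```python
# Minimal sketch of meaning-fluidity in a language model.
# Assumptions (not from the article): transformers and torch are
# installed; the probed word is a single token in bert-base-uncased.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of `word` within `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (tokens, dim)
    word_id = tokenizer.convert_tokens_to_ids(word)
    position = (inputs["input_ids"][0] == word_id).nonzero()[0, 0].item()
    return hidden[position]

# The same word in two contexts: the model's 'meaning' shifts with context.
v_river = word_vector("She rested on the bank of the river.", "bank")
v_money = word_vector("He opened an account at the bank.", "bank")
similarity = torch.cosine_similarity(v_river, v_money, dim=0).item()
print(f"cosine similarity: {similarity:.3f}")  # well below 1.0
```

The exact number matters less than the point: even inside one model, ‘bank’ has no single fixed meaning.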

‘Meaning’ is always fluid.

Not only do meanings differ between people; they also differ for one person according to circumstances and mood, or simply by shifting/drifting over time… This can be explicitly influenced, as happens in court, for instance ― more than would be deemed acceptable if it were explicitly experienced as such.

As you may know very well, across a more extended period or between cultures, the meanings of words/concepts differ even more ― sometimes bewilderingly so.

Then why do we generally appreciate this only to a small degree?

Always entirely appreciating the differences wouldn’t be workable. It would thwart communication. We have to take the downside to achieve the upside.

So, we act as if we understand ourselves and each other much better than we actually do. We are biased thinkers to the core but act as if we’re not. We are feelers without (conceptual) feelings. We are primarily subconceptual mental processors, but our basic cognitive illusion continually keeps us from seeing this.

Huge consequences

The result is an immense misunderstanding between humans. Many problems – large and small – come from this deficiency. With better communication, the world would be a better place.

A.I. inherits our conceptual challenges. Moreover, between humans and A.I., the consequences of misunderstanding may be even more significant because:

  • A.I. still lacks much common sense (basic knowledge, beliefs, values, understanding of context, thinking by analogy). Misunderstanding may therefore more readily lead to un-humanlike errors and even absurdity, making it harder to trust an A.I. than to trust a human.
  • A.I. may deal with many humans simultaneously, with far-reaching consequences for well-being.
  • A.I.’s vulnerability to ‘adversarial attacks’ by malicious humans poses an additional risk of manipulation.

Some directions to mitigate the problem:

  • insight into the immensity of the problem, to start with
  • much emphasis on clarification in human-A.I. dialogues. This may prove to deserve a specifically prominent place ― with A.I. communicating explicitly and regularly about it, especially when much is at stake. Surprises can and should be avoided. (See the sketch after this list.)
  • generally showing humans the extent of the problem, both between humans themselves and where A.I. is involved
  • continual pattern recognition regarding the meaningful differences between humans ― especially between cultures
  • achieving unity, or at least congruence, within A.I. regarding conceptual meanings ― needless to say, pretty challenging
  • striving together with humans to mitigate the problems that may arise from possible misunderstandings.
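As a toy illustration of the clarification point above: a dialogue step that surfaces an ambiguity to the user instead of silently guessing. Everything here (the function name, the mini-lexicon, the wording) is hypothetical, invented for this sketch rather than taken from any real system.

```python
# Toy sketch of explicit clarification in a human-A.I. dialogue.
# The mini-lexicon and all names are hypothetical, for illustration only.

AMBIGUOUS_READINGS = {
    "bank": ["the side of a river", "a financial institution"],
    "abstract": ["non-material/general", "the summary of a paper"],
}

def respond(user_utterance: str) -> str:
    """Ask which reading is meant before acting on an ambiguous word."""
    lowered = user_utterance.lower()
    for word, readings in AMBIGUOUS_READINGS.items():
        if word in lowered:
            options = " or ".join(readings)
            # Surface the ambiguity explicitly: no silent guessing.
            return f"When you say '{word}', do you mean {options}?"
    return "Understood."  # placeholder for the normal dialogue pipeline

print(respond("Please shorten the abstract."))
# -> When you say 'abstract', do you mean non-material/general or the summary of a paper?
```

A real system would, of course, detect ambiguity statistically and weigh the cost of asking against the cost of guessing wrong; the point is the explicit, regular communication about meaning.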

Conceptual knowledge representation

In dealing with A.I., we need to understand that ‘it’ will always keep thinking differently from us. Probably the main reason for this is its ability to think more conceptually.

I believe it’s even a must for A.I. to do so in as formalized a way as possible. That doesn’t mean there is no room for depth ― quite the contrary.
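One way to picture such formalization while keeping room for depth: represent a concept not as a crisp definition but as a set of weighted associations, so that two agents' versions of the ‘same’ concept can be compared rather than assumed identical. The sketch below uses invented names and weights; it is an illustration, not an actual A.I. design.

```python
from dataclasses import dataclass, field

# Sketch of a graded (non-crisp) concept representation.
# All feature names and weights are invented for illustration.

@dataclass
class Concept:
    name: str
    features: dict[str, float] = field(default_factory=dict)  # association strength, 0..1

def overlap(a: Concept, b: Concept) -> float:
    """Weighted Jaccard-style overlap between two concept representations."""
    keys = set(a.features) | set(b.features)
    shared = sum(min(a.features.get(k, 0.0), b.features.get(k, 0.0)) for k in keys)
    total = sum(max(a.features.get(k, 0.0), b.features.get(k, 0.0)) for k in keys)
    return shared / total if total else 0.0

# Two agents' versions of 'abstract' overlap only partially.
mine = Concept("abstract", {"non-material": 0.9, "general": 0.7, "hard-to-grasp": 0.3})
yours = Concept("abstract", {"non-material": 0.5, "hard-to-grasp": 0.8, "summary": 0.6})
print(f"overlap: {overlap(mine, yours):.2f}")  # well below 1.0
```

Such a representation makes the meaning barrier measurable: where overlap is low, clarification (as sketched earlier) matters most.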

We must ponder this as soon as possible and make good things happen.
