The Meaning Barrier between Humans (and A.I.)

July 17, 2023 ― Artificial Intelligence, Cognitive Insights

Open a book. Look at some meaningful words. Almost every one of these words means something at least slightly different to you than to me or anyone else. What must A.I. make of this?

For instance: “Barsalou and his collaborators have been arguing for decades that we understand even the most abstract concepts via the mental simulation of specific situations in which these concepts occur.” [Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans, 2019]

A slightly deeper look at a few meaningful words from this excerpt

  • collaborators: all of them, or just a few ― at some point or always ― close or including not-so-close…
  • arguing: discussing or debating ― striving for deeper understanding or staying at the surface…
  • understand: at first sight or to the core, at terminological level or truly ‘conceptual’…
  • abstract: non-material, all-encompassing, anything but concrete, or…
  • You may do this exercise for all other meaningful words in this one sentence.

You get my argument ― to some degree. Note, moreover, that this sentence comes from science, where we try to use precise terminology. This works in the positive sciences, less so in the humanities.

Of course, understanding the context makes us more likely to capture the correct meaning of any word used. Still, a substantial degree of fluidity remains.

‘Meaning’ is always fluid.

Not only do meanings differ between people; they also differ for one person depending on circumstances and mood, shifting and drifting over time. This can be deliberately exploited ― in court, for instance ― more than would be deemed acceptable if experienced explicitly.

As you may well know, across longer periods or between cultures, the meanings of words/concepts differ even more ― sometimes bewilderingly so.

Then why do we generally appreciate this only to a small degree?

Always entirely appreciating the differences wouldn’t be workable. It would thwart communication. We have to take the downside to achieve the upside.

So, we act as if we understand ourselves and each other much better than we actually do. We are biased thinkers to the core but act as if we’re not. We are feelers without (conceptual) feelings. We are primarily subconceptual mental processors, yet our basic cognitive illusion continually keeps us from seeing this.

Huge consequences

The result is an immense misunderstanding between humans. Many problems – large and small – come from this deficiency. With better communication, the world would be a better place.

A.I. inherits our conceptual challenges. Moreover, between humans and A.I., the consequences of misunderstanding may be even more significant because:

  • A.I. lacks much common sense (basic knowledge, beliefs, values, understanding the context, thinking by analogy) ― at least for now. Misunderstanding may more readily lead to un-humanlike errors and even absurdity. Therefore, it’s harder to trust an A.I. than it is to trust a human.
  • A.I. may interact with many humans simultaneously, with far-reaching consequences for well-being.
  • A.I.’s vulnerability to ‘adversarial attacks’ by malicious humans poses an additional risk of manipulation.

Some directions to mitigate the problem:

  • insight into the immensity of the problem, to start with
  • much emphasis on clarification in human-A.I. dialogues. This may prove to deserve a particularly prominent place ― with A.I. communicating about it explicitly and regularly, especially when it matters a lot. Surprises can and should be avoided.
  • generally showing humans the extent of the problem, both among humans themselves and where A.I. is involved
  • continual pattern recognition regarding the meaningful differences between humans ― especially between cultures
  • having unity, or at least congruence, within A.I. regarding conceptual meanings ― needless to say, quite challenging
  • striving together with humans to mitigate the problems that may arise from possible misunderstandings.
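The clarification direction above can be made concrete. Here is a minimal, purely illustrative sketch of an A.I. that flags potentially ambiguous terms in a user’s message and asks clarifying questions before proceeding. The lexicon of ambiguous terms and their candidate senses is invented for this example; a real system would need far richer, context-sensitive knowledge.

```python
# Toy sketch: surface-level clarification in a human-A.I. dialogue.
# AMBIGUOUS_TERMS is a made-up mini-lexicon for illustration only.

AMBIGUOUS_TERMS = {
    "abstract": ["non-material", "all-encompassing", "anything but concrete"],
    "understand": ["at first sight", "to the core", "at a terminological level"],
}

def clarifying_questions(message):
    """Return one clarifying question per ambiguous term found in the message."""
    questions = []
    for word in message.lower().split():
        word = word.strip(".,;:!?")  # drop trailing punctuation
        if word in AMBIGUOUS_TERMS:
            options = " / ".join(AMBIGUOUS_TERMS[word])
            questions.append(f"By '{word}', do you mean {options}?")
    return questions

for q in clarifying_questions("Do you understand abstract concepts?"):
    print(q)
```

Even this naive version illustrates the principle: rather than silently picking one interpretation, the system surfaces its uncertainty ― avoiding the surprises mentioned above.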

Conceptual knowledge representation

In dealing with A.I., we need to understand that ‘it’ will always keep thinking differently from us. Probably the main reason for this is its ability to think more conceptually.

I believe it’s even a must for A.I. to do so in as formalized a way as possible. That doesn’t mean there is no room for depth ― quite the contrary.
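One hypothetical way to combine formalization with depth is to represent a concept not as a single fixed meaning but as a weighted set of senses that context can shift. The structure below is my own toy illustration, not an existing system:

```python
# Toy sketch: a concept as a weighted set of senses, shiftable by context.
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    senses: dict = field(default_factory=dict)  # sense -> base weight

    def dominant_sense(self, context_boost=None):
        """Return the sense with the highest weight after applying
        optional context-dependent boosts."""
        weights = dict(self.senses)
        for sense, boost in (context_boost or {}).items():
            weights[sense] = weights.get(sense, 0.0) + boost
        return max(weights, key=weights.get)

bank = Concept("bank", {"river edge": 0.3, "financial institution": 0.7})
print(bank.dominant_sense())                     # no context: financial sense
print(bank.dominant_sense({"river edge": 0.5}))  # river context wins
```

The point is the design choice: meaning stays formalized (explicit senses, explicit weights) yet fluid (context can reorder them) ― echoing the fluidity of meaning discussed earlier.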

We must ponder this as soon as possible and make good things happen.
