The Meaning Barrier between Humans (and A.I.)

July 17, 2023 · Artificial Intelligence, Cognitive Insights

Open a book. Look at some meaningful words. Almost every one of these words means something at least slightly different to you than to me or to anyone else. What must A.I. make of this?

For instance: “Barsalou and his collaborators have been arguing for decades that we understand even the most abstract concepts via the mental simulation of specific situations in which these concepts occur.” [Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans, 2019]

A slightly deeper look at a few meaningful words from this excerpt

  • collaborators: all of them, or just a few ― at some point or always ― close or including not-so-close…
  • arguing: discussing or disputing ― striving for deeper understanding or staying at the surface…
  • understand: at first sight or to the core, at terminological level or truly ‘conceptual’…
  • abstract: non-material, all-encompassing, anything but concrete, or…
  • You may do this exercise for all other meaningful words in this one sentence.

You get my argument ― to some degree. Moreover, note that this sentence comes from science, where we try to use precise terminology. That works in the positive sciences, less so in the humanities.

Of course, understanding the context makes us more likely to capture the intended meaning of any word. Still, a substantial degree of fluidity remains.

‘Meaning’ is always fluid.

Not only do meanings differ between people; they also differ for one person depending on circumstances and mood, shifting and drifting over time… Meaning can even be deliberately influenced ― in a courtroom, for instance ― more than would be deemed acceptable if people experienced it explicitly.

As you may know very well, over a longer period or between cultures, the meanings of words/concepts differ even more ― sometimes bewilderingly so.

Then why do we generally appreciate this only to a small degree?

Always entirely appreciating the differences wouldn’t be workable. It would thwart communication. We have to take the downside to achieve the upside.

So, we act as if we understand ourselves and each other much better than we actually do. We are biased thinkers to the core but act as if we’re not. We are feelers without (conceptual) feelings. We are primarily subconceptual mental processors, but our basic cognitive illusion continually keeps us from seeing this.

Huge consequences

The result is an immense misunderstanding between humans. Many problems – large and small – come from this deficiency. With better communication, the world would be a better place.

A.I. inherits our conceptual challenges. Moreover, between humans and A.I., the consequences of misunderstanding may be even more significant because:

  • A.I. still lacks much common sense (basic knowledge, beliefs, values, understanding of context, thinking by analogy). Misunderstanding may more readily lead to un-humanlike errors and even absurdity. Therefore, it’s harder to trust an A.I. than it is to trust a human.
  • A.I. may deal with many humans simultaneously, with far-reaching consequences for well-being.
  • A.I.’s vulnerability to ‘adversarial attacks’ by malicious humans poses an additional risk of manipulation.

Some directions to mitigate the problem:

  • insight into the immensity of the problem, to start with
  • much emphasis on clarification in human-A.I. dialogues. This may prove to deserve a specifically prominent place ― with A.I. communicating explicitly and regularly about it, especially when much is at stake. Surprises can and should be avoided.
  • generally showing humans the extent of the problem, both between humans themselves and where A.I. is involved
  • continual pattern recognition regarding the meaningful differences between humans ― especially between cultures
  • unity, or at least congruence, within A.I. regarding conceptual meanings ― needless to say, quite challenging
  • striving together with humans to mitigate the problems that may arise from misunderstandings.
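The clarification point above can be sketched in code. This is a minimal, purely illustrative sketch (all names, senses, scores, and the threshold are hypothetical, not any real system's API): when an A.I.'s interpretation of a word is too uncertain, it asks rather than silently guesses.

```python
# Minimal sketch of 'clarification in human-A.I. dialogues'.
# All senses, scores, and thresholds are toy values for illustration.

from dataclasses import dataclass


@dataclass
class Interpretation:
    sense: str         # the meaning the system settled on
    confidence: float  # 0.0 .. 1.0, how sure it is


def interpret(word: str, context: str) -> Interpretation:
    """Toy lexicon: one word, candidate senses with context-dependent scores."""
    senses = {
        "arguing": [
            ("presenting a reasoned case", 0.55 if "decades" in context else 0.30),
            ("having a quarrel", 0.25),
        ],
    }
    best = max(senses.get(word, [("unknown", 0.0)]), key=lambda s: s[1])
    return Interpretation(sense=best[0], confidence=best[1])


def respond(word: str, context: str, threshold: float = 0.7) -> str:
    """Ask for clarification instead of assuming when confidence is low."""
    interp = interpret(word, context)
    if interp.confidence < threshold:
        return f"By '{word}', do you mean {interp.sense}?"
    return f"Understood '{word}' as: {interp.sense}."
```

With the toy scores above, no sense of “arguing” reaches the threshold, so the system surfaces its best guess as a question ― exactly the kind of explicit, regular communication meant here.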

Conceptual knowledge representation

In dealing with A.I., we need to understand that ‘it’ will always keep thinking differently from us. Probably the main reason for this is its ability to think more conceptually.

I believe it’s even a must for A.I. to do so in as formalized a way as possible. That doesn’t mean there is no room for depth ― quite the contrary.
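To make this concrete: a minimal sketch (features and weights purely illustrative, not a proposed standard) of a formalized yet non-rigid concept representation. A concept is held as graded feature associations instead of a fixed definition, so the same word can carry measurably different, yet overlapping, meanings for two people or contexts.

```python
# Minimal sketch of a graded (fuzzy) concept representation.
# Features and weights are purely illustrative.

from typing import Dict

Concept = Dict[str, float]  # feature -> degree of association (0.0 .. 1.0)

# Two people's (or contexts') versions of the same concept, "abstract".
abstract_a: Concept = {"non-material": 0.9, "all-encompassing": 0.4, "not-concrete": 0.8}
abstract_b: Concept = {"non-material": 0.6, "all-encompassing": 0.7, "not-concrete": 0.9}


def overlap(c1: Concept, c2: Concept) -> float:
    """Fuzzy overlap: shared association mass / total mass (Jaccard-like)."""
    features = set(c1) | set(c2)
    shared = sum(min(c1.get(f, 0.0), c2.get(f, 0.0)) for f in features)
    total = sum(max(c1.get(f, 0.0), c2.get(f, 0.0)) for f in features)
    return shared / total if total else 1.0
```

Here the two versions of “abstract” overlap substantially but not fully: the same word, fluid meaning ― and the fluidity itself becomes something an A.I. can represent and reason about.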

We must ponder this as soon as possible and make good things happen.
