Future A.I.: Fluid or Solid?

March 24, 2021 Artificial Intelligence

Humans are fluid thinkers. That gives us huge strength and some major challenges. The one does not go without the other. A.I. – including Semantic A.I. – is still a very different matter.

Through proper context, data becomes information.

Still, the information stored in a book is in no way like the information that we ‘store’ in human memory.

Likewise, smart data in a present-day A.I. system is not in any way like data stored in the human brain/mind. For starters, human ‘software’ = hardware. Mind IS brain ― at least, to a substantial degree.

Likewise, human intelligence is not in any way like so-called intelligence in present-day A.I. Mainly, human intelligence is much more fluid.

Even human solidity in thinking is embedded in fluidity.

In humans, every concept, every piece of solidity, every thought and feeling is the result of fluidity. Our stream of consciousness is a stream of non-conscious fluidity of which only parts break through the surface of conscious awareness. This way, we think parallel and distributed. [see: “Patterns in Neurophysiology”]

This makes us robust and highly efficient. It comes at a price: proneness to logical errors, forgetfulness, biases of all kinds, dissociation leading to health problems in body and mind. We should probably not want to create a duplicate of ourselves, even if that would be possible.

This is also about the age-old question of culture versus nature. [see: “Culture and Nature”]

Why should A.I. need to be fluid?

Fluidity is needed for A.I. to see reality as only a fluid system can see it. What we call ‘common sense’ certainly pertains to this domain.

In dealing with humans, only a fluid system can understand us as we naturally are. This needs to be reckoned with as A.I.-systems become more complex. We should strive not to robotize humans through A.I. applications. [see: “Robotizing Humans or Humanizing Robots”]

Semantic A.I.

= the use of knowledge graphs + present-day A.I. technologies + natural language processing.

A ‘semantic net’ is a knowledge graph, basically a huge bunch of concepts and conceptual links.
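To make the idea of “a huge bunch of concepts and conceptual links” concrete, here is a minimal, purely illustrative sketch of such a net in Python. The class name, relations, and example concepts are my own assumptions, not part of any particular semantic A.I. system.

```python
from collections import defaultdict

class SemanticNet:
    """A toy knowledge graph: concepts as nodes, conceptual links as labeled edges."""

    def __init__(self):
        # Maps each concept to its outgoing (relation, concept) links.
        self.links = defaultdict(set)

    def add_link(self, concept, relation, other):
        self.links[concept].add((relation, other))

    def related(self, concept, relation):
        """Return all concepts linked from 'concept' via 'relation'."""
        return {c for (r, c) in self.links[concept] if r == relation}

net = SemanticNet()
net.add_link("dog", "is_a", "mammal")
net.add_link("mammal", "is_a", "animal")
net.add_link("dog", "has_part", "tail")

print(net.related("dog", "is_a"))  # {'mammal'}
```

Real knowledge graphs use the same basic shape (subject, relation, object), only at a vastly larger scale and with richer machinery around it.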

The aim of semantic A.I. is an integration of top-down processing (~knowledge graphs – deductive) and bottom-up processing (~deep neural nets – inductive). The former is solid; the latter is fluid. In this striving, it resembles the human case, but not in the way it tries to perform the combination. This last word is crucial. A combination is not a synthesis, not an intrinsically integrated whole. Whatever integration may be achieved is not from-inside-out.
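The distinction between combination and synthesis can be sketched in code. In the following hedged illustration (all names, data, and thresholds are my own assumptions), a deductive knowledge-graph lookup and an inductive scorer remain two separate modules glued together by a simple rule; nothing inside either module is changed by the other, which is precisely what makes this a combination rather than a synthesis.

```python
# Toy knowledge base of (subject, relation, object) facts.
KNOWLEDGE = {("dog", "is_a", "mammal"), ("mammal", "is_a", "animal")}

def deductive_holds(subject, relation, obj):
    # Top-down (solid): follow explicit links transitively.
    if (subject, relation, obj) in KNOWLEDGE:
        return True
    return any(deductive_holds(mid, relation, obj)
               for (s, r, mid) in KNOWLEDGE
               if s == subject and r == relation)

def inductive_score(subject, relation, obj):
    # Bottom-up (fluid) stand-in: a trained model would return
    # a learned probability here; this mock just returns a constant.
    return 0.9 if subject == "dog" else 0.2

def combined_answer(subject, relation, obj, threshold=0.5):
    # The 'combination': logic first, statistics as fallback.
    # The two components never merge into one integrated process.
    if deductive_holds(subject, relation, obj):
        return True
    return inductive_score(subject, relation, obj) > threshold

print(combined_answer("dog", "is_a", "animal"))  # True (deduced transitively)
```

The glue rule could be made arbitrarily sophisticated, but the two processing streams would still run side by side rather than forming one from-inside-out whole.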

Semantic A.I. does not think parallel and distributed in a human way.

Compassionate A.I. takes proper care of fluidity.

As a general principle, solid A.I. – as any solidity – is only meaningful if it serves as a container for fluidity. This follows from the very definition of meaningfulness. [see: “The Meaning of Meaning”] The information in a book is not meaningful by itself. It only becomes so through a (fluid) reader.

To meaningfully heal – from inside out, mentally and psycho-somatically – as well as to attain and enhance our typically human potential, fluidity is necessary. A coaching chatbot should be Compassionate.

In my view, of course, all A.I. should be Compassionate to a fair degree. [see: “The Journey Towards Compassionate A.I.”] This entails taking care of fluidity as appropriate, thinking profoundly about the Compassionate side even when it doesn’t appear relevant at first sight.

Only this way does the future look Compassionate.
