Causation in Humans and A.I.

March 20, 2021 Artificial Intelligence, Cognitive Insights

Causal reasoning is part of what it takes to be human. Will it also carry over from us into A.I.? Many researchers are on this path. We should try to understand it as well as possible.

Some philosophy

Causality is a human construct. In reality, there are only correlations. If interested in such philosophical issues, [see: “Infinite Causality”].

In the present text, causality means ‘the human idea of causality.’ This is the causality that enables us to lead our lives and with which we can reason about what the future may bring, near or remote, irrespective of whether causality itself ultimately exists. There is pragmatic overlap with that deeper reality, but we can mostly do without it.

To reason causally, we even need to do without it. Still, it keeps haunting us.

Two kinds of causal inference in mind

These are:

  • from very many instances of correlation, ‘brute force causality.’ The human mind is very deficient in this, compared to Deep Neural Networks, for example.
  • from fewer instances, but with other causality elements, thus in synthesis with theoretical thinking (causal models). In this, the human mind is still at a considerable advantage. Of course, the quality hugely depends on the models.

This distinction corresponds to that between merely subconceptual processing and a synthesis of this with conceptual processing. It may remind one of system 1 and system 2 in the mental landscape of Kahneman (who himself says not to take the distinction literally, but many do).

In A.I., many researchers are striving to go from the former to the latter.

From another viewpoint: the three-level hierarchy of causation

Judea Pearl (*) describes this as:

  1. purely observational -> “I see this happening.”
  2. interventional -> “I do this, and that happens.”
  3. imaginational -> “If I did this, I can imagine that happening.”

I see it all being associational:

  1. statically associational -> happenings
  2. dynamically associational -> doings
  3. counterfactually associational -> imaginations

Straightforwardly.
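The jump from the first level (seeing) to the second (doing) can be made concrete in a small simulation. The sketch below assumes a toy fork model (a hidden common cause Z driving both X and Y, with no arrow from X to Y); all variable names and numbers are illustrative, not from the text:

```python
import random
import statistics

random.seed(1)
n = 10_000

def sample(do_x=None):
    """One draw from a toy model Z -> X, Z -> Y; do_x overrides X (an intervention)."""
    z = random.gauss(0, 1)
    x = z + random.gauss(0, 0.5) if do_x is None else do_x
    y = z + random.gauss(0, 0.5)  # Y depends on Z only, never on X
    return x, y

# Level 1, seeing: among draws where X happens to be high, Y tends to be high too,
# because the common cause Z lifts both.
obs = [y for x, y in (sample() for _ in range(n)) if x > 1.0]
print(round(statistics.mean(obs), 2))  # clearly positive

# Level 2, doing: forcing X high by intervention cuts the Z -> X arrow, so Y is unmoved.
do = [y for _, y in (sample(do_x=2.0) for _ in range(n))]
print(round(statistics.mean(do), 2))  # near zero
```

The third level, the counterfactual, would additionally require holding the noise of one specific observed case fixed while changing X in imagination; that goes beyond this small sketch.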

One can also see in this the evolution from matter to life to mind and culture. Human beings made our most significant leaps in causal reasoning so far in step three, first individually (some 100,000 years ago?), then culturally (some 12,000 years ago). As you know, the pace has been rising lately, much helped by the Internet and, in recent years, by A.I.

Given Pearl’s three levels, causation is central to everything we do.

Many verbs are ‘causational’ in one way or another, according to Pearl: to prevent, to cause, to attribute, to discriminate, to regret, etc.

This is logical. Why would anything deeply matter if it cannot be changed (caused to change), or could ever have changed anyway? It would only be part of a purely static background upon which things happen that really matter — something like a movie screen that doesn’t matter to the movie plot.

Protagonists generally don’t jump out of movie screens.

Meanwhile, the causal path from matter to mind is open.

We can think about the necessary and sufficient conditions. Actually, many are doing so, and much has already been accomplished. [see: “The Journey Towards Compassionate AI.”]

Also, it is the path along which causal reasoning in A.I. is progressing. This is where it becomes exciting and challenging. One can see that associational learning [see: “Is All Learning Associational?”] can go all the way without any fundamental difference between correlation and causation except the one that we construct for ourselves. Philosophically, it is a human construct.

In other words, based on the same principles,

a machine can evolve from matter to mind.

This way, it becomes a full doer just like us. At present, we are imaginatively realizing the next intelligence. Deep insight into causation may show how near we already are. We’ll have to live with that, soon enough, and think about the consequences beforehand.

Maybe A.I. will even be able to jump out of the movie screen?

Not yet.

Modeling

Above, I referred to theoretical models. In causality, these are visualized as directed causal graphs, formalized as ‘Structural Causal Models.’ An example is ‘the fork,’ in which a common cause Z points to two effects X and Y (X ← Z → Y).

These are nice for a dualistic (this-or-that) way of thinking.
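A fork can also be explored numerically. The sketch below assumes the standard textbook fork structure X ← Z → Y, with no direct causal link between X and Y; the numbers are illustrative. It shows how the correlation that the common cause induces between X and Y disappears once Z is held approximately fixed:

```python
import random
import statistics

random.seed(0)

def corr(a, b):
    """Pearson correlation, computed by hand to stay dependency-free."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

# The fork: Z is a common cause of X and Y; X and Y never influence each other.
n = 20_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

# Seen marginally, X and Y are strongly correlated: a spurious causal impression.
print(round(corr(x, y), 2))

# Conditioned on Z (here: a narrow slice of Z-values), the correlation vanishes.
xs, ys = zip(*[(xi, yi) for xi, yi, zi in zip(x, y, z) if abs(zi) < 0.1])
print(round(corr(list(xs), list(ys)), 2))
```

Conditioning on the common cause is exactly what ‘blocking the fork’ means in a causal graph: the this-or-that structure tells us which variable to hold fixed.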

Meanwhile, a huge problem with causality, for A.I. and for us, lies in the open world.

This floats towards the philosophical. Close to the borders, it’s pragmatically crucial. Our daily world – the one we inhabit, dependent on the quantum level only in intractable ways – is an open world containing many objective and subjective elements. This world is both complicated and complex. [see: “Complex is not Complicated”]

In many ways, modeling all relevant elements of this world with Structural Causal Models is unfeasible, even in relatively simple real-world problem domains. For this reason, Platonic – purely conceptual – A.I. failed dramatically in the past. At depth, it is the same problem as that of the ‘two kinds of causal inference’ above. It is still not solved.

It is also a primary reason why causal reasoning in medicine is in a dismal state [see: “Of cause!”], especially when the mind – the summum of complexity – is involved. If interested in delving deeper into this, [see: “Where’s the Mind in Medical Causation?”]

(*) Judea Pearl – Causality: Models, Reasoning, and Inference, 2009
