The Future is Prediction

May 20, 2021 · Artificial Intelligence, Health & Healing

I approach the concept of prediction from different angles to show their common ground. Through this, one may glimpse its future importance.

The Future of A.I.

The concept of prediction pops up regularly in different views of future A.I. developments. One instance is temporal difference (TD) learning, as expanded upon by Prof. Richard Sutton within reinforcement learning and beyond. [1]

In TD learning – staying away from the many formulas – one can picture each decision as localized in a 'bubble' in time, making optimal use of the information available there. The future is not yet available inside the bubble, so it must be predicted from just outside it. And from within the bubble, even that prediction must itself be estimated.

So, basically, it’s a guess of a guess.

Slightly scary?

Continuously, the bubble slides forward in time. The guess becomes less of a guess, and the system learns. This is probably the best way for the system to learn. It is also the most resource-efficient way: computationally cheap and timing-friendly. Of course, it can (and should) be combined with other techniques when appropriate. It becomes really beautiful in dynamic combinations. Do you get a feel for what I mean?
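For readers who do want a small formula after all, the 'guess of a guess' can be made concrete in a few lines of code. The sketch below is an illustrative TD(0) update on a toy chain of states (states, rewards, and parameter values are my own example, not from the text): the value of a state is nudged toward a target that itself contains another estimated value, which is exactly the bootstrapping idea.

```python
# A minimal sketch of TD(0) value learning on a tiny chain of states.
# The "guess of a guess": V[s] is updated toward r + gamma * V[s_next],
# where V[s_next] is itself only an estimate (the bootstrap).

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One temporal-difference step: nudge V[s] toward the bootstrapped target."""
    target = r + gamma * V[s_next]   # prediction from "just outside the bubble"
    V[s] += alpha * (target - V[s])  # adjust toward the prediction error
    return V

# Toy example: three states in a line; reward 1.0 only on reaching the last one.
V = [0.0, 0.0, 0.0]
for _ in range(200):                 # as the bubble slides forward repeatedly,
    V = td0_update(V, 0, 0.0, 1)     # the guesses firm up
    V = td0_update(V, 1, 1.0, 2)     # transition into the terminal state

print(round(V[0], 2), round(V[1], 2))  # -> 0.9 1.0
```

Note how state 0 never sees a reward directly; its value comes entirely from the (initially wrong) estimate of state 1, which is the resource-efficient trick the text alludes to.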

The natural way

Nature also invented mechanisms to deal with prediction. One of them is our brain. Our sensory and motor brain systems are based on prediction, followed by adjustment for errors. [see: “The Brain as a Predictor“]

Together with the ubiquitous mental-neuronal patterns, prediction forms an ever-expanding source of natural (and even more, artificial) innovation. Even the way we humans ‘do conceptual thinking’ is through prediction.

There are also many practical natural consequences related to health(care), such as in the domain of placebo. [see: “Placebo and the Predictive Brain“]

For many engineers, this may look like a weird choice by nature. It is counterintuitive at first sight for a mechanical mind. However, as we just saw, high-level engineers of the future see it otherwise. This is not the only counterintuitive field in A.I. When things get very complex, they start getting organic. An example for insiders: random forests of decision trees.

Natural trio

Forming a trio with ubiquitous patterns and prediction, a third element appears: nature generally works with the aim of ‘good enough.’

In short, time and again, perfect imperfection proves more worthwhile than pure perfection, with more robust and, in the long run, more effective results.

Also, perfect imperfection is easily more exciting.

Relevance to causal thinking

One can see causal thinking as predicting within the past and predicting the present from the past, with the aim of predicting the future from the present. In animal evolution, those able to do this were the ones that survived. This way, a pure cause-orientation gradually turned into future-orientation.

Thus, causal and predictive thinking became of immense importance to us as well, so natural that we mostly don’t notice it.

In coaching

Here, prediction lies in uncertainty: in guessing what the coachee means before the coach knows it for sure, and sometimes even before the coachee consciously knows it.

This is much more powerful in the coaching dialogue than either saying nothing, literally repeating what has just been said, or acting from a formal ‘coach knows better’ attitude.

The right degree and quality of the coach’s prediction enable the coachee to respond to this in a warm and lively way that invites him to gain new experiences and meaningfulness. This enables him to grow. It is Compassion in action. [see: “Essence of Compassion“]

Additionally, it allows the coach to grow. It may even be necessary for the coach’s growth. By contrast, the use of mechanical instruments (therapeutic modalities) may easily impede spontaneous prediction, and thus also learning and growth, even over many years.

Lisa

A.I. and coaching come together in Lisa. [see: “Lisa“] Many of these lessons are learned and applied in Lisa at different levels of abstraction. Moreover, Lisa will be fertile soil for many more insights.

Hm, this promises to be a fascinating future.

References

[1] Richard S. Sutton and Andrew G. Barto, Reinforcement Learning: An Introduction, Second Edition, 2018.
