Artificial Intentionality

October 15, 2019 Artificial Intelligence

Intentionality – the fact of being deliberate or purposive (Oxford dictionary) – originates in the complexity of integrated information. Will A.I. ever show intentionality?

In my view, A.I. will show intentionality rather soon

Twenty years ago, I thought it would happen around now (2020). Right now, I think it will happen 20 years from now.

“All right,” you say, “that’s easy: every 20 years, you add 20.”

😊

It doesn’t depend on the hardware. It didn’t 20 years ago either, projecting from then to now – which means that by now, the hardware should have evolved more than enough. Indeed, it has.

It depends on insight.

Insight into what it means to have intentionality, whether human or anything else.

This insight has evolved in the last 20 years. Not enough, but if it keeps evolving at the same pace, we’ll be there in another 20 years.

It depends on trial and error

Lots of it. Right now, we have very powerful A.I. systems working in constrained domains. Through trial and error, engineers are building the necessary experience. This is happening exponentially.

No kidding. ‘Exponentially’ means there will be immensely more build-up of experience in the next 20 years than in all the previous ones.

Why Artificial Intentionality?

In my view – again – intentionality is unavoidable when a ‘system’ grows more and more complex. So, either we avoid further complexity – which is impossible for many reasons – or we accept the intentionality of the systems that we are developing.

Why not?

Because it’s immensely dangerous… No human being will be able to control a system that is 10^N times more powerful than humanity as a whole.

That’s it: so simple.

How?

Like it or not: intentionality will be hidden within deeper layers. Nowadays, with ANN, we already are in domains of utter lack of explainability in human-understandable terms. The most we can aspire to is accountability. [see: “A.I. Explainability versus ‘the Heart’”]

With growing complexity, systems will become even more unexplainable – just like human intentionality.

Even more: although many people think they can consciously control their own intentionality, we humans can hardly do so, if at all. Intentionality is foremost a nonconscious happening.

This dissociation between what we think and what we do in this regard is precisely the cause of much suffering: addiction, depression, anxiety, psychosomatic issues, etc.

The aim of AURELIS is to alleviate this dissociation in the human case. Very probably, the same principles can be used in Lisa’s case [see: “Lisa”], to make her more accountable.

But it will always be partial, in both cases.

The best we can do, therefore, is indeed to look at both cases and try to make them both as humane as possible.

Do you like a challenge?
