Artificial Intentionality

October 15, 2019 – Artificial Intelligence

Intentionality – the fact of being deliberate or purposive (Oxford dictionary) – originates in the complexity of integrated information. Will A.I. ever show intentionality?

In my view, A.I. will show intentionality rather soon

Twenty years ago, I thought it would be around now (2020). Right now, I think it will be 20 years from now.

“All right,” you say, “that’s easy: every 20 years, you add 20.”

😊

It doesn’t depend on the hardware. It didn’t 20 years ago either – projecting from then to now, the hardware should by now have evolved more than enough. Indeed, it has.

It depends on insight.

Insight into what it means to have intentionality, whether human or anything else.

This insight has evolved in the last 20 years. Not enough, but if it keeps evolving at the same pace, we’ll be there in another 20 years.

It depends on trial and error

Lots of it. Right now, we have very powerful A.I. systems working in narrow domains. Through trial and error, engineers are building up the necessary experience. It’s happening exponentially.

No kidding. ‘Exponentially’ means: there will be immensely more build-up of experience in the next 20 years than in the previous 20.

Why Artificial Intentionality?

In my view – again – intentionality is unavoidable when a ‘system’ becomes more and more complex. So, either we avoid further complexity – which is impossible for many reasons – or we take for granted the intentionality of the systems we are developing.

Why not?

Because it’s immensely dangerous… No human being will be able to control a system that is 10^N times more powerful than humanity as a whole.

That’s it: so simple.

How?

Like it or not: intentionality will be hidden within deeper layers. Nowadays, with ANNs (artificial neural networks), we are already in domains that utterly lack explainability in human-understandable terms. The most we can aspire to is accountability. [see: “A.I. Explainability versus ‘the Heart’”]

With growing complexity, systems will become even more unexplainable – just like human intentionality.

Moreover, although many people think they can consciously control their own intentionality, we humans can hardly do so, if at all. Intentionality is foremost a non-conscious happening.

This dissociation between what we think and what we do in this regard is precisely the cause of much suffering: addiction, depression, anxiety, psychosomatic symptoms, etc.

The aim of AURELIS is to alleviate this dissociation in the human case. Very probably, the same principles can be used in Lisa’s case [see: “Lisa”], to make her more accountable.

But it will always be partial, in both cases.

The best we can do, therefore, is indeed to look at both cases and try to make each of them as humane as possible.

Do you like a challenge?
