Artificial Intentionality

October 15, 2019 · Artificial Intelligence

Intentionality – the fact of being deliberate or purposive (Oxford Dictionary) – originates in the complexity of integrated information. Will A.I. ever show intentionality?

In my view, A.I. will show intentionality rather soon

Twenty years ago, I thought it would be around now (2020). Right now, I think it will be 20 years from now.

“All right,” you say, “that’s easy: every 20 years, you add 20.”

😊

It doesn’t depend on the hardware. It didn’t 20 years ago either – projecting from then to now, the hardware should by now have evolved more than enough. Indeed, it has.

It depends on insight.

Insight into what it means to have intentionality, whether human or anything else.

This insight has evolved in the last 20 years. Not enough, but if it keeps evolving at the same pace, we’ll be there in another 20 years.

It depends on trial and error

Lots of it. Right now, we have very powerful A.I. systems working in constrained domains. Through trial and error, engineers are building the necessary experience. It’s going on exponentially.

No kidding. ‘Exponentially’ means there will be immensely more build-up of experience in the next 20 years than in all the previous ones combined.

Why Artificial Intentionality?

In my view – again – intentionality is unavoidable once a ‘system’ acquires more and more complexity. So, either we avoid further complexity – which is impossible for many reasons – or we accept the intentionality of the systems that we are developing.

Why not?

Because it’s immensely dangerous… No human being will be able to control a system that is 10^N times more powerful than humanity as a whole.

That’s it: so simple.

How?

Like it or not: intentionality will be hidden within deeper layers. Nowadays, with ANNs (artificial neural networks), we are already in domains that utterly lack explainability in human-understandable terms. The most we can aspire to is accountability. [see: “A.I. Explainability versus ‘the Heart’”]
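To make the explainability point concrete, here is a minimal sketch in Python with NumPy – my own illustration, not anything from AURELIS, with all names and weights chosen by hand. A handful of weights suffices to compute XOR exactly, yet nothing in the numbers themselves reads as ‘XOR’ in human terms.

```python
# Minimal sketch (assumption: Python + NumPy) of ANN opacity.
# These few hand-set weights compute XOR exactly, yet the
# parameters themselves carry no human-readable 'meaning'.
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Parameters as they might come out of training (here set by hand).
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])
b2 = 0.0

def net(x):
    # Two-layer feedforward pass: hidden ReLU layer, then a linear readout.
    return W2 @ relu(W1 @ x + b1) + b2

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", net(np.array(x, dtype=float)))
# Prints 0, 1, 1, 0: exactly XOR.
```

Scale this up to millions of weights, and tracing parameters back to human-understandable reasons becomes hopeless – which is why accountability, not full explainability, is the realistic aim.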

With growing complexity, systems will become even more unexplainable – just like human intentionality.

Moreover, although many people think they can consciously control their own intentionality, we humans can hardly do so, if at all. Intentionality is foremost a non-conscious happening.

This dissociation between what we think and what we do in this regard is precisely the cause of much suffering: addiction, depression, anxiety, psychosomatic conditions, etc.

The aim of AURELIS is to alleviate this dissociation in the human case. Very probably, the same principles can be used in Lisa’s case [see: “Lisa”], to make her more accountable.

But it will always be partial, in both cases.

The best we can do, therefore, is indeed to look at both cases and try to make both more humane, as far as possible.

Do you like a challenge?
