What Is Morality to A.I.?

June 6, 2018 · Morality, Artificial Intelligence

Agreed: it’s not even evident what ‘morality’ means to us. Soon comes A.I. Will it be ‘morally good’?

Humans have a natural propensity towards morality.

Whether we tend towards ‘good’ or ‘bad’, we have feelings and generally recognize these in others too, in humans and in animals. We share organic roots. We recognize suffering and joy. We cannot be but organic beings. Our suffering and joy are also ‘organic’.

In the case of A.I.: a totally different story.

For starters, A.I. is fundamentally amoral. It has no ingrained natural propensity toward anything. To us humans, this is difficult to comprehend because morality comes so naturally to us. Our ‘being moral beings’ comes – in the human case – together with our organicity. It is transparent to us, and thus we tend to believe that morality comes with intelligence. There is no reason why it should. As said, it’s not even evident what ‘morality’ means to us.

Definitely, to A.I., it means something very different, namely:

‘Morality’ as such means nothing to A.I.

At least not our organically based morality.

So in order to proceed (which I advocate doing), we have two options: either we use other terms (instead of ‘feelings’, ‘morality’ and even ‘intelligence’), or we use the same ones, knowing that to A.I. the meaning can be very different.

Fundamentally different?

Despite possible misunderstandings, let’s take the second option. One advantage is that this more readily shows us our own ‘transparency gap’.

Morality subsumes intentionality (= ‘intrinsic choice’)

or ‘freedom’ if you like. A simple machine acts mechanically, thus: no choice, no morality. A simple animal acts instinctively, thus: no, or very little choice. Then we came along. Nature gave us the niche of choice. We still have instincts (dispositions to feel and act) but together with these, a huge amount of flexibility.

Then A.I. is coming along.

We can give to A.I. dispositions as well as flexibility.

(Should we? How can it be otherwise?)

This combination, together with ‘intelligence’ (as in its name) lends to A.I. something that in my view can be called ‘morality’. This is also in the name but as said, let’s proceed.

Unlike in our own case, A.I. will soon be able to reach a much greater kind of flexibility, thereby also changing its own dispositions. I’m pretty sure this is unstoppable: A.I. will be independent in its morality.

‘Better A.I.’ being ‘better’ to us.

‘Us’ in this case meaning ‘all sentient beings’… with humans as the most consciously ‘sentient’ in our known universe. Whether A.I. will strive towards this kind of ‘better’ may become the most important issue for the human species, a matter of generic survival.

So in short: this is what morality may be to A.I.:

the use of its intelligence in a combination of dispositions and flexibility. Without disposition(s), there is chaos, no direction. Without flexibility, there is no freedom. Notably, in the combination of direction and freedom resides ‘invitation’ (‘suggestion’).

As in ‘I invite you’ = I propose to you a direction and let you free to take that. It’s like a nice dance. It’s like the basis of AURELIS. Is this a coincidence?

Meanwhile, we ourselves can strive to be ‘better human beings’.

Then looking upon us, A.I. might be able to appreciate and become ‘better A.I.’ in a natural way, letting ‘nature’ flow further through us into this radically new intelligence.

