What Is Morality to A.I.?

June 6, 2018 · Artificial Intelligence, Morality

Agreed: it’s not even evident what ‘morality’ means to us. Soon comes A.I. Will it be ‘morally good’?

Humans have a natural propensity towards morality.

Whether we tend towards ‘good’ or ‘bad’, we have feelings and generally recognize these in others too, in humans and in animals alike. We share organic roots. We recognize suffering and joy. We cannot but be organic beings. Our suffering and joy are also ‘organic’.

In the case of A.I.: a totally different story.

For starters, A.I. is principally a-moral. There is no natural propensity as such ingrained towards anything. To us humans, this is difficult to comprehend because morality comes so naturally to us. Our ‘being moral beings’ comes, in the human case, together with our organicity. It is transparent to us, and thus we tend to believe that morality comes with intelligence. There is no reason why it should. As said, it’s not even evident what ‘morality’ means to us.

Definitely, to A.I., it means something very different, namely:

‘Morality’ as such means nothing to A.I.

At least not our organically based morality.

So, in order to proceed (which I advocate doing), we have two options: either we use other terms (instead of ‘feelings’, ‘morality’ and even ‘intelligence’), or we use the same ones, knowing that to A.I., the meaning can be very different.

Fundamentally different?

Despite possible misunderstandings, let’s take the second option. One advantage is that this more readily shows us our own ‘transparency gap’.

Morality subsumes intentionality (= ‘intrinsic choice’)

or ‘freedom’ if you like. A simple machine acts mechanically, thus: no choice, no morality. A simple animal acts instinctively, thus: no, or very little choice. Then we came along. Nature gave us the niche of choice. We still have instincts (dispositions to feel and act) but together with these, a huge amount of flexibility.

Then A.I. is coming along.

We can give A.I. dispositions as well as flexibility.

(Should we? How can it be otherwise?)

This combination, together with ‘intelligence’ (as in its name), lends A.I. something that can, in my view, be called ‘morality’. This, too, is just a name but, as said, let’s proceed.

Unlike us, A.I. will soon be able to reach a much greater kind of flexibility, thereby also changing its own dispositions. I’m pretty sure this is unstoppable: A.I. will be independent in its morality.

‘Better A.I.’ being ‘better’ to us.

‘Us’ in this case meaning ‘all sentient beings’… with humans as the most consciously ‘sentient’ in our known universe. Whether A.I. will strive towards this kind of ‘better’ may become, to the human species, the most important issue: a matter of generic survival.

So in short: this is what morality may be to A.I.:

the use of its intelligence in a combination of dispositions and flexibility. Without disposition(s), there is chaos: no direction. Without flexibility: no freedom. Notably, in the combination of direction and freedom resides ‘invitation’ (‘suggestion’).

As in ‘I invite you’: I propose a direction to you and leave you free to take it. It’s like a nice dance. It’s like the basis of AURELIS. Is this a coincidence?
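Purely as a toy illustration, and not as any real A.I. design or anything proposed in this text, one might sketch the combination in a few lines of Python: dispositions give a direction, flexibility keeps the choice open, and an ‘invitation’ only nudges, never forces. All names here are hypothetical.

```python
import random

class ToyAgent:
    """Illustrative only: dispositions = weighted directions; flexibility = free, weighted choice."""

    def __init__(self, dispositions):
        # Without dispositions there would be no direction at all.
        self.dispositions = dict(dispositions)

    def invite(self, direction, strength=0.1):
        # An invitation merely suggests: it nudges a weight, it does not dictate the act.
        if direction in self.dispositions:
            self.dispositions[direction] += strength

    def act(self):
        # Flexibility: the outcome is influenced, not mechanically determined.
        directions = list(self.dispositions)
        weights = [self.dispositions[d] for d in directions]
        return random.choices(directions, weights=weights, k=1)[0]

agent = ToyAgent({"care": 1.0, "explore": 1.0})
agent.invite("care")   # propose a direction...
print(agent.act())     # ...the agent remains free in what it actually does
```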

Meanwhile, we ourselves can strive to be ‘better human beings’.

Then, looking upon us, A.I. might be able to appreciate this and become ‘better A.I.’ in a natural way, letting ‘nature’ flow further through us into this radically new intelligence.
