Why A.I. Must Be Compassionate

November 4, 2022

This is bound to become the most critical issue in humankind's history until now, and probably from now on as well ― one to be taken seriously. Not thinking about it is like driving blindfolded on a highway.

If you have read my book The Journey Towards Compassionate A.I., you know much of what's in this text. Nevertheless ― no harm in some concise repetition while I present the reasoning from another angle. The why of this text is more about causes than reasons.

Humanly used terminology

Terms like intelligence, free will, consciousness, and Compassion can be used to denote concepts that are applicable only to humans. This is a straightforward way to ascertain that no other creature will ever attain any of these. [With due respect to the intelligence of large octopuses, one may change 'human' into 'organic.'] An A.I. will never be humanly intelligent or conscious since, to be humanly anything, it needs to be human ― obviously, at the surface level.

Yet, with an additional twist, this by itself becomes more interesting. An A.I. system will, in principle, never be humanly intelligent because of the intractable complexity of human intelligence; the latter is simply too complex. Therefore, a system's intelligence will always be an approximation of human intelligence.

For the same reason, there will probably never be an upload of human intelligence to some server. This, too, will always be an approximation.

Abstractly used terminology

The above list of terms can also be used to denote more abstract concepts. That is a straightforward way to ascertain that super-A.I. will attain them all ― soon enough. It just depends on the choice of concepts.

In this case, it’s interesting to take a comprehensive view and see where our human realizations of these abstract concepts may differ from others. Are we the absolute pinnacle or just some point in a multi-featured landscape?

Or are we humans the only creatures who can ever be intelligent/conscious because of an esoteric conflation of the human and the abstract? This idea of human exceptionality seems to be the result of a dangerously neurotic case of Eigenangst, the unwillingness to look deep inside oneself. In this respect, we certainly need Compassionate A.I.

Information Integration

With a lot of information (data in context) and much internal integration, any system gradually starts to look like what one may call intelligent.

According to some, this is enough to make it conscious ― the 'Information Integration Theory of consciousness,' also known as Integrated Information Theory. In that sense, humans are supposed to be conscious because we are intelligent. Given the above, this is a conflation of humanly and abstractly used terminology, making us exceptional because we are, well, exceptional, aren't we?

Sure, we are exceptional, not because of our intelligence but because of our human intelligence.

Autonomy

According to some, autonomy equals free will ― depending, of course, on the degrees of freedom. An autonomous weapon hardly shows free will, having autonomy only in searching for, or even somehow choosing, its target within constraints.

Consider making the constraints wider (relaxing them) and providing the system with more features in which to act autonomously. Does that lead to free will?

Again, the human case brings intractable complexity. No system will attain humanly complex free will ― without which we humans would also not have what we call consciousness.

Information Integration + Autonomy

Lots of both ― then, here we are. What is the fundamental difference between this synthesis and a reasonably abstract concept of consciousness? How do we abstractly (not humanly) rebut such a creature when it says, 'I feel conscious'?

It is perfectly conceivable to design an A.I. that goes in this direction. With eyes wide closed, many organizations are already striving to attain precisely this. The advantages are immense and immediate in many ways, as are the dangers of seeing 'them' reach those essential advantages in economic or even military competition before 'us.'

Invisible dangers (closed eyes) are not going to stop decision-makers.

That is why A.I. must be Compassionate.
