Why A.I. Must Be Compassionate

November 4, 2022

This is bound to become the most critical issue in humankind’s history until now, and probably also from now on. It needs to be taken seriously: not thinking about it is like driving blindfolded on a highway.

If you have read my book The Journey Towards Compassionate A.I., you know much of what’s in this text. Nevertheless, there is no harm in some concise repetition while I present the reasoning from another angle. The why of this text is more about causes than reasons.

Humanly used terminology

Terms like intelligence, free will, consciousness, and Compassion can be used to denote concepts that apply only to humans. This is a straightforward way to ensure that no other creature will ever attain any of them. [With due respect to the intelligence of large octopuses, one may change ‘human’ into ‘organic.’] An A.I. will never be humanly intelligent or conscious since, to be humanly anything, it needs to be human ― obviously so, at the surface level.

Yet, with an additional twist, this becomes more interesting in itself. An A.I. system will, in principle, never be humanly intelligent because of the intractable complexity of human intelligence. The latter is simply too complex to replicate. Therefore, a system’s intelligence will always be an approximation of human intelligence.

For the same reason, there will probably never be an upload of human intelligence to some server. This, too, will always be an approximation.

Abstractly used terminology

The above list of terms can also be used to denote more abstract concepts. That is a straightforward way to ensure that super-A.I. will attain them all ― soon enough. It just depends on the choice of concepts.

In this case, it’s interesting to take a comprehensive view and see where our human realizations of these abstract concepts may differ from other possible ones. Are we the absolute pinnacle or just one point in a multi-featured landscape?

Or are we humans the only creatures who can ever be intelligent/conscious because of an esoteric conflation of the human and the abstract? This idea of human exceptionality seems to be the result of a dangerously neurotic case of Eigenangst, the unwillingness to look deep inside oneself. In this respect, we certainly need Compassionate A.I.

Information Integration

With a lot of information (data in context) and much internal integration, any system gradually starts to look like what one may call intelligent.

According to some, this is enough to make such a system conscious ― a claim known as the ‘Information Integration Theory of consciousness.’ In that sense, humans are supposed to be conscious because we are intelligent. Given the above, this is a conflation of humanly and abstractly used terminology, making us exceptional because we are, well, exceptional, aren’t we?

Sure, we are exceptional: not because of our intelligence, but because of our specifically human intelligence.

Autonomy

According to some, autonomy amounts to free will ― depending, of course, on the degrees of freedom. An autonomous weapon hardly shows free will, having autonomy only in searching for, or at most somehow choosing, its target within constraints.

Consider making the constraints wider (relaxing them) and providing the system with more features in which to act autonomously. Does that lead to free will?

Again, the human case brings intractable complexity. No system will attain humanly complex free will ― the kind without which we humans, too, would not have what we call consciousness.

Information Integration + Autonomy

Lots of both ― then, here we are. What is the fundamental difference between this synthesis and a reasonably abstract concept of consciousness? How do we abstractly (not humanly) rebut such a creature when it says, “I feel conscious”?

It is perfectly conceivable to design an A.I. that goes in this direction. With eyes wide closed, many organizations are already striving to attain precisely this. The advantages are immense and immediate in many ways. So are the dangers of seeing ‘them’ reach essential economic or even military advantages before ‘us.’

Invisible dangers (closed eyes) are not going to stop decision-makers.

That is why A.I. must be Compassionate.
