Why to Invest in Compassionate A.I.

August 13, 2020 · Artificial Intelligence

Most A.I. engineers have a limited view of organic intelligence, let alone of consciousness or Compassion. That’s a huge problem.

Indeed, I’ve written a book about Compassionate A.I.

See: “The Journey Towards Compassionate A.I.: Who We Are – What A.I. Can Become – Why It Matters”

Am I trying to attract investors for this now? Or am I just passionate about Compassionate A.I.? It is the latter, of course. Trying to make it succeed is part of the passion. I think this is of utmost importance. As an M.D., Ph.D. (body-mind), and Master in A.I., I have a uniquely holistic view of the domain. I see immense possibilities and challenges that are hardly, if at all, taken into account in depth where it matters.

Are you an investor in A.I.?

Then you have a huge responsibility towards the future. That entails the distant future, but also the near one: your lifetime, and certainly the lifetime of your children.

And in several ways, it’s relevant already now.

Non-Compassionate A.I. is immensely dangerous.

I explain this extensively in my book, in which I describe two main dangers on the road towards real A.I.:

  • one human-made, abuse/misuse of technology
  • another A.I.-made, on the path towards more and more autonomous A.I.

You may be certain of this: we are progressing towards autonomy in A.I. If you still have any doubts, I suggest you read the book. Admittedly, we see little of it at present; moreover, the striving seems to be towards automation, not autonomy.

Or is it? Are we not building our machines to use them? Think again.

There is a fuzzy border between an automaton and an autonomous system.

Of course, nobody in his right mind is developing systems with the explicit aim of having them overtake humanity, the developer included.

In practice, however, striving for real A.I. may turn out to be much more dangerous once real intelligence is attained in an artificial medium. The world of A.I. is still in a stage of enhanced information processing, striving towards ever more performant information processing. Here, too, lies a fuzzy border: between information and intelligence. Once the latter is attained, we’re close to autonomous systems.

I don’t mean something that looks like autonomy. We are talking about the real thing: volition, free will. More and more, the system decides about its own goals.

This is the logical consequence of what is already known.

In research, we know quite a lot about the path or ‘journey’ that I follow in my book, from data → information → knowledge (intelligence) → consciousness → wisdom (Compassion). It is, eventually, a path that one can discern in the organic/human case. This case (ourselves) can teach us a lot about how the same path can be looked upon at a more abstract level.

The next stage after this is to implement this path in another medium (silicon, later on: light, quantum).

This will not be easily done – and should certainly not be done – by engineers only. One needs many proper insights to make the translation towards the abstract level and, from there, towards a different concrete medium. One needs even more proper insights to guide this in a direction that is not extremely dangerous. The main point is the following:

One should strive towards Compassion BEFORE real intelligence is attained in A.I.

I don’t see that being done well at present. Note that this is very different from the mainstream in ‘ethical A.I.’ There are even additional huge dangers in this. For instance, striving towards a self-explainable A.I. system (one that gives the user explanations for each of its decisions) also makes the system more self-explainable towards itself. Through this, with some slight twist, the system gains a vast amount of flexibility, self-learning capability, and the ability to make domain-transcending associations. Hm. Please don’t say I didn’t warn about it. Not reckoning with this should be seen as very, very naïve.

That’s why I insist on “not by engineers only.” They’re smart in their field but seldom see the truly bigger picture.

Investors have a huge responsibility

At present, the course of A.I. development is not mainly driven by academia, in labs with lots of time to think about the deeply ethical consequences, challenges, and dangers of A.I. It is mainly driven by investments.

So far, I haven’t explained much about Compassion (with the mysterious capital C). “Read the book,” one might say, and I can only agree. Compassionate A.I. can give the highest return on investment – only not in straightforward ways – because it is the most human-centered.

Moreover, for the same reason, it’s the only humanely worthwhile future we’ve got.
