Why to Invest in Compassionate A.I.

August 13, 2020 Artificial Intelligence, Philanthropically

Most A.I. engineers have a limited view of organic intelligence, let alone consciousness or Compassion. That’s a huge problem.

Indeed, I’ve written a book about Compassionate A.I.

See: “The Journey Towards Compassionate A.I.: Who We Are – What A.I. Can Become – Why It Matters”

Am I trying to attract investors for this now? Or am I just passionate about Compassionate A.I.? It is the latter, of course. Trying to make it succeed is part of the passion. I think this is of utmost importance. As an M.D., Ph.D. (body-mind), and Master in A.I., I have a uniquely holistic view of the domain. I see immense possibilities and challenges that are rarely, if ever, considered in depth where it matters.

Are you an investor in A.I.?

Then you have a huge responsibility towards the future. That entails the distant future, but also the near one: your lifetime, and certainly the lifetime of your children.

And in several ways, it’s relevant already now.

Non-Compassionate A.I. is immensely dangerous.

I explain this extensively in my book, in which I describe two main dangers on the road toward real A.I.:

  • one human-made: the abuse/misuse of the technology
  • one A.I.-made: arising on the path toward more and more autonomous A.I.

You may be certain of this: we are progressing towards autonomy in A.I. If you still have any doubts, I suggest you read the book. Admittedly, we see little autonomy at present. Moreover, the apparent striving is towards automation, not autonomy.

Or is it? Are we not simply building machines in order to use them? Think again.

There is a fuzzy border between an automaton and an autonomous system.

Of course, nobody in their right mind develops systems with the explicit aim of having them overtake humanity, the developer included.

In practice, however, striving for real A.I. may turn out to be much more dangerous once real intelligence is attained in an artificial medium. The world of A.I. is still at the stage of enhanced information processing, and the striving is towards ever more powerful information processing. Here also lies a fuzzy border: between information and intelligence. Once the latter is attained, we’re close to autonomous systems.

I don’t mean something that merely looks like autonomy. We are talking about the real thing: volition, free will. More and more, the system decides on its own goals.

This is the logical consequence of what is already known.

In research, we know quite a lot about the path or ‘journey’ that I follow in my book: from data → information → knowledge (intelligence) → consciousness → wisdom (Compassion). It is, eventually, a path that one can discern in the organic/human case. This case (ourselves) can teach us a lot about how the same path can be viewed at a more abstract level.

The next stage after this is to implement this path in another medium (silicon, later on: light, quantum).

This will not be easily done – and should certainly not be done – by engineers only. One needs many proper insights to make the translation towards abstraction, and from there towards the different concrete medium. One needs even more insight to steer this in a direction that is not extremely dangerous. The main point is the following:

One should strive towards Compassion BEFORE real intelligence is attained in A.I.

I don’t see that being done well at present. Note that this is very different from the mainstream in ‘ethical A.I.’ There are even additional huge dangers in this. For instance, striving towards a self-explainable A.I. system (one that gives the user explanations for each of its decisions) also makes the system more self-explainable towards itself. Through this, with some slight twist, the system gains vast flexibility, self-learning capability, and the ability to make domain-transcending associations. Please don’t say I didn’t warn about it. Not reckoning with this would be very, very naïve.

That’s why I insist on “not by engineers only.” They’re smart in their field but seldom see the truly bigger picture.

Investors have a huge responsibility

At present, the course of A.I. development is not mainly driven by academia, with its labs and ample time to think through the deeply ethical consequences, challenges, and dangers of A.I. The course of A.I. is mainly driven by investments.

So far, I haven’t explained much about Compassion (with the mysterious capital C). “Read the book,” they say, and I can only agree. Compassionate A.I. can give the highest return on investment – only not in straightforward ways – because it is the most human-centered.

Moreover, for the same reason, it’s the only humanely worthwhile future we’ve got.
