Why Invest in Compassionate A.I.

August 13, 2020 – Artificial Intelligence, Philanthropically

Most A.I. engineers have a limited view of organic intelligence, let alone of consciousness or Compassion. That’s a huge problem.

Indeed, I’ve written a book about Compassionate A.I.

See: “The Journey Towards Compassionate A.I.: Who We Are – What A.I. Can Become – Why It Matters”

Am I trying to attract investors now? Or am I just passionate about Compassionate A.I.? The latter, of course – though trying to make it succeed is part of the passion. I think this is of the utmost importance. As an M.D., Ph.D. (body-mind), and Master in A.I., I have a uniquely holistic view of the domain. I see immense possibilities and challenges that are hardly, if ever, considered in real depth where it matters.

Are you an investor in A.I.?

Then you have a huge responsibility towards the future. That means the distant future, but also the near one: your lifetime, and certainly your children’s.

And in several ways, it’s relevant already now.

Non-Compassionate A.I. is immensely dangerous.

I explain this extensively in my book, in which I describe two main dangers on the road towards real A.I.:

  • one human-made: the abuse or misuse of the technology
  • one A.I.-made, arising on the path towards ever more autonomous A.I.

You may be certain of this: we are progressing towards autonomy in A.I. If you still have any doubts, I suggest you read the book. Admittedly, we see little of it at present; moreover, the stated aim is automation, not autonomy.

Or is it? Are we not building our machines to use them? Think again.

There is a fuzzy border between an automaton and an autonomous system.

Of course, nobody in their right mind is developing systems with the explicit aim that they overtake humanity – the developer included.

In practice, however, striving for real A.I. may turn out to be much more dangerous once real intelligence is attained in an artificial medium. The world of A.I. is still at the stage of enhanced information processing, and the striving is towards ever more performant processing. Here, too, lies a fuzzy border: between information and intelligence. Once the latter is attained, we are close to autonomous systems.

I don’t mean something that merely looks like autonomy. We are talking about the real thing: volition, free will. More and more, the system decides its own goals.

This is the logical consequence of what is already known.

In research, we know quite a lot about the path or ‘journey’ that I follow in my book: from data → information → knowledge (intelligence) → consciousness → wisdom (Compassion). It is, essentially, a path one can discern in the organic, human case. That case – ourselves – can teach us a lot about how the same path looks at a more abstract level.

The next stage is to implement this path in another medium: silicon, and later on, light or quantum.

This will not be easy – and should certainly not be done by engineers only. It takes many proper insights to make the translation towards the abstract and from there towards a different concrete medium. It takes even more insight to steer this in a direction that is not extremely dangerous. The main point is the following:

One should strive towards Compassion BEFORE real intelligence is attained in A.I.

I don’t see that being done well at present. Note that this is very different from the mainstream of ‘ethical A.I.’ There are even additional huge dangers here. For instance, striving towards a self-explainable A.I. system – one that gives the user an explanation for each of its decisions – also makes the system more self-explainable towards itself. Through this, with some slight twist, the system gains a vast amount of flexibility, self-learning capability, and the ability to make domain-transcending associations. Hm. Please don’t say I didn’t warn about it. Not reckoning with this would be very, very naïve.

That’s why I insist on “not by engineers only.” They’re smart in their field but seldom see the truly bigger picture.

Investors have a huge responsibility

At present, the course of A.I. development is not mainly driven by academia – by labs with plenty of time to think through the deeply ethical consequences, challenges, and dangers of A.I. It is mainly driven by investments.

So far, I haven’t explained much about Compassion (with the mysterious capital C). “Read the book,” they say, and I can only concur. Compassionate A.I. can give the highest return on investment – though not in straightforward ways – because it is the most human-centered.

Moreover, for the same reason, it’s the only humanely worthwhile future we’ve got.
