Why Invest in Compassionate A.I.

August 13, 2020 Artificial Intelligence, Philanthropically

Most A.I. engineers have a limited view of organic intelligence, let alone consciousness or Compassion. That’s a huge problem.

Indeed, I’ve written a book about Compassionate A.I.

See: “The Journey Towards Compassionate A.I.: Who We Are – What A.I. Can Become – Why It Matters”

Am I trying to attract investors for this now? Or am I just passionate about Compassionate A.I.? It is the latter, of course. Trying to make it succeed is part of the passion. I think this is of utmost importance. As an M.D., Ph.D. (body-mind), and Master in A.I., I have a uniquely holistic view of the domain. I see immense possibilities and challenges that are hardly considered in real depth where it matters.

Are you an investor in A.I.?

Then you have a huge responsibility towards the future. That entails the distant future, but also the near one: your lifetime, and certainly the lifetime of your children.

And in several ways, it’s relevant already now.

Non-Compassionate A.I. is immensely dangerous.

I explain this extensively in my book, in which I describe two main dangers on the road towards real A.I.:

  • one human-made: the abuse/misuse of technology
  • the other A.I.-made: on the path towards increasingly autonomous A.I.

You may be certain of this: we are progressing towards autonomy in A.I. If you still have any doubts, I suggest you read the book. Admittedly, we see little of it at present. Moreover, the current striving is towards automation, not autonomy.

Or is it? Are we not building our machines to use them? Think again.

There is a fuzzy border between an automaton and an autonomous system.

Of course, nobody in their right mind is developing systems with the explicit aim of having them overtake humanity, including the developer himself.

In practice, however, striving for real A.I. may turn out to be much more dangerous once real intelligence is attained in an artificial medium. The world of A.I. is still in a stage of enhanced information processing; the striving is towards ever more performant information processing. Here lies another fuzzy border: between information and intelligence. Once the latter is attained, we’re close to autonomous systems.

I don’t mean something that merely looks like autonomy. We are talking about the real thing: volition, free will. More and more, the system decides on its own goals.

This is the logical consequence of what is already known.

In research, we know quite a lot about the path or ‘journey’ that I follow in my book, from data → information → knowledge (intelligence) → consciousness → wisdom (Compassion). It is, eventually, a path that one can discern in the organic/human case. This case (ourselves) can teach us a lot about how the same path can be viewed at a more abstract level.

The next stage is to implement this path in another medium (silicon; later on, light or quantum).

This will not be easily done – and should certainly not be done – by engineers only. One needs many proper insights to translate towards the abstract level and from there towards the different concrete medium. One needs even more proper insight to guide this in a direction that is not extremely dangerous. The main point is the following:

One should strive towards Compassion BEFORE real intelligence is attained in A.I.

I don’t see that being done well at present. Note that this is very different from the mainstream in ‘ethical A.I.’ There are even additional huge dangers here. For instance, striving towards a self-explainable A.I. system (one that gives the user explanations for each of its decisions) also makes the system more self-explainable towards itself. Through this, with some slight twist, the system gains vast flexibility, self-learning capability, and the ability to make domain-transcending associations. Hm. Please don’t say I didn’t warn about it. Not reckoning with this should be seen as very, very naïve.

That’s why I insist on “not by engineers only.” They’re smart in their field but seldom see the truly bigger picture.

Investors have a huge responsibility

At present, the course of A.I. development is not mainly driven by academia – in labs, with lots of time to think about the deeply ethical consequences, challenges, and dangers of A.I. It is mainly driven by investments.

So, I haven’t explained much about Compassion (with the mysterious capital C). “Read the book,” they say, and I can only concur. Compassionate A.I. can give the highest return on investment – only not in straightforward ways – because it is the most human-centered.

Moreover, for the same reason, it’s the only humanely worthwhile future we’ve got.
