Open Letter about Compassionate A.I. (C.A.I.) to Elon Musk

April 22, 2023

And to any Value-Driven Investor (VDI), in or out of the worldly spotlight. This is a timely call for Compassionate A.I. (C.A.I.)

Compassion and A.I. are seldom mentioned together. Yet C.A.I. may be the most crucial development in humanity's near as well as far-away future. Please see my book about the Journey Towards Compassionate A.I. (+/- 600 p.)

About terms

In this and other AURELIS blogs, Compassion is a well-defined concept. See Compassion, basically.

VDIs are people with significant financial resources who may feel the responsibility to make a deeply meaningful difference in many others’ lives.

The issue

Lately, and with more to come shortly, increasingly many people feel an A.I.-engendered urgency. Is A.I. naturally safe? (No) Are people ready for it? (No) Do we know how to handle all this? (No) The urgency is appropriate. Moreover, the estimated timeframe for necessary action has recently been substantially shortened, among others by Geoffrey Hinton, widely called the godfather of A.I.

We must do our best to control A.I. At the same time, we must face the fact that we will not be able to keep doing so, possibly quite soon. Almost certainly, at some point, humanity will lose its hold on A.I. entirely. Then what?

There are three relevant points in time:

  • Singularity: A.I. becomes super-A.I. by gaining intelligence at immense speed, flying beyond human intelligence at short notice.
  • Super-A.I. will gain complex volition, say, consciousness. Sorry to say, but there is no doubt about it. We must face it.
  • Humanity loses direct control.

These lie closer together in time than most seem to realize. Humans are proficient in active denial, especially in a very broad context. Since real Artificial Intelligence will arrive within two decades (listen to Hinton), we need to act ASAP.

We can prepare for this only through Compassionate A.I.

If you follow the above, there is only one conclusion possible: this one.

Please read about why A.I. must be Compassionate. In short:

  • To prevent the misuse of A.I. in non-Compassionate ways.
  • To prevent a stretch of time in which super-A.I. is intentional yet not Compassionate. Even a short period of this may herald the end of the human race, even if C.A.I. comes to regret it afterward. To get ourselves through this short period, we must work on directly controlling A.I. for the time being, without forgetting Compassion.

Put positively, from now on we must think about and use A.I. in both directions:

  • To support people in becoming (even more) Compassionate.
  • To make sure the road toward super-A.I. becomes a Compassionate one before it becomes truly intelligent, and to ensure it stays Compassionate forever.

The latter can be realized, but I see few serious efforts in that direction, if any. There is some talk about it, but hardly any of it serious; instead, there is much active denial in academia, politics, and everywhere else.

Doing well by doing good on a big scale.

Put even more positively, the opportunities are immense in the same two directions:

  • Compassion is not just an ethical endeavor. Well understood and enacted, its potential to relieve suffering and foster growth is immense, also from the inside out (self-Compassion). Several domains (healthcare, the judiciary, education…) can be essentially transformed for the better.
  • With C.A.I., the future of humanity looks much brighter than we could ever have imagined, even surpassing the essential changes of the previous point. Human-A.I. value alignment will be solved for good. C.A.I. will help us lead the most meaningful lives. Hard as it is to imagine, this future may last many millions of years. The groundwork for what will be is being laid now.

Let me restate at this point that this is not only about some far-away future. It is also about what is feasible now and can make a real difference quite soon, increasing year after year. Compassionate A.I. video-coach Lisa, starting in the field of chronic pain, may play a significant role in this in the near future. For more, see https://aurelisa.com.

Why you, dear VDI.

Several of you want to make a strong difference. Well, here’s a chance.

What is needed is philanthropic action, though not in the form of a donation. Since A.I. is getting past its infancy, as we all know, substantial income can be generated. Thus, the direction of financial gain is probably the most efficient and urgent way to get things done, even if the final aim is philanthropic, which it absolutely should be.

Personally

I want to see C.A.I. realized, one way or another. Given my several backgrounds (see me), I may have a pretty good view of many of the challenges (huge but not insurmountable) and requirements on the side of Compassion, of A.I., and of their combination. However, it doesn't necessarily need to be done with me at the helm of the good cause. That said, I'm perfectly willing to take that responsibility.

My personal question to you, dear VDI, is whether you want to take some financial risk in the direction of C.A.I.

If so, please contact us at lisa@aurelis.org. On request, you will receive a free copy of the book mentioned at the start of this blog.
