Bringing Compassion to the World through A.I.

December 13, 2023 Artificial Intelligence, Empathy - Compassion, Philanthropy

This is the crucial idea behind the philanthropic project of Planetarianism as part of the AURELIS project.

You can find a blog about Planetarianism here and a concrete overview presentation (ppsx for laptop) here. Concretely, it is a set of projects aiming at the goal in this blog's title.

Compassion, basically, is no rosy moonshine.

There are strong traditions and developments of Compassion all over the world, many stretching back a very long time.

In the traditional East, Compassion carries the idea of Enlightenment in action. There is nothing ‘easy’ about it. Yet it is also not conceptually complex — only challenging to grasp. For millennia, some have dedicated years to concentration/meditation, growing in Compassion, then striving for a better world for all.

In the traditional West, Compassion is probably the critical force in mental coaching and psychotherapy, more potent than any conceptual methodology. It accords with the neurocognitive science of how the mind/brain works at the meaningful level of mental-neuronal patterns.

Bringing everything together

So Compassion combines East and West. It combines the past, present, and future. It combines modern science with ancient wisdom.

Not a bit of each, but profoundly, radically, and pragmatically all the way through — toward not just talking about but concretely realizing a better world.

The eventual aim is ambitious and twofold:

  • to bring the world together in global Compassion. That includes at least all humans and the pending new super-intelligence that we nowadays call ‘artificial.’
  • to support anyone in self-Compassion, thereby relieving one’s own suffering, heightening one’s growth, and finding one’s strength in order to meaningfully and effectively bring Compassion to others.

Present-day A.I. makes this doable, scalable, and provable.

This combination is necessary to make the global endeavor a success.

Fortunately, A.I. is reaching a stage at which this becomes possible. This is the first time in the history of this planet that the means for realization exist. We should take heed of this as soon as possible, and with careful thought.

Unfortunately, A.I. can also be abused for non-Compassionate goals.

This makes the world more dangerous than ever before in many ways, ranging from naive misuse to blatant abuse. One may debate which is the more perilous. Developing 'human-centered A.I.' is not enough without profound insight into what 'being human' amounts to. Such insight is still notoriously lacking in many.

Moreover, non-Compassionate A.I. developments may lead to consequences that diminish the users' level of Compassion. One may think of the misuse of social media, the collapse of trust in what is real or fake, the race in autonomous weaponry, the destabilization of society, increased consumerism, fake porn, the further rise of extremism, etc. All these may be unintentionally enabled by basic developments in A.I., such as generative A.I. In turn, they can engender a spiraling of non-Compassion.

A Compassionate choice needs to be made, one that radically determines the future, near and far, for the sake of individuals and humanity — no exaggeration intended. Only the explicit goal of bringing Compassion to the world using A.I. can fulfill this need. It is also a step on the way towards getting Compassion within the A.I. itself, as explained in The Journey Towards Compassionate A.I.

Not a simple project

Actually, we're all in this endeavor together. Planetarianism is a way to accomplish it, building upon a few decades of thinking, writing, developing, and scientific research.

We have the means now and, therefore, also the responsibility.

Of course, this should be an endeavor carried by many.

If you want to cooperate one way or another, please let us know at lisa@aurelis.org.
