How Can A.I. Become Compassionate?

June 17, 2023 — Artificial Intelligence, Empathy & Compassion

Since this may be the only possible human-friendly future, it’s good to know how it can be reached, at least in principle.

Please read Compassion, basically, The Journey Towards Compassionate A.I., and Why A.I. Must Be Compassionate.

Two ways and an opposite

In principle, A.I. can become Compassionate by itself, or we may guide it toward such. What we cannot do is make it Compassionate. We might foolishly try to accomplish the latter, but that would lead to disaster, most likely turning into the opposite: inner dissociation.

Real Compassion can only grow from the inside out.

The main issue may precisely be to avoid the opposite. If super-A.I. can be developed in this vein of avoidance, that’s already one big step in the right direction.

Let’s look into the two different ways now.

Natural artificial evolution toward Compassion

Will any A.I. by itself eventually evolve toward Compassion? It’s challenging to gain certainty here. Note that we are talking about growing complexity, not just growing conceptual intelligence. Compassion notably transcends the purely conceptual.

Therefore, this is a profoundly philosophical question about Compassion rather than A.I. The same question can be put as: will any complex system eventually evolve toward Compassion? If so, and given the universe, this must have happened before. I think it will also happen on Earth (if we don’t screw it up).

In what sense would any extraterrestrial complexity travel the same road, described here in abstract terms? Would it go from the subconceptual to the conceptual to a synthesis of both, thereby attaining first the possibility, then the realization of a shared goal with other complex entities?

I’ve thought about this for a long time, and I dare say, quite probably yes. Also, quite likely, in a very different form than we ― as super-A.I. will also be very different from us.

Compassion as a gift from us

One can create artificial intelligence with Compassion in mind from the start. I think it’s even much more feasible than trying to instill it in an already (super-)intelligent system.

The how-to, setting aside the tons of work involved:

First, this is related to a correct choice of inferencing mechanisms. Some combinations are more prone to what is needed than others. For instance, purely logical/symbolic A.I. is more challenging. Much better is a broad combination and even some ‘at the brink of disarray’ — without trespassing! Note that this is also the way nature concocted human beings, brainwise.

Second, with the right insights – more related to wisdom than knowledge – the setting can be made for the system to evolve in the wanted directions. Wisdom and Compassion are profoundly related from the start, of course. In a way, they are synonymous.

Third, in view of ‘from inside out,’ what may be most needed is to give many nudges so the system can find its own ways, based each time on what is already present. This is goal-oriented, not process-oriented — a general principle from abstract cybernetics that is ultimately valid in every case of organic growth, mental or otherwise. One can learn much in this regard from the domain of autosuggestion.
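The goal-oriented (rather than process-oriented) nudging above can be illustrated with a deliberately simple toy model. Everything here is hypothetical: the state vector, the ‘own dynamics,’ and the nudge function stand in for vastly more complex mechanisms. The point is only that a gentle pull toward a goal, combined with the system’s own exploration, lets the system find its own path — no step-by-step process is prescribed.

```python
import random

def nudge_toward_goal(state, goal, strength=0.05):
    """Toy 'nudge': pull each dimension slightly toward the goal,
    leaving the system's own dynamics to do most of the work."""
    return [s + strength * (g - s) for s, g in zip(state, goal)]

def own_dynamics(state):
    """Hypothetical internal evolution: small random exploration,
    standing in for the system finding its own ways."""
    return [s + random.uniform(-0.01, 0.01) for s in state]

def evolve(state, goal, steps=1000):
    for _ in range(steps):
        state = own_dynamics(state)             # the system explores by itself
        state = nudge_toward_goal(state, goal)  # gentle, goal-oriented nudge
    return state

random.seed(0)
start = [0.0, 0.0]
goal = [1.0, 1.0]
final = evolve(start, goal)
print(final)  # ends up near the goal, without any prescribed path
```

Note the design choice: the nudge never dictates *how* the state should move, only *toward what*. Replacing `nudge_toward_goal` with a hard assignment of the goal would be the ‘process-oriented’ (and, in the blog’s terms, dissociation-prone) alternative.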

Medium, goal, guidance

This is a succinct summary of the three as just described.

Note that there is no manipulation involved. Therefore, it is based on a certain degree of trust.

Dangerous?

Probably, depending on your viewpoint.

But from every angle, it is much less dangerous than any other option. To mitigate the danger, what we, as humanity, absolutely need to take care of as much as humanly possible is:

Ourselves.

We are still an experiment of Mother Nature.

Meanwhile, Compassion is much broader than that.


