Compassion as Basis for A.I. Regulations

October 18, 2023 · Artificial Intelligence

To prevent A.I.-related mishaps or even disasters as we head into a future of super-A.I., merely regulating A.I. is not sufficient, now or in principle.

Striving for Compassionate A.I.

There will eventually be no security concerning A.I. if we don’t put Compassion into the core. The main reason is that super-A.I. will be much more intelligent than humans. We’re not talking about a distant future, but about a double ethical bottleneck soon enough. Meanwhile, the present already clamors for Compassion and regulation. The one without the other will not do. Of course, Compassion entails a realistic view of the human being. Fortunately, we are making good strides toward this in neurocognitive science.

Giving a false impression of security through regulations may be especially dangerous, since it prevents us from fully appreciating the intractability of what is coming while many don’t yet see it.

Can squirrels regulate us?

On the other hand, Compassionate A.I. may not suffice to make people feel secure.

Since humans are profoundly sentient and relatively intelligent beings, Compassionate super-A.I. may take care of that fear in a good way, out of its own sense of Compassion.

Meanwhile, we can already start building regulations that are explicitly placed on top of a Compassionate basis. This way, we will be where we need to be in due time.

Doing so may bring Compassionate A.I. closer to realization.

It’s like shaping the mold. It may make people aware of the possibilities and inspire them to attend to the Compassionate direction in many developments. It may also prevent some of the panicky reactions that doubtless lie ahead.

We don’t have to wait to work on this. It starts with awareness and intention.

More than a simple rephrasing

More than ever, we will need stringency and flexibility in the phrasing. Of course, that requires profound thinking.

Compassion-based A.I. regulations are not constraints but part of the Compassionate flow. They are what a Compassionate and brilliant being would spontaneously accomplish in self-regulation if it knew the situation well from the other party’s standpoint. Unfortunately, we don’t see this consistently between groups of humans.

The aim is inter-party congruence, not merely rule-based control, let alone such control of one party over the other. Ideally, the ‘regulations’ should be set up in mutual agreement, or at least in the conceived setting that such agreement is possible; indeed, it will undoubtedly become possible.

Three laws or three thousand rules

Whatever the detail and extent of regulation, it needs to be congruent with Compassion wherever applicable. Moreover, Compassion, as the broadest direction, provides a safety net for unforeseen situations, of which many will surely arise.

We should avoid overregulating as a panicky reaction to unforeseen situations and getting into a regulatory mess this way, especially when different parts of the world come up with contradictory regulations. We may not yet see much of this mess, but we’re only at the beginning.

With optimal Compassion, overregulation can be avoided.

In other words, regulations may then primarily be seen as invitations to become even more Compassionate in intrinsically aligned ways. Only secondarily, though crucially, are they the borders of conduct.

Ideally, such regulations also carry less inherent subjectivity since they have an additional aim that can in principle be circumscribed rather well.

Broader than a human-A.I. issue

This is pertinent not only from humans toward A.I. but also from humans to humans. We see this in the social contract (contrat social) that implicitly regulates how humans behave toward one another. Eventually, this will also be the setting in which humans know how to behave in their interactions with A.I.

This way, the thinking can be done recognizably in all directions, which is excellent.

In my view, this is the only durable way in the short term and in the (very) long term.

Regulations that arise from Compassion are more natural, more easily adjusted to many situations, and more efficient. No weakness is involved, but gentleness and strength: a nice and, in this case, probably necessary combination.

Then, relatively little additional alignment will ever be needed.

This may seem to be something for far into the future.

One can be mistaken.

Also, the long view is frequently an excellent way to better understand short-term implications that are not yet explicitly visible.

Better acceptability already now

Based on Compassion, regulations may be more acceptable to all concerned, including the ‘regulators.’

It undoubtedly helps with the branding issues (say, resistance) that otherwise are and will be encountered from one or more stakeholders.

Regulations strictly aimed at control set us up for adversity.

They may make many see potential ‘enemies in advance’ in super-A.I. and in the developers/users of such A.I.

People are prone to searching for, finding, and creating enemies. We shouldn’t add fuel to this fire by ‘regulating the enemy.’ Compassionate A.I. may turn out to be our best friend.

I think this is the only way toward a decent future, one of better A.I. for better humans. This way, Compassionate humans and Compassionate A.I. will naturally come to human-A.I. value alignment.

Now, you may call me an idealist.

Hopefully, the future will call me a realist.
