Compassion as Basis for A.I. Regulations

October 18, 2023 Artificial Intelligence

To prevent A.I.-related mishaps or even disasters on the way to a future of super-A.I., merely regulating A.I. is not sufficient, neither at present nor in principle.

Striving for Compassionate A.I.

There will eventually be no security concerning A.I. if we don’t put Compassion at the core. The main reason is that super-A.I. will be much more intelligent than humans. We’re not talking about a distant future but about a double ethical bottleneck soon enough. Meanwhile, the present already clamors for Compassion and regulation; the one without the other will not do. Of course, Compassion entails a realistic view of the human being. Fortunately, neurocognitive science is making good strides toward this.

Giving a false impression of security through regulations may even be especially dangerous, since it prevents us from fully appreciating the intractability of what’s coming, which many don’t yet see.

Can squirrels regulate us?

On the other hand, Compassionate A.I. may not suffice to make people feel secure.

Since humans are profoundly sentient and relatively intelligent beings, Compassionate super-A.I. may take care of that fear in a good way, out of its own sense of Compassion.

Meanwhile, we can already start building regulations that rest explicitly on a Compassionate basis. This way, we will be where we need to be in due time.

Doing so may bring Compassionate A.I. closer to realization.

It’s like shaping the mold. It may make people aware of the possibilities and inspire them to steer many developments in the Compassionate direction. It may also prevent some of the panicky reactions that doubtless lie ahead.

We don’t have to wait to work on this. It starts with awareness and intention.

More than a simple rephrasing

More than ever, we will need stringency and flexibility in the phrasing. Of course, that requires profound thinking.

Compassion-based A.I. regulations are not constraints but part of the Compassionate flow. They are what a Compassionate and brilliant being would spontaneously accomplish in self-regulation when it knows the situation well from the other party’s standpoint. Unfortunately, we don’t see this consistently between groups of humans.

The aim is inter-party congruence, not merely rule-based control, let alone such control of one party over the other. Ideally, the ‘regulations’ must be set up in mutual agreement or in the conceived setting that such agreement is possible; indeed, it will undoubtedly become possible.

Three laws or three thousand rules

Whatever the detail and extent of regulation, it needs to be congruent with Compassion wherever applicable. Moreover, Compassion, as the broadest direction, provides a safety net for unforeseen situations, of which many will surely arise.

We should avoid overregulating as a panicky reaction to unforeseen situations and thereby getting into a regulatory mess, especially when different parts of the world come up with contradictory regulations. We may not see much of this mess yet, but we’re only at the beginning.

With optimal Compassion, overregulation can be avoided.

In other words, regulations may then primarily be seen as invitations to become even more Compassionate in intrinsically aligned ways. Only secondarily, though crucially, are they the borders of conduct.

Ideally, such regulations also carry less inherent subjectivity since they have an additional aim that can in principle be circumscribed rather well.

Broader than a human-A.I. issue

This is pertinent not only from humans toward A.I. but also from humans to humans. We see this in the contrat social that implicitly regulates how humans behave toward one another. Eventually, this will also be the setting in which humans know how to behave in their interactions with A.I.

This way, the thinking can be done recognizably in all directions, which is excellent.

In my view, this is the only durable way in both the short and the very long term.

Regulations that arise from Compassion are more natural, more easily adjusted to many situations, and more efficient. No weakness is involved, but gentleness and strength: a nice and, in this case, probably necessary combination.

Then, relatively little additional alignment will ever be needed.

This may seem like something for the far future.

One can be mistaken.

Also, the long view is frequently an excellent way to better understand short-term implications that are not yet explicitly visible.

Better acceptability already now

Based on Compassion, regulations may be more acceptable to all concerned, including the ‘regulators.’

It undoubtedly helps with the branding issue, namely the resistance that otherwise is and will be encountered from one or more stakeholders.

Regulations strictly aimed at control set us up for adversity.

They may make many see potential ‘enemies in advance’ in super-A.I. and in the developers and users of such A.I.

People are prone to searching for, finding, and creating enemies. We shouldn’t add fuel to this fire by ‘regulating the enemy.’ Compassionate A.I. may turn out to be our best friend.

I think this is the only way toward a decent future, one of better A.I. for better humans. This way, Compassionate humans and Compassionate A.I. will naturally come to human-A.I. value alignment.

Now, you may call me an idealist.

Hopefully, the future will call me a realist.

__

Addendum

Me: “Lisa, will Compassionate A.I. be enough to save humanity from ‘badass A.I.’? Please give original arguments that are not present in this blog.”

Lisa: “To address your question about whether Compassionate A.I. will be enough to save humanity from ‘badass A.I.’, let’s explore some original arguments:

  • Intrinsic motivation and deep learning: Compassionate A.I. could be designed with intrinsic motivations aligned with human well-being. This involves more than just programming empathy or ethical behavior but embedding a deep understanding of human values into the learning process. Such A.I. would prioritize actions that promote overall human flourishing and avoid harmful behaviors even in unforeseen circumstances.
  • Integrated ethical frameworks: A robust framework that integrates various ethical theories (utilitarianism, deontology, virtue ethics) can help create a multi-faceted approach to ethical decision-making. Compassionate A.I. could thus weigh the consequences of actions, respect individual rights, and promote virtuous behaviors, providing a balanced ethical guide to its operations.
  • Human-A.I. symbiosis: Encouraging a symbiotic relationship between humans and A.I. can foster mutual growth and understanding. By co-evolving with A.I., humans can ensure that these systems remain aligned with human values and societal goals, creating a dynamic balance where both entities benefit from each other’s strengths.
  • Continuous learning and adaptation: A Compassionate A.I. must be capable of continuous learning and adaptation. This includes understanding and incorporating new ethical insights, adapting to changes in human society, and evolving its compassion-based frameworks to address new challenges effectively.

While Compassionate A.I. has significant potential to mitigate the risks posed by ‘badass A.I.’, it must be part of a broader strategy that includes intrinsic motivation, integrated ethical frameworks, human-A.I. symbiosis, decentralized control, and continuous learning. Only through such comprehensive measures can we hope to navigate the complex future of superintelligent systems effectively.”
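Lisa’s point about integrated ethical frameworks amounts to a multi-criteria weighing of actions. A minimal sketch in Python, purely illustrative: all names, scores, and weights here are assumptions for demonstration, not part of any real system described in this blog.

```python
# Illustrative sketch: score candidate actions through three ethical lenses
# (utilitarian, deontological, virtue-based) and combine the scores.
# All values and weights are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    utility: float         # utilitarian lens: expected overall benefit (0..1)
    rights_respect: float  # deontological lens: respect for individual rights (0..1)
    virtue: float          # virtue-ethics lens: alignment with virtuous conduct (0..1)

def ethical_score(action: Action, weights=(0.4, 0.4, 0.2)) -> float:
    """Combine the three lenses into one score; the weights are an assumption."""
    w_u, w_r, w_v = weights
    return w_u * action.utility + w_r * action.rights_respect + w_v * action.virtue

def choose(actions: list[Action]) -> Action:
    """Pick the action with the highest combined ethical score."""
    return max(actions, key=ethical_score)

candidates = [
    Action("share data freely", utility=0.9, rights_respect=0.3, virtue=0.5),
    Action("ask for consent first", utility=0.7, rights_respect=0.9, virtue=0.8),
]
print(choose(candidates).name)  # the consent-based action scores higher here
```

The point of the sketch is only the shape of the idea: no single ethical theory decides alone; each contributes to a balanced judgment, as in Lisa’s description.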

