Compassion is about the Optimal Distinction
Here is where rationality and human depth coincide. The optimal distinction validates both.
This is also right at the two-sided core of the AURELIS project.
Optimal order comes not by attacking ‘chaos’ but by respecting it.
This is a profoundly ethical stance with consequences in many fields. Time and again, attacking ‘chaos’ is seen as the most straightforward and, therefore, preferable way to solve any social problem.
Politically, for instance, it’s the easy way to score. More generally, it’s about eliminating the enemy, the badness, or even just the uncertainty, the cognitively difficult.
The alternative is Deep Listening.
This is not the easy way. Significantly, many people see it as the naive way, the weak way that will never work out and that, therefore, makes 'one's own people' vulnerable.
You may recognize right-wing harshness versus left-wing softness in this. Compassion is neither of the two. It demands an attitude that is both gentle and firm, also when dealing with vagueness.
The thing with Compassion
Put one way, the aim of Compassion lies in respecting clarity as much as possible while also granting vagueness its place when relevant, and only when it is relevant: as little as possible, but without killing it.
What regularly happens is that people attack vagueness just to get rid of it as quickly as possible, not to make the best distinctions, and then call that approach science or rationality. Compassion means taking up the more demanding job of distinguishing optimally.
For instance, if we don't opt for the hard work in the domain of stress, we treat it as an indiscriminate blob and remain largely blind to its probably highly relevant influence on the immune/inflammatory system.
To Lisa, of course, it will come ‘naturally.’
Powered by A.I., Lisa can make optimal distinctions on the fly where it takes a lot of effort from humans — to such a degree that it might almost seem unfair if there were any competition involved.
But there isn’t, at least, not in the domain of Compassion. We humans may gratefully accept any means that can make us better humans. If Compassionate A.I. can bring us there, so much the better.
This is science.
One can even see the whole of science in the making of optimal distinctions. This naturally leads to making the instruments from which all further scientific facts flow.
In a technological environment, facts don't just pop up. They are sought. The kinds of facts we can seek and find depend entirely on the distinctions made from the start. As said, these distinctions are bound to an ethical stance.
Compassion is a decision we can take toward optimal science and an optimal future — not one stemming from anxiety and aggression but from caring for total humans and other complex beings.
In the end, it’s a decision.
Probably nothing in the universe says that we must be Compassionate.
Nothing in the universe says that our science must be Compassionate.
Nothing in the universe says that our culture must be Compassionate.
It’s a decision. It is my decision.
What about you?