No, I won’t solve this problem now, especially since that’s currently impossible. Nevertheless, we must make valuable strides. How else will we ever reach human-A.I. value alignment?
One critical stride is the insight into how difficult it is to develop such a declaration ― at least in a sustainable way ― even though the matter is becoming highly urgent given the pending advent of super-A.I. For some deeper thoughts, see the category Morality.
Different from the Universal Declaration of Human Rights
It’s easier to formalize rights than values. Rights are more concrete from the start. Also, rights generally derive from values, which are more fundamental.
Concrete values are based on more profound but less concrete values. Eventually, upon reaching the stage of meaningful values (‘end values’), they seem to have lost every concrete aspect. For instance, is beauty an end value?
Meaningfulness is not concrete, by definition.
Still, we should strive to reach a universal declaration of human values.
This will pose the same problem as a few millennia of Western and Eastern ethical philosophizing.
Within the usual frame, we’re not going to solve it in a few years. Thus, we will be too late to form a universal declaration of human values that we can all agree on well before the advent of super-A.I.
Still, we should strive for it.
Can one strong world government impose a universal declaration of values?
I think we can agree this is impossible. One step in that direction will be followed by two in the opposite direction. No part of the world will gladly accept the values of another as ‘universal,’ imposed through sheer force.
For instance, if the UN tries to impose this, that may be the end of the UN as we know it.
The worst option is thinking that one specific culture is by nature the only superior one.
In that case, several cultures will want their specific values to be considered universal. Since this endeavor goes as deep as it gets, unless we resolve the clash of cultures, it risks unleashing the mother of all wars.
We don’t want that, but avoiding it will not be a piece of cake. The wars we see now – and there are many – are ultimately all about cultural values, with the most dangerous ones being the most value-laden.
Second worst option: thinking that one simple ethical conceptual framework can be universal by nature.
We have millennia of ethical thinking to counter that seemingly straightforward idea. It hasn’t worked out.
Nevertheless, this naïve idea underlies the thinking of many A.I. developers and academic thinkers. Contrary to it, we’re not going to succeed through concepts alone.
Also, no simple ‘ethical system’ exists in the human brain, waiting to be discovered as the final solution. But we can learn very relevant ethical stuff from brainy insights, as we can learn from organic evolution on the planet. Both provide at least a direction and some pertinent questions.
You probably know the AURELIS answer.
It’s the path toward global Compassion. Basically, this is a choice for Compassion as the core of human values. Somewhat more concretely, one take – and only one among several possible – is formed by the Five Aurelian Values.
Even more concretely, a few developments are underway:
- Bottom-up, the AURELIS striving in this respect is Planetarianism.
- Top-down, the AURELIS striving is Open Leadership.
- Individually, the major aim of AURELIS coaching (and Lisa) is to support many people in heightening self-Compassion ― by their own choice, since it cannot and should not be done in any other way.
While the journey may be long, each step is worthwhile.
Are you ready?