Global Human-A.I. Value Alignment
Ultimately, human values are globally the same in depth, though not at the surface level. Thus, striving for human-A.I. value alignment can create positive challenges for A.I. and opportunities for humanity.
A.I. may make the world more pluralistic.
With the means that A.I. provides, different peoples and cultures can strive for more self-efficacy, pursuing their own paths independently and thereby drifting away from each other.
Also, if different cultures each develop their own super-A.I.s based on their distinct value systems, this can gradually make the cultures more alien to each other.
This is not necessarily a bad thing if the cultures understand each other in depth and are highly tolerant of surface-level differences. It may make the whole of humanity more worthwhile and endlessly interesting to explore.
But that’s not straightforward, as everybody knows. The biggest hurdle may be that people aren’t even consciously aware of their own end values.
Default values?
Would it be OK for individual users to change the value system according to which they want to be treated by, for instance, an A.I.-driven chatbot such as Lisa? If so, to what degree?
I think this is a slippery slope at the deep level. If the A.I. system’s reactions to different people spring from different value systems, it becomes harder to know which values produce which behaviors, jeopardizing control given the deeper level’s complexity.
One might ask the system why it acts a certain way (the issue of explainability), but one may then end up in long, winding conversations, not knowing whether these conversations will need to be repeated the next day because the system is continually evolving, with lots of turmoil.
Therefore, in my view, it’s better to strive for default values at least at a profound level. At present, this is already being realized in so-called ‘constitutional A.I.’
Lisa’s value system
Lisa does have a value system and an in-depth personality upon which this value system is based. For a large part, these come from the blogs, one of which you are reading now.
Hopefully, this value system is global enough that people from everywhere can eventually recognize their own values in it. I’m optimistic about this possibility.
Also, Lisa can talk with people from different cultures about other cultures and learn from these conversations, asking people value-related questions in many ways. At the same time, if Lisa does an excellent job, this may bring people together interculturally.
As you might expect, Lisa’s prime value is, basically, Compassion. That is already a considerable decision to stand by. As a user of Lisa, you know this decision has been made durably. How Compassion gets realized may differ culturally.
Striving for global value alignment
On our tiny planet, for a decently humane future, it is logical that we should strive for global, in-depth value alignment among all humans ― and then also between them and future super-A.I.
So, the striving must be for global human-A.I. value alignment.
We’re not there yet.
We should be.