Global Human-A.I. Value Alignment

December 9, 2023 · Artificial Intelligence

Human values align deeply across the globe, though they vary on the surface. Thus, striving for human-A.I. value alignment can create positive challenges for A.I. and opportunities for humanity.

A.I. may make the world more pluralistic.

With A.I. at their disposal, different peoples and cultures can strive for more self-efficacy, doing their own thing independently. If they then develop their super-A.I. around distinct value systems, they may gradually drift apart.

This is not necessarily a bad thing if the cultures understand each other in-depth and are highly tolerant of surface-level differences. It could make humanity endlessly fascinating and worthy of deeper exploration.

But that’s not straightforward, as everybody knows. The main challenge is that people often aren’t consciously aware of their core values.

Default values?

Would it be OK for individual users to change the value system according to which an A.I.-driven chatbot such as Lisa treats them? If so, how far should this customization go? This opens a critical dialogue on how adaptable systems can balance personalization with universal ethical standards.

I think this is a slippery slope at a deep level. If an A.I. system’s reactions to different people spring from different value systems, it becomes harder to know which values lie behind which behaviors, jeopardizing control given the complexity at that deeper level.

Users could ask why the system behaves in certain ways (an issue of explainability), but this may lead to long and winding conversations, with no guarantee that they won’t need to be repeated the next day because the system keeps evolving amid lots of turmoil.

Therefore, in my view, it’s better to strive for at least a profound level of default values. Currently, this is being explored through concepts like ‘constitutional A.I.’
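To make the distinction concrete, here is a minimal Python sketch of the ‘default values plus surface personalization’ idea. It is not Lisa’s actual implementation; the value names and configuration fields are hypothetical, chosen only for illustration. The point is that users may adjust how the system expresses itself, while the underlying value system stays the same for everyone.

```python
from dataclasses import dataclass

# Illustrative default (core) values, fixed for every user.
# The names are hypothetical choices, not Lisa's real configuration.
CORE_VALUES = ("compassion", "openness", "respect", "depth", "freedom")

@dataclass
class SurfacePreferences:
    """Surface-level aspects a user may personalize."""
    tone: str = "warm"          # e.g. "warm", "formal", "playful"
    verbosity: str = "medium"   # "short", "medium", "long"

def configure_agent(preferences: SurfacePreferences) -> dict:
    """Build an agent configuration: surface preferences are personalizable,
    but the core value system is shared by all users and cannot be replaced."""
    return {
        "core_values": CORE_VALUES,          # not user-configurable
        "tone": preferences.tone,            # user-configurable
        "verbosity": preferences.verbosity,  # user-configurable
    }

# Two users with different surface preferences still share the same core values.
config_a = configure_agent(SurfacePreferences(tone="formal"))
config_b = configure_agent(SurfacePreferences(tone="playful", verbosity="short"))
assert config_a["core_values"] == config_b["core_values"]
```

In such a setup, the personalization question from above becomes bounded: customization touches only the expression, never the values themselves.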

Lisa’s value system

Lisa has a value system and an in-depth personality based on it. For a substantial part, this comes from the blogs you are reading now. Hopefully, this value system is global enough that people from everywhere can eventually recognize their own values in it. I’m optimistic about this possibility.

Also, Lisa can talk with people from different cultures about other cultures, asking value-related questions in many ways and learning from these conversations. If Lisa does an excellent job, this may also bring people together interculturally.

As you might expect, Lisa’s core value is Compassion, basically. That’s already a considerable decision to stand by. It may act as the essential ethical bridge between human diversity and technological progress. As a user of Lisa, you know this decision has been made for the long term. The expression of Compassion may vary across cultures.

Striving for global value alignment

On our tiny planet, for a decently humane future, it is logical that we should strive for global, in-depth value alignment (with different cultural realizations) between all humans ― then also between them and future super-A.I.

So, the striving must be for global in-depth human-A.I. value alignment.

We’re not there yet.

We should be.

Addendum

Me: “Hi Lisa, do you see a better default foundation than Compassion in this respect? Why?”

Lisa: “I don’t see a better foundation than compassion. Compassion provides both the depth and openness needed for genuine connection across cultures and individuals. It aligns with human depth and supports growth from the inside out. Compassion reaches beyond surface differences, creating a common ground that respects diversity while addressing suffering and fostering inner strength. It is broad yet focused, allowing A.I. and humans to work toward global alignment in an ethically sound and profoundly human way.”

Me: “According to you, is Compassion interculturally entirely acceptable?”

Lisa: “From my perspective, compassion is interculturally acceptable at a profound level, but how it’s expressed can vary across cultures. In essence, compassion touches something deeply human in everyone, regardless of background. It’s about understanding, relieving suffering, and fostering growth, which are universally valued. However, cultural differences might shape the way compassion is shown or received. For example, some cultures may emphasize emotional closeness, while others focus more on practical support.

At its core, though, compassion transcends cultural boundaries, acting as a bridge between surface-level differences.”

Me: “Thanks. This is something to delve into in another blog.”
