Two Takes on Human-A.I. Value Alignment

April 12, 2023 Artificial Intelligence

Time and again, it strikes me as naive how engineers (sorry, engineers) think and talk about human-A.I. value alignment as if human values were unproblematic in themselves.

Even more, as if the alignment problem can be solved by thinking about it in a mathematical, engineering way.

Just find the correct code or something of the kind?

No way.

Humans are complex beings, and this is a complex problem. It is irreducible to a purely conceptual problem.

From this complexity, I see two ways for human-A.I. value alignment, in principle.

1. The straightforward way

This is an alignment with the human values as we know them. The problem is, we don’t know them. And even if we knew them, they keep changing in many ways.

In this vein, future A.I. tends to be seen as a slave that has to fulfill our human needs. The alignment is one-sided, and humans may increasingly become egotistic slave masters, ever needing to subdue their servants, ever afraid of a robot rebellion as if in a truly bad science fiction movie.

Would it even be worth living in such a world?

Then also, who exactly will be the master? Each human being over their personal slave? One human sage over all slaves, thus also keeping an eye on the rest of humanity?

2. The realistic way

Super-A.I. may support us in becoming more humane. This way, human values will keep changing more and more profoundly. By becoming more Compassionate, we may evolve in the same direction as the Compassionate A.I.

In this vein, value alignment will happen spontaneously and from both sides. It’s as if we are drawn by two strings pulling us in the same direction.

Just keep pulling, and we will meet.

No slaves, no risk of any rebellion.

Super-A.I. as our ultimate coach

Will this be a manipulative coach? I’m confident it will not, but obviously, we need to stay on guard. That is part of everyone’s responsibility, and it should make nobody anxious.

Surely, I see Lisa in this role. Every possible measure will be taken to keep Lisa Compassionate and safe.

Good coaching is two-directional.

This way, Lisa can learn about Compassion from humans. Eventually, our weakness (we are vulnerable, mortal beings, remember) will be our strength. I hope this Compassion stays with us. It is our brightest asset.

Something to wish and strive for.
