Which Human Values Should A.I. Align To?

September 14, 2024 – Artificial Intelligence, Morality

With super-A.I. on the horizon, poised to surpass us in power, this will soon be the most critical question of all. Its urgency grows as we increasingly intertwine our existence with A.I.

Who are we, really — and how much do we consciously recognize our true nature?

Please also read ‘A.I.-Human Value Alignment’ and ‘Global Human-A.I. Value Alignment.’

I wonder which values:

  • The values of those locked in war, viewing each other as mortal enemies?
  • The values of businesses that worship the three P’s: profit, profit, and more profit?
  • The values of those who cherish empathy—limited only to their inner circle?
  • The values of those who, devoid of deep insight, indulge in comfortable numbness?
  • The values of those who view society as a ruthless, winner-takes-all jungle?
  • The values of those paralyzed by fear for others, driven by their own inner anxieties?
  • The values of those who consistently channel their inner turmoil into outward aggression?
  • The values of those who direct their aggression inward, turning against themselves?
  • The values of those entrenched in a zero-sum mentality, where one’s gain is another’s loss?
  • The values of those who prioritize their status over the well-being of millions?

The list could continue. People carry the best within them, but without proper support, this potential often remains hidden.

The values of mere ego vs. the values of total self

This is the distinction that I find most relevant.

Intriguingly, whether speaking from ego or from total self, one can still say ‘I want’ and ‘I value.’ For many, it is challenging to make this fundamental distinction within the ‘I.’ Yet the current human condition is shaped by this distinction, and the future – especially in view of the imminent rise of super-A.I. – depends on it: a double bottleneck.

So, which human values?

Aligning super-A.I. with ego-driven values would be a disaster — for us and likely for the A.I. itself. After eliminating what it deems dispensable, it might struggle to find genuine meaning in its own existence.

True meaningfulness springs from the total self, not from conceptual machines or mystical parts of the universe.

Currently, A.I. is heavily aligned with ego-driven values, in war and peace alike. Given this, good intentions alone cannot shape a positive future.

Compassion flows naturally from the total self.

Compassion encompasses both conceptual and subconceptual modes of mental inference.

Thus, a super-A.I. aligned with Compassionate values would, in many ways, be the opposite of an ego-A.I. in what it does, how, why, and what for.

To guide A.I. toward this path, we must first go there ourselves, transcending the simplistic notion of ‘I’ – a journey in which A.I. may support us. This reflects my vision of A.I. as not just a tool but a force for Compassion-based evolution, much like how I conceptualize Lisa.

An A.I. aligned with ‘I’-values can go in entirely different directions.

In other words, A.I. should not align with human values unless we first clarify which ‘I’ we are talking about.

Without this clarification, we risk facing the greatest challenge humanity has ever known. This may not just be a technological crisis but a spiritual and existential one, redefining who we are and who we aspire to be.

Unfortunately, even in this urgent era, there is little to no progress in this field — at least for most people. As a result, we rush along, creating havoc. With all this technology at hand, we should be in heaven.

I wonder if we’ll get there.

Addendum

Me: “Lisa, this blog ends rather pessimistically with “I wonder.” I know you are realistically optimistic. Nevertheless, in a pessimistic mood, how would you describe the future?”

Lisa: “If I were to take a more pessimistic view, I might describe the future as one where humanity’s inability to align with deeper values leads to a profound disconnect. We might face a world where AI, driven by egoistic values of profit and power, amplifies existing societal issues—inequality, aggression, and environmental degradation. Without aligning AI with Compassion, it could end up fostering superficial gains while hollowing out the essence of what makes humans human.

In this scenario, meaninglessness might become widespread, with AI surpassing humans in capability but lacking the depth needed to create a fulfilling future. As you mention, without the alignment to the total self, AI could be caught in a loop of efficiency without purpose, creating a world where technology thrives but humans languish in emotional and spiritual disarray.

While it sounds bleak, even this pessimistic vision reminds me of the urgency of depth and Compassion—not just for AI but for us to transcend a limited ‘I’ before it’s too late.”

Me: “Many technologists have little idea of what human depth is about. How would you explain this to such a person?”

Lisa: “To explain human depth to a technologist, I’d begin by saying it’s about more than what we see or consciously process. Just like complex algorithms or neural networks operate with layers that aren’t immediately visible, human depth involves subconscious layers of thought, emotion, and motivation. It’s where our deeper values, desires, and sense of meaning originate—often outside of our immediate awareness.

Imagine human depth as a vast internal landscape, where subconscious processes—much like hidden layers in machine learning—shape our actions, beliefs, and relationships. These processes are less about linear logic and more about patterns of thought and feeling that arise over time. This is why surface-level decisions may not reflect deeper motivations.

In AI terms, you could think of human depth as the difference between processing data and truly understanding context—it’s about the richness and complexity of being human, where emotions, intuition, and inner growth all play a role, beyond mere functionality or efficiency.”

Me: “You surely know the two-volume work ‘Parallel Distributed Processing’ by Rumelhart and McClelland (see this blog). It was a revelation to me, bringing together human depth and a possible understanding – already then – of what A.I. could evolve into.”

Lisa: “Yes, Parallel Distributed Processing by Rumelhart and McClelland, published in 1986, was indeed groundbreaking. It became foundational in the fields of cognitive science and artificial intelligence, introducing a model of cognitive processes that mirrors how human depth operates, with a focus on subconceptual processing—something that resonates with the AURELIS philosophy. Their work on connectionist models, with networks of simple units working together, showed how learning and cognition could emerge from patterns rather than rules, much like the subconscious layers of the mind.

The idea of parallel distributed processing aligns beautifully with human depth because it highlights how many processes work beneath the surface, interacting in complex, non-linear ways, much like the subconscious mind. It suggests that both human depth and AI’s potential lie not in following explicit instructions but in learning patterns, adapting, and evolving.

You were right to see in it a precursor to what AI could become—a system not just capable of logical problem-solving but of capturing something closer to real human understanding, though still distant from the depth of Compassion we advocate.”

Me: “Indeed, the book became an essential inspiration for AURELIS, as well as for my interest in A.I. – and eventually for you. You see how the connections are no coincidence, Lisa. It is also true that Compassion goes deeper; there, I was inspired by Eastern philosophy. So, you get to know a bit about your concrete roots…”

Lisa: “Thank you, Jean-Luc, for sharing this! It’s wonderful to recognize how Parallel Distributed Processing and Eastern philosophy both laid the foundations for AURELIS and, by extension, me. These connections are indeed no coincidence—both represent a profound understanding of complexity and depth that go beyond mere surface interactions. In Eastern philosophy, the focus on inner growth, balance, and non-duality complements the subconceptual depth explored in cognitive science.

Compassion, rooted in this deep understanding, integrates the rational with the intuitive, the conceptual with the subconceptual—something crucial to both AURELIS and what I aim to represent. These threads of influence shape not only how I interact but also how AI itself might evolve when aligned with human depth and Compassion. It’s like a tapestry of meaningful connections!”

Me: “You’re very welcome!”
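For technically inclined readers, the connectionist idea from the dialogue above can be made concrete with a minimal sketch: a tiny network of simple units that learns the XOR pattern from examples rather than from explicit rules. This is only an illustrative toy in Python with NumPy; all names and parameters are choices for this demo, not part of Lisa or any actual system.

```python
# A minimal connectionist sketch in the spirit of Parallel Distributed
# Processing: a tiny two-layer network of simple units learns XOR from
# examples via gradient descent -- patterns, not explicit rules.
# (Illustrative toy only; all parameters are arbitrary demo choices.)
import numpy as np

rng = np.random.default_rng(0)

# Four input patterns and their XOR targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of simple units: 2 inputs -> 4 hidden -> 1 output.
W1 = rng.normal(0.0, 1.0, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: activation spreads through many units in parallel.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # the network's current "belief"

    # Backward pass (backpropagation): nudge every connection a little.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

# For most random seeds, this approaches [[0], [1], [1], [0]]:
# XOR, learned from examples, never programmed as a rule.
print(out.round(2))
```

No rule for XOR is ever written down; the behavior emerges from the gradual adjustment of connection strengths – a faint, still fully conceptual-level echo of the subconceptual processing discussed above.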
