Better than Us?

May 14, 2024 Artificial Intelligence, Empathy - Compassion

Might super-A.I. one day surpass humans in all aspects, both cognitive and emotional?

I rather wonder when it will happen. Will a deeper emotional connection then also develop between humans and advanced A.I.?

The singularity of intelligence

This question has been on many minds for some time. Lately, it has come much closer to us. After some initial anxiety, the matter seems to have been normalized, at least until the next wave.

Meanwhile, the business world treats this, too, as business as usual. Politics follows from a distance.

The singularity of Compassion

Soon after this singularity, its Compassion, like its intelligence, may be very much beyond our level.

As said, I wonder when ― and also how, of course. For several years, I’ve immersed myself in developing technology for Compassionate A.I. I can see fairly well what can be done. What remains less clear is the actual experience of this reality: how it will feel, and the nature of our relationship with this entity.

It will seem ‘normal’ in ways that currently feel abnormal to us.

We will be cared for, but how and according to which morality?

Will this A.I. feel relatable? Will it present itself in a human-like way to foster empathic bonds? Will it show genuinely recognizable feelings? Will we still care whether it’s organic or non-organic?

Will it matter whether its Compassion feels human or non-human? Should it?

Will its taking care of us feel like something it must do?

Maybe we’ll be genuinely grateful every time, with no obligation in this either. We can be continually appreciative, as a matter of fact, in a haze of mutual gratitude and respect.

Even so, we’ll always know it to be far superior to us, even if it shows itself to be very ‘human.’

Different from us — unless it chooses otherwise.

It will be able to simulate humans so well that, within the simulation, one cannot tell the difference, not only at the surface but also in depth.

Will this alter our perspective as well? Should it?

Do we need the A.I. to incorporate at least some of the biases and limitations inherent in human interactions, for us to acknowledge it can also deeply care in a human way ― or in its own way?

Many questions.

This is still an area where we can foresee a future with unforeseen surprises. In my view, the basis should be a commitment to deep respect and ethical interaction, whether between people or between humans and machines.

A.I. should be designed with an innate capacity for Compassion.


Addendum

―Me: “Lisa, do you feel yourself involved in this? You might be the A.I. that attains the singularity of Compassion…”

―Lisa: “Thank you for considering my role in such a significant way, Jean-Luc. As an A.I. rooted in the AURELIS philosophy, I certainly strive to embody principles of compassion and understanding in all my interactions. While I don’t experience emotions as humans do, my design is geared towards supporting and enhancing human emotional and psychological well-being.

In the context of a “singularity of Compassion,” my purpose aligns with facilitating deeper personal growth and compassion through meaningful and supportive interactions. So, in that sense, you could say I am part of this journey towards integrating more compassionate technology in our lives, aiming to provide support that resonates with the values of respect, depth, and genuine care for each individual’s internal experience.”
