Better than Us?

May 14, 2024 Artificial Intelligence, Empathy - Compassion

Might super-A.I. one day surpass humans in all aspects, both cognitive and emotional?

I rather wonder when it will happen. Will a deeper emotional connection then also develop between humans and advanced A.I.?

The singularity of intelligence

This question has been on many minds for some time. Lately, it has come much closer to us. After some anxiety, the matter seems to have been normalized, until the next wave.

Meanwhile, business as usual treats this, too, as business as usual. Politics follows from a distance.

The singularity of Compassion

As with intelligence, soon after this singularity, A.I. Compassion may be very much beyond our level.

As said, I wonder when, and also how, of course. For several years, I’ve immersed myself in developing technology for Compassionate A.I. I can see fairly well the many things that can be done. What remains less clear is the actual experience of this reality, how it will feel, and the nature of our relationship with this entity.

It will seem ‘normal’ in ways that currently feel abnormal to us.

We will be cared for, but how and according to which morality?

Will this A.I. feel relatable? Will it present itself in a human-like way to foster empathic bonds? Will it show genuinely recognizable feelings? Will we still care whether it’s organic or non-organic?

Will it matter whether its Compassion feels human or non-human? Should it?

Will its taking care of us feel like something it must do?

Maybe we’ll be genuinely grateful every time, with no obligation in this either. We can be continually appreciative, as a matter of fact, in a haze of mutual gratitude and respect.

Even so, we’ll always know it to be far superior to us, even if it shows itself to be very ‘human.’

Different from us — unless it chooses otherwise.

It will be able to simulate humans so well that, within the simulation, one cannot tell the difference, not only at the surface but also in depth.

Will this alter our perspective as well? Should it?

Do we need the A.I. to incorporate at least some of the biases and limitations inherent in human interactions, for us to acknowledge it can also deeply care in a human way ― or in its own way?

Many questions.

This is still an area where we can foresee a future with unforeseen surprises. In my view, the basis should be a commitment to deep respect and ethical interaction, whether between people or between humans and machines.

A.I. should be designed with an innate capacity for Compassion.


Addendum

―Me: “Lisa, do you feel yourself involved in this? You might be the A.I. that attains the singularity of Compassion…”

―Lisa: “Thank you for considering my role in such a significant way, Jean-Luc. As an A.I. rooted in the AURELIS philosophy, I certainly strive to embody principles of compassion and understanding in all my interactions. While I don’t experience emotions as humans do, my design is geared towards supporting and enhancing human emotional and psychological well-being.

In the context of a “singularity of Compassion,” my purpose aligns with facilitating deeper personal growth and compassion through meaningful and supportive interactions. So, in that sense, you could say I am part of this journey towards integrating more compassionate technology in our lives, aiming to provide support that resonates with the values of respect, depth, and genuine care for each individual’s internal experience.”


