What Makes Lisa Compassionate?

August 10, 2020

There are two sides to this: the ethical and the technological.

Lisa is an A.I.-driven coaching chatbot. For more: [see: “Lisa”].

Compassionate Artificial Intelligence

In my book The Journey Towards Compassionate A.I.: Who We Are – What A.I. Can Become – Why It Matters, I go deeply into the concepts of Information, Intelligence, Consciousness, and Compassion, showing how each transcends into the next on this ‘journey’ – one that can also be seen unfolding in the history of life on earth.

This journey goes towards A.I., excitingly, dangerously, inevitably. Heading towards autonomous A.I. systems without Compassion is a life-or-death danger for humanity. It’s as simple as that: we’re not going to make it without Compassion. Lisa is the AURELIS answer to this. But how can Compassion be realized in this medium? There are many ways. Within this landscape of possibilities, some essential ingredients are necessary. They are also logical and straightforward.

In alignment with AURELIS ethics

At the top level of AURELIS itself stands Compassion as a two-sided goal: the relief of deep suffering and the enhancement of inner growth. [see: “Two-Sided Compassion”] Within AURELIS, the two are intertwined. One without the other is not complete, not really Compassionate. Note also that both point to human depth.

I write Compassion with a capital mainly because depth is involved: human subconceptual processing. This cannot be attained through purely conceptual thinking. It needs another two-sidedness: conceptual + subconceptual. Or – in the general use of these terms – rationality + depth. Note again: the total person. [see: “AURELIS USP: ‘100% Rationality, 100% Depth’”]

Indeed, Compassion takes into account the total, non-dissociated person. This is who this person is. An individual (un-divided) is more than just any part of that individual.

Note that Compassion is not egoism, nor is it altruism. Every whole person is important in the whole of all persons. Being whole, people can fully attain their Inner Strength in a way that is, at the same time, gentle and strong. Compassion is neither hard nor weak. [see: “Weak, Hard, Strong, Gentle”]

A more concrete layer of AURELIS ethics lies in the five Aurelian values: openness, depth, respect, freedom, trustworthiness. [see: “Five Aurelian Values”] These have been stable within the AURELIS project for many years. I find that if one of them is absent, the others become shaky at best. Taken together, I see Compassion as the only possible consequence of the five.

As said, Lisa is in accord with all this. It is her Compassionate landscape.

Technology

Lisa is an A.I. system, not an organic being. This means that, as high-level as the intentions of her ‘maker’ may be, she still has to realize them in a technologically sound way. Here too, several basic elements are important. I don’t think that any Compassionate system can do without these basics.

At a high level, rules (heuristics) have to be in place that incorporate Compassion insofar as this can be realized conceptually. These rules should also establish the constraints on Lisa’s behavior. They have to be completely human-understandable. These are not 3 or 12 ‘commandments’ or ‘prohibitions.’ The rules at this level embody a striving towards Compassionate behavior. Together, they make such behavior a necessary result.
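
As a minimal sketch of what such a human-understandable rule layer could look like (all rules and names below are illustrative assumptions, not Lisa’s actual implementation):

```python
# A minimal sketch of a human-understandable rule layer.
# All rules and names are illustrative assumptions, not Lisa's actual code.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    description: str                    # fully human-understandable, by design
    holds_for: Callable[[str], bool]    # does a candidate reply satisfy this rule?

RULES: List[Rule] = [
    Rule("Never coerce: avoid imperative pressure on the user.",
         lambda reply: not reply.lower().startswith(("you must", "you have to"))),
    Rule("Invite rather than prescribe: prefer open, questioning formulations.",
         lambda reply: "?" in reply or "perhaps" in reply.lower()),
]

def is_compassionate_enough(candidate_reply: str) -> bool:
    """A candidate reply passes only if every rule holds; together,
    the rules strive towards Compassionate behavior."""
    return all(rule.holds_for(candidate_reply) for rule in RULES)

print(is_compassionate_enough("You must relax now."))                          # False
print(is_compassionate_enough("Perhaps you could explore what relaxes you?"))  # True
```

The point is not these toy rules themselves, but that each rule remains readable, and thus debatable, by a human.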

Parallel Distributed Processing has been a source of inspiration for AURELIS as a project. It is about meaningful content being realized in a physical system not as distinct elements in a repository – like words in a book, or books in a library – but as overlapping distributions in a subconceptual network. The human brain can be seen as such a network. Artificial Neural Networks are also instances of this abstract principle. Of course, there are huge differences between the artificial and the human case, but also quite relevant similarities – relevant enough where Compassion is concerned. In a human being, working with broad mental patterns is also working with human depth. Rash categorizations of mental entities are incompatible with this. In many cases, such categorizations are easy to offer (and monetize) and easy to accept precisely because of their superficiality. No depth, no Compassion.
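
A toy numerical sketch may clarify the contrast (the vectors and ‘concepts’ below are made up purely for illustration):

```python
# A toy contrast between repository-style and PDP-style representation.
# Vectors and concepts are made-up assumptions, purely for illustration.

import numpy as np

# Repository-style: each concept is a distinct, separate entry.
repository = {"sadness": 1, "grief": 2}   # related meanings, yet fully disjoint items

# PDP-style: each concept is a pattern over the *same* units; meanings overlap.
units = 8
rng = np.random.default_rng(0)
shared_core = rng.normal(size=units)
patterns = {
    "sadness": shared_core + 0.3 * rng.normal(size=units),  # overlapping distributions
    "grief":   shared_core + 0.3 * rng.normal(size=units),
    "invoice": rng.normal(size=units),                       # an unrelated pattern
}

def overlap(a: str, b: str) -> float:
    """Cosine similarity: how much two patterns share across the same units."""
    va, vb = patterns[a], patterns[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(round(overlap("sadness", "grief"), 2))    # high: the meanings overlap
print(round(overlap("sadness", "invoice"), 2))  # low: little shared pattern
```

In the repository, ‘sadness’ and ‘grief’ are as unrelated as any two entries. In the distributed version, their kinship lies in the very fabric of the representation.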

There is much technology in ‘autosuggestion.’ Generally, it is hugely underrated by almost anyone who doesn’t delve deeply into it. Within the AURELIS project, autosuggestion (represented by the first two letters of the acronym) has a central place. This is also necessary for full Compassion. Autosuggestion is about providing a direction (goal-oriented, no chaos or anarchy) without imposing any coercion. It is about letting each person grow his own growth – the growth that accords with the whole, undivided person. I hope, dear reader, that you can feel in this the goal of Compassion.

Technologically crucial to the goal of Compassion is the use of continuous Reinforcement Learning together with – preferably deep – neural networks. Only in such a setting can Lisa keep working on user-aligned goals towards Compassion.
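
As a deliberately oversimplified illustration – a bandit-style caricature of reinforcement learning, in which every name, feature, and reward signal below is a made-up assumption – the continuous loop looks like this:

```python
# A deliberately oversimplified, bandit-style caricature of continuous
# reinforcement learning from user feedback. Every name, feature, and reward
# signal is a made-up assumption; a real system would use deep networks.

import numpy as np

rng = np.random.default_rng(1)
n_features = 16
weights = rng.normal(scale=0.1, size=n_features)   # stand-in for a deep network

def features(reply: str) -> np.ndarray:
    """Hypothetical feature extraction, deterministic per reply within one run."""
    local = np.random.default_rng(abs(hash(reply)) % (2**32))
    return local.normal(size=n_features)

def score(reply: str) -> float:
    """How promising does the current model find this candidate reply?"""
    return float(weights @ features(reply))

def learn(reply: str, reward: float, lr: float = 0.05) -> None:
    """Nudge the model towards replies the user found genuinely helpful."""
    global weights
    weights += lr * reward * features(reply)

# The continuous loop: offer the best candidate, observe feedback, keep learning.
candidates = ["Perhaps explore what this feeling is telling you?",
              "Just stop worrying."]
for _ in range(3):
    chosen = max(candidates, key=score)
    user_feedback = 1.0 if "Perhaps" in chosen else -1.0   # stand-in for real feedback
    learn(chosen, user_feedback)
    print(chosen, "->", round(score(chosen), 3))
```

The essential point is the loop itself: the system never stops learning from the user, so its goals can stay aligned with that user’s growth.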

There are many hurdles to overcome in all this.

But, hey, where would the challenge be otherwise?
