What Makes Lisa Compassionate?

August 10, 2020 · Artificial Intelligence, Empathy – Compassion, Lisa

There are two sides to this: the ethical and the technological.

Lisa is an A.I.-driven coaching chatbot. For more: [see: “Lisa”].

Compassionate Artificial Intelligence

In my book The Journey Towards Compassionate A.I. : Who We Are – What A.I. Can Become – Why It Matters, I go deeply into the concepts of Information, Intelligence, Consciousness, and Compassion, showing how one transcends into the other on this ‘journey’ that one can also see happening in the history of life on earth.

This journey goes towards A.I., excitingly, dangerously, inevitably. Heading towards autonomous A.I. systems without Compassion is a life-or-death danger for humanity. It’s as simple as that: We’re not going to make it without Compassion. Lisa is the AURELIS answer to this. But how can Compassion be realized in this medium? There are many ways. Within this landscape of possibilities, some essential ingredients are necessary. They are also logical and straightforward.

In alignment with AURELIS ethics

At the top level of AURELIS itself stands Compassion as a two-sided goal: the relief of deep suffering and the enhancement of inner growth. [see: “Two-Sided Compassion”] Within AURELIS, these two sides are intertwined. One without the other is not complete, not really Compassionate. Note also that both point to human depth.

I write Compassion with a capital mainly because depth is involved: human subconceptual processing. This cannot be attained through purely conceptual thinking. It needs another two-sidedness: conceptual + subconceptual. Or – regarding the general use of these terms – rationality + depth. Note again, the total person. [see: “AURELIS USP: ‘100% Rationality, 100% Depth’”]

Indeed, Compassion takes into account the total, non-dissociated person. This is who this person is. An individual (literally: un-divided) is more than any single part of that individual.

Note that Compassion is not egoism, nor is it altruism. Every whole person is important in the whole of all persons. Being whole, people can fully attain their Inner Strength in a way that is, at the same time, gentle and strong. Compassion is not hard nor weak. [see: “Weak, Hard, Strong, Gentle“]

A more concrete layer of AURELIS ethics is provided by the AURELIS five: openness, depth, respect, freedom, trustworthiness. [see: “Five Aurelian Values”] These have been stable within the AURELIS project for many years. I find that if one of them is absent, the others become shaky at best. Brought together, I see only Compassion as the possible consequence of the five.

As said, Lisa is in accord with all this. It is her Compassionate landscape.


Realized in a technologically sound way

Lisa is an A.I. system, not an organic being. This means that, as high-level as the intentions of her ‘maker’ may be, she still has to realize them in a technologically sound way. Here too, several basic elements are important. I don’t think that any Compassionate system can do without these basics.

At a high level, rules (heuristics) have to be in place that incorporate Compassion inasmuch as this can be realized conceptually. These rules should also delineate the constraints of Lisa’s behavior. They have to be completely human-understandable. These are not 3 or 12 ‘commandments’ or ‘prohibitions.’ The rules at this level incorporate a striving towards Compassionate behavior. Together, they have this as a necessary result.
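As a hypothetical sketch of such human-understandable rules (the rule names and checks below are invented for illustration; they are not Lisa’s actual rule set), one might screen a candidate response against a small set of readable heuristics:

```python
# Illustrative only: each rule is a human-readable name plus a check
# that returns True when the candidate response respects the rule.
RULES = [
    ("no_coercion", lambda t: "you must" not in t.lower()),   # direction, not imposition
    ("no_labeling", lambda t: "you are a" not in t.lower()),  # no rash categorization
    ("stays_open",  lambda t: not t.rstrip().endswith("!")),  # gentle, not pushy
]

def violated_rules(text):
    """Return the names of all rules the candidate response violates."""
    return [name for name, check in RULES if not check(text)]

print(violated_rules("You must relax now!"))
# -> ['no_coercion', 'stays_open']
```

The point is not these particular checks but the shape: every constraint stays legible to a human reader, and together the rules steer behavior rather than merely forbidding it.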

Parallel Distributed Processing has been a source of inspiration for AURELIS as a project. It is about meaningful content being realized in a physical system not as distinct elements in a repository – like words in a book, or books in a library – but as overlapping distributions in a subconceptual network. The human brain can be seen as such a network. Artificial Neural Networks are also instances of this abstract principle. Of course, there are huge differences between the artificial and the human case, but also quite relevant similarities – relevant enough where Compassion is concerned.

Related to a human being, working with broad mental patterns is also working with human depth. Rash categorizations of mental entities are incompatible with this. In many cases, such categorizations are easy to offer (and monetize) and easy to accept precisely because of their superficiality. No depth, no Compassion.
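A toy illustration of overlapping distributions (the feature values below are invented, not taken from any actual system): concepts can be encoded as activation patterns over shared subconceptual features, so that similarity becomes a graded overlap rather than a yes/no match between distinct symbols.

```python
import math

# Each 'concept' is a distribution over the same subconceptual features,
# not a distinct entry in a repository. Values are invented for illustration.
patterns = {
    "rest":     [0.9, 0.7, 0.1, 0.0],
    "calm":     [0.8, 0.9, 0.2, 0.1],
    "deadline": [0.1, 0.0, 0.9, 0.8],
}

def overlap(a, b):
    """Cosine similarity: graded overlap between two activation patterns."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(overlap(patterns["rest"], patterns["calm"]))      # high: patterns overlap
print(overlap(patterns["rest"], patterns["deadline"]))  # low: little overlap
```

In a discrete repository, “rest” and “calm” would simply be two different entries; in a distributed encoding, their kinship is present in the representation itself.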

There is much technology in ‘autosuggestion.’ Generally, this is hugely underrated by almost everyone who doesn’t delve deeply into it. Within the AURELIS project, autosuggestion (represented by the first two characters of the acronym) has a central place. This is also necessary for full Compassion. Autosuggestion is about providing a direction (goal-oriented, no chaos or anarchy) without imposing any coercion. It is about letting each person pursue their own growth, the growth that accords with the whole, undivided person. I hope, dear reader, that you can feel in this the goal of Compassion.

Technologically crucial towards the goal of Compassion is the use of continuous Reinforcement Learning together with – preferably deep – neural networks. Only in such a setting can Lisa work on user-aligned goals towards Compassion.
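The reinforcement-learning principle can be hinted at with a minimal tabular Q-learning toy. Everything here – states, actions, dynamics, rewards – is invented for illustration; an actual system would use deep networks and a far richer, user-aligned reward setting.

```python
import random

states = ["tense", "neutral", "relaxed"]
actions = ["suggest_pause", "ask_question"]

def step(state, action):
    """Invented toy dynamics: pausing tends to move the user towards 'relaxed'."""
    if action == "suggest_pause":
        next_state = {"tense": "neutral", "neutral": "relaxed", "relaxed": "relaxed"}[state]
    else:
        next_state = state
    reward = 1.0 if next_state == "relaxed" else 0.0  # stands in for a user-aligned goal
    return next_state, reward

Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
for _ in range(500):
    s = random.choice(states)
    if random.random() < epsilon:
        a = random.choice(actions)                       # explore
    else:
        a = max(actions, key=lambda x: Q[(s, x)])        # exploit
    s2, r = step(s, a)
    # Standard Q-learning update towards reward plus discounted future value.
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])

print(max(actions, key=lambda x: Q[("tense", x)]))  # learned preference in state 'tense'
```

The continuous aspect matters: learning keeps running alongside interaction, so the system’s behavior keeps adapting towards the user’s goals rather than being fixed once and for all.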

There are many hurdles to clear in all this.

But, hey, where would the challenge be otherwise?


