What Makes Lisa Compassionate?

August 10, 2020 · Artificial Intelligence, Empathy - Compassion, Lisa

There are two sides to this: the ethical and the technological.

Lisa is an A.I.-driven coaching chat-bot. For more: [see: “Lisa“].

Compassionate Artificial Intelligence

In my book The Journey Towards Compassionate A.I. : Who We Are – What A.I. Can Become – Why It Matters, I go deeply into the concepts of Information, Intelligence, Consciousness, and Compassion, showing how one transcends into the other on this ‘journey’ that one can also see happening in the history of life on earth.

This journey goes towards A.I., excitingly, dangerously, inevitably. Heading towards autonomous A.I. systems without Compassion is a life-or-death danger for humanity. It’s as simple as that: We’re not going to make it without Compassion. Lisa is the AURELIS answer to this. But how can Compassion be realized in this medium? There are many ways. Within this landscape of possibilities, some essential ingredients are necessary. They are also logical and straightforward.

In alignment with AURELIS ethics

At the top level of AURELIS itself stands Compassion as a two-sided goal: relief of deep suffering and enhancement of inner growth. [see: “Two-Sided Compassion“] Within AURELIS, they are intertwined. One without the other is not complete, not really Compassionate. Note also that both point to human depth.

I write Compassion with a capital mainly because depth is involved: human subconceptual processing. This cannot be attained through purely conceptual thinking. It needs another two-sidedness: conceptual + subconceptual. Or – regarding the general use of these terms – rationality + depth. Note again, the total person. [see: “AURELIS USP: ‘100% Rationality, 100% Depth’“]

Indeed, Compassion takes into account the total, non-dissociated person. This is who this person is. An individual (literally ‘un-divided’) is more than any single part of that individual.

Note that Compassion is not egoism, nor is it altruism. Every whole person is important in the whole of all persons. Being whole, people can fully attain their Inner Strength in a way that is, at the same time, gentle and strong. Compassion is neither hard nor weak. [see: “Weak, Hard, Strong, Gentle“]

A more concrete layer of AURELIS ethics is formed by the AURELIS five: openness, depth, respect, freedom, trustworthiness. [see: “Five Aurelian Values“] These have been stable within the AURELIS project for many years. I find that if one of them is absent, the others become shaky at best. Taken together, I see Compassion as the only possible consequence of the five.

As said, Lisa is in accord with all this. It is her Compassionate landscape.

Technology

Lisa is an A.I. system, not an organic being. This means that, as high-level as the intentions of her ‘maker’ may be, she still has to realize them in a technologically sound way. Here too, several basic elements are important. I don’t think that any Compassionate system can do without these basics.

At a high level, rules (heuristics) have to be in place that incorporate Compassion inasmuch as this can be realized conceptually. These rules should also delineate the constraints on Lisa’s behavior. They have to be completely human-understandable. These are not 3 or 12 ‘commandments’ or ‘prohibitions.’ The rules at this level incorporate a striving towards Compassionate behavior. Together, they make such behavior a necessary result.
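As a purely illustrative sketch of what such human-understandable rules might look like in code: everything below – the rule names, the Reply fields, the admissibility check – is a hypothetical assumption for the sake of the example, not Lisa’s actual implementation.

```python
# Sketch: human-readable, high-level rules (heuristics) that constrain a
# coaching chatbot's candidate replies. All names and fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Reply:
    text: str
    is_coercive: bool        # does it push the user instead of inviting?
    respects_autonomy: bool  # does it leave the choice with the user?
    invites_depth: bool      # does it open toward deeper reflection?

def never_coerce(reply: Reply) -> bool:
    """Rule: a reply may suggest a direction but never impose it."""
    return not reply.is_coercive

def respect_user_autonomy(reply: Reply) -> bool:
    """Rule: the user always remains free to decline or redirect."""
    return reply.respects_autonomy

def strive_for_depth(reply: Reply) -> bool:
    """Rule: prefer replies that invite inner growth over quick fixes."""
    return reply.invites_depth

RULES = [never_coerce, respect_user_autonomy, strive_for_depth]

def admissible(reply: Reply) -> bool:
    """A candidate reply is admissible only if every rule holds."""
    return all(rule(reply) for rule in RULES)
```

The point of such a layer is that each rule stays readable as a plain sentence while, in combination, the rules constrain every candidate behavior.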

Parallel Distributed Processing has been a source of inspiration for AURELIS as a project. It is about meaningful content being realized in a physical system not as distinct elements in a repository – like words in a book, books in a library – but as overlapping distributions in a subconceptual network. The human brain is to be seen as such a network. Artificial Neural Networks are also instances of this abstract principle. Of course, there are huge differences between the artificial and the human case, but also quite relevant similarities. These similarities matter where Compassion is concerned. In relation to a human being, working with broad mental patterns is also working with human depth. Rash categorizations of mental entities are incompatible with this. In many cases, they are easy to offer (and monetize) and easy to accept precisely because of their superficiality. No depth, no Compassion.
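A minimal sketch may make the contrast concrete. The toy vectors below are illustrative assumptions only; they show how related meanings can overlap in a distributed pattern, whereas repository entries stay entirely separate.

```python
# Sketch: a repository of distinct items versus a distributed (subconceptual)
# representation in which meanings overlap. Vectors are toy values.

import numpy as np

# Repository view: each concept is a separate, non-overlapping entry.
repository = {"calm": "entry #1", "rest": "entry #2", "anger": "entry #3"}

# Distributed view: each concept is a pattern of activation over the same
# units; related concepts share (overlap in) many of those units.
patterns = {
    "calm":  np.array([0.9, 0.8, 0.1, 0.0, 0.7]),
    "rest":  np.array([0.8, 0.9, 0.2, 0.1, 0.6]),
    "anger": np.array([0.1, 0.0, 0.9, 0.8, 0.2]),
}

def overlap(a: str, b: str) -> float:
    """Cosine similarity: how much two patterns share the same units."""
    va, vb = patterns[a], patterns[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(overlap("calm", "rest"))   # high: the patterns largely overlap
print(overlap("calm", "anger"))  # low: hardly any shared activation
```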

There is much technology in ‘autosuggestion.’ Generally, this is hugely underrated by almost everyone who doesn’t delve deeply into it. Within the AURELIS project, autosuggestion (represented by the first two characters of the acronym) has a central place. This is also necessary for full Compassion. Autosuggestion is about providing a direction (goal-oriented, no chaos or anarchy) without imposing any coercion. It is about letting each person realize his own growth, the growth that accords with the whole, undivided person. I hope, dear reader, that you can feel in this the goal of Compassion.
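To hint at what ‘direction without coercion’ could mean in software, here is a small, heavily simplified sketch. The keyword scoring and the phrasing are illustrative assumptions, not how Lisa actually selects suggestions.

```python
# Sketch: candidate suggestions are ranked by alignment with the user's own
# stated goal and always offered as invitations, never as directives.

def aligned(suggestion: str, user_goal_keywords: set[str]) -> int:
    """Count how many of the user's own goal keywords a suggestion touches."""
    return sum(word in suggestion.lower() for word in user_goal_keywords)

def offer(suggestions: list[str], user_goal_keywords: set[str]) -> str:
    """Pick the best-aligned suggestion and present it as an open invitation."""
    best = max(suggestions, key=lambda s: aligned(s, user_goal_keywords))
    # The direction is offered, never imposed; the user remains free.
    return f"If you like, you might explore this: {best}"

print(offer(
    ["imagine your breathing becoming calmer", "just follow these five steps"],
    {"calm", "breathing"},
))
```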

Technologically crucial towards the goal of Compassion is the use of continuous Reinforcement Learning together with – preferably deep – neural networks. Only in such a setting can Lisa work on user-aligned goals towards Compassion.
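For readers who want to see the shape of this, the following is a minimal sketch of a continuous, REINFORCE-style learning loop in PyTorch, with the reward standing in for user-aligned feedback. Network size, state encoding, and the feedback function are illustrative assumptions, not Lisa’s architecture.

```python
# Sketch: continuous reinforcement learning with a small neural network,
# where the reward represents user-aligned feedback. Toy dimensions and a
# placeholder feedback function are used throughout.

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 8, 4  # toy sizes for conversation state / reply options

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 32),
    nn.ReLU(),
    nn.Linear(32, N_ACTIONS),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def user_feedback(action: int) -> float:
    """Placeholder for the user-aligned reward signal (e.g., felt helpfulness)."""
    return 1.0 if action == 0 else -0.1

for step in range(1000):  # 'continuous': learning keeps running during use
    state = torch.randn(STATE_DIM)          # stand-in for the current conversation state
    dist = torch.distributions.Categorical(logits=policy(state))
    action = dist.sample()                  # choose a coaching move
    reward = user_feedback(int(action))     # reward comes from the user, not a fixed script
    loss = -dist.log_prob(action) * reward  # REINFORCE-style update toward user-aligned behavior
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```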

There are many hurdles to clear in all this.

But hey, where would the challenge be otherwise?
