What Makes Lisa Compassionate?

August 10, 2020 · Artificial Intelligence, Empathy - Compassion, Lisa

There are two sides to this: the ethical and the technological.

Lisa is an A.I.-driven coaching chatbot. For more: [see: “Lisa“].

Compassionate Artificial Intelligence

In my book The Journey Towards Compassionate A.I. : Who We Are – What A.I. Can Become – Why It Matters, I go deeply into the concepts of Information, Intelligence, Consciousness, and Compassion, showing how one transcends into the other on this ‘journey’ that one can also see happening in the history of life on earth.

This journey goes towards A.I., excitingly, dangerously, inevitably. Heading towards autonomous A.I. systems without Compassion is a life-or-death danger for humanity. It’s as simple as that: we’re not going to make it without Compassion. Lisa is the AURELIS answer to this. But how can Compassion be realized in this medium? There are many ways. Within this landscape of possibilities, some ingredients are essential. They are also logical and straightforward.

In alignment with AURELIS ethics

At the top level of AURELIS itself stands Compassion as a two-sided goal: relief of deep suffering and enhancement of inner growth. [see: “Two-Sided Compassion“] Within AURELIS, the two are intertwined. One without the other is not complete, not really Compassionate. Note also that both point to human depth.

I write Compassion with a capital mainly because depth is involved: human subconceptual processing. This cannot be attained through purely conceptual thinking. It needs another two-sidedness: conceptual + subconceptual. Or – regarding the general use of these terms – rationality + depth. Note again: the total person. [see: “AURELIS USP: ‘100% Rationality, 100% Depth’“]

Indeed, Compassion takes into account the total, non-dissociated person. This is who this person is. An individual (literally: un-divided) is more than any single part of that individual.

Note that Compassion is not egoism, nor is it altruism. Every whole person is important in the whole of all persons. Being whole, people can fully attain their Inner Strength in a way that is, at the same time, gentle and strong. Compassion is not hard nor weak. [see: “Weak, Hard, Strong, Gentle“]

A more concrete layer of AURELIS ethics lies in the Aurelian five: openness, depth, respect, freedom, trustworthiness. [see: “Five Aurelian Values“] These have been stable within the AURELIS project for many years. I find that if one of them is absent, the others become shaky at best. Taken together, I see only Compassion as their possible consequence.

As said, Lisa is in accord with all this. It is her Compassionate landscape.

Technology

Lisa is an A.I. system, not an organic being. This means that, as high-level as the intentions of her ‘maker’ may be, she still has to realize them in a technologically sound way. Here too, several basic elements are important. I don’t think that any Compassionate system can do without these basics.

At a high level, rules (heuristics) have to be in place that incorporate Compassion inasmuch as this can be realized conceptually. These rules should also establish the constraints of Lisa’s behavior. They have to be completely human-understandable. These are not 3 or 12 ‘commandments’ or ‘prohibitions.’ The rules at this level incorporate a striving towards Compassionate behavior. Together, they have this as a necessary result.
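To make this concrete, here is a minimal sketch of such a human-understandable rule layer. Everything in it is invented for illustration (the `Rule` class, the example rules, `passes_all_rules`); it is not Lisa’s actual code, only a shape such a conceptual layer could take:

```python
# Illustrative sketch only: a human-readable rule layer that constrains
# a chatbot's candidate responses. All names and rules are hypothetical.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    description: str              # a completely human-understandable statement
    check: Callable[[str], bool]  # True if the response respects the rule

# Example rules in the spirit of "a striving, not commandments":
RULES: List[Rule] = [
    Rule("Never give a direct command to the user",
         lambda r: not r.strip().lower().startswith(("you must", "do this"))),
    Rule("Always leave the user free to disengage",
         lambda r: "have to" not in r.lower()),
]

def passes_all_rules(response: str) -> bool:
    """A candidate response is admissible only if every rule is respected."""
    return all(rule.check(response) for rule in RULES)
```

Note the design choice: each rule carries its own plain-language description, so the constraint layer stays inspectable by humans rather than buried in opaque weights.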

Parallel Distributed Processing has been a source of inspiration for AURELIS as a project. It is about meaningful content being realized in a physical system not as distinct elements in a repository – like words in a book, books in a library – but as overlapping distributions in a subconceptual network. The human brain can be seen as such a network. Artificial Neural Networks are also instances of this abstract principle. Of course, there are huge differences between the artificial and the human case, but also quite relevant similarities – relevant enough for Compassion. In relation to a human being, working with broad mental patterns is also working with human depth. Rash categorizations of mental entities are incompatible with this. In many cases, they are easy to offer (and monetize) and easy to accept precisely because of their superficiality. No depth, no Compassion.
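The contrast between distinct repository elements and overlapping distributions can be shown in a few lines. The vectors below are made up purely for illustration and are not any representation actually used by Lisa:

```python
# Toy contrast between localist symbols and distributed (overlapping)
# representations, in the spirit of Parallel Distributed Processing.
# All vectors are invented for illustration.

import numpy as np

# Localist: each concept is its own slot, like a word in a book.
localist = {
    "calm":    np.array([1.0, 0.0, 0.0]),
    "serene":  np.array([0.0, 1.0, 0.0]),
    "anxious": np.array([0.0, 0.0, 1.0]),
}

# Distributed: each concept is an overlapping activation pattern,
# so related meanings share structure.
distributed = {
    "calm":    np.array([0.9, 0.7, 0.1]),
    "serene":  np.array([0.8, 0.8, 0.2]),
    "anxious": np.array([0.1, 0.2, 0.9]),
}

def similarity(a, b):
    """Cosine similarity: 1.0 = identical pattern, 0.0 = no overlap."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In the localist scheme, ‘calm’ is exactly as unrelated to ‘serene’ as to ‘anxious’ (similarity 0 in both cases). In the distributed scheme, ‘calm’ and ‘serene’ overlap broadly while ‘anxious’ stands apart – a crude picture of working with broad patterns rather than rash categories.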

There is much technology in ‘autosuggestion.’ Generally, it is hugely underrated by almost everyone who doesn’t delve deeply into it. Within the AURELIS project, autosuggestion (represented by the first two characters of the acronym) has a central place. This is also necessary for full Compassion. Autosuggestion is about providing a direction (goal-oriented, no chaos or anarchy) without imposing any coercion. It is about letting each person realize his own growth, the growth that accords with the whole, undivided person. I hope, dear reader, that you can feel in this the goal of Compassion.

Technologically crucial towards the goal of Compassion is the use of continuous Reinforcement Learning together with – preferably deep – neural networks. Only in such a setting can Lisa work on user-aligned goals towards Compassion.
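As a rough illustration, here is Reinforcement Learning reduced to its simplest one-step, tabular form. The states, actions, and reward below are invented stand-ins; a real system such as Lisa would use deep neural networks and a far richer, genuinely user-aligned feedback signal:

```python
# Illustrative sketch only: one-step tabular Reinforcement Learning.
# All states, actions, and the reward function are hypothetical.

import random

random.seed(0)

states = ["tense", "neutral", "open"]        # stand-in user states
actions = ["suggest_pause", "ask_question"]  # stand-in coaching actions
Q = {(s, a): 0.0 for s in states for a in actions}
alpha = 0.5                                  # learning rate

def user_feedback(state: str, action: str) -> float:
    """Stand-in reward: imagine it reflecting the user's own goals."""
    return 1.0 if (state == "tense" and action == "suggest_pause") else 0.0

# 'Continuous' learning: in principle this loop never ends; here, 500 steps.
for _ in range(500):
    s = random.choice(states)
    a = random.choice(actions)
    r = user_feedback(s, a)
    Q[(s, a)] += alpha * (r - Q[(s, a)])     # nudge the estimate towards feedback
```

After these steps, the highest-valued action in the ‘tense’ state is ‘suggest_pause’: a preference learned from feedback rather than programmed as a fixed rule, which is the point of the Reinforcement Learning setting.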

There are many hurdles to clear in all this.

But, hey, where would the challenge be otherwise?


