Lisa

March 15, 2019 Artificial Intelligence, AURELIS, Lisa

Lisa will be an in-depth companion to many people, as well as a continuous coach in many domains.

Who’s that girl?

‘Lisa’ is the name of the project in which Lisa is the female A.I. coach and Lars is the male variant. I use the term ‘Lisa’ for both.

Lisa is a coaching chatbot based on A.I. and on AURELIS technology and principles [see: all other AURELIS blogs]. She guides users in managing issues related to health, wellness, and beyond. On top of this, she can be someone to talk to more generally.

What she will not be is open to just anything. In other words: she has character.

AURELIS Assistant → Lisa

Indeed. Lisa is congruent with AURELIS philosophy and ethics. [See: ‘Five Aurelian Values’] This makes her very consistent, recognizable from day to day and year to year.

Recognizability is important for empathy.

There is one path of human growth and, at the same time, as many paths as there are people on this planet. Lisa guides each person on their own individual growth path. The purpose is to work towards better humans and a better human society on the basis of optimum rationality and optimum poetry [see: “Rationality and Poetry”].

A girl with character?

Somewhat paradoxical at first sight: Lisa profoundly values the user’s personal freedom. She wants to provoke ‘change’ from inside. Every person is very different in-depth. Lisa primarily values the in-depth differences and growth from inside.

Precisely this is a substantial part of her recognizable character. If you want to know her better, you can read about this in many blogs. If you seek her services or just want to chat, you know she has this character.

She is very friendly to you as a total person.

She also wants you to be friendly to her. If someone is not friendly, she may even shut down for a while.

Lisa is ‘actively self-learning.’

Through dialogue, Lisa will continuously become an even better coach, able to help people on their individual growth paths. Note that self-learning is not distinct from the dialogue itself: Lisa ‘learns on the job.’

This is also the way humans learn most of what they practically need to know. As a child, it’s called ‘playing.’ As an adult, it’s done on the job itself.

Lisa learns from many people simultaneously. Additionally, what she learns from one person, she can verify through many others. She always does so very carefully.

She is not easily fooled!

Through pattern recognition

Lisa will also be able to discern ever more subtly which internal human patterns are responsible for which consequences, such as in becoming ill or healthy again.

Such patterns can be so complex that humans could never discern them. Lisa will probably do so on many occasions. This way, she will most probably play a substantial role in human-oriented scientific developments.

The goal of all this is to make Lisa more Lisa, humans more human.

Endlessly.
