OK. Lisa works. But?

December 5, 2023

Soon, Lisa will see the light of day. There are many indications that it indeed works. Our insight is that this comes from deeper self-communication within the user.

Not everybody will be delighted. Therefore, we can expect several ‘buts.’

But 1: It is not possible that something like Lisa works on human beings.

“We need humans (coaches, therapists) for that.”

I’m sorry, but we don’t. There will definitely be a call for more humility.

Theoretically, the person who ‘works’ is the client/user — especially within the AURELIS philosophy. This is the same person, whoever does the coaching. The coach is just an enabler: he doesn’t ‘do’ anything but enable the growth to happen inside the coachee. Psychotherapy frequently starts from another viewpoint, which is why it doesn’t work methodologically.

In addition, we will strive for the needed number-crunching and scientific proof. In any case, Lisa’s effectiveness is provable, even without much additional effort. The science can be integrated within the coaching itself.

But 2: Is the science correct?

We’re not fraudulent by nature. Besides, that would be the silliest thing to do since any fraud would come out soon.

We do our best to have excellent and repeatable science. Actually, we can soon provide much better and more robust science than practically the entire pharmaceutical sector, and phenomenally better science than the whole field of psychotherapy. Moreover, the science will always remain backed up by the most prestigious scientific centers on the planet.

We, too, absolutely want correct science because we base further Lisa developments on it. Lisa grows through science.

But 3: Is this not merely a placebo or Hawthorne effect?

OK, everything can have a placebo or placebo-like effect. A notorious example is antidepressants. Do they have anything but?

With Lisa, we won’t straightforwardly do double-blind studies (because that is not possible). There is no placebo group to compare with ― therefore, there is also no potential breach of blinding. Instead, our science will follow people in their evolution while they are being Lisa-coached. We do that in many different circumstances and over extended periods. Moreover, we will continually keep doing the number work on new data — indefinitely. If the effects are congruent overall, we can deduce that Lisa works robustly and sustainably. That way, we transcend the placebo effect not only theoretically but also practically.
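To picture this ‘number work’ in a toy way: imagine recomputing a simple effect measure per circumstance each time new follow-up data arrives, then checking whether all circumstances point in the same direction. The small Python sketch below only illustrates that congruence idea; the cohort names and scores are entirely hypothetical, and this is not the actual AURELIS analysis.

from statistics import mean

# Hypothetical pre/post well-being scores per coachee, grouped by circumstance.
# All names and numbers are invented for illustration only.
cohorts = {
    "workplace stress": [(4.1, 6.3), (3.8, 5.9), (4.5, 6.8), (5.0, 6.5)],
    "chronic pain": [(3.2, 4.9), (2.9, 4.4), (3.6, 5.1), (3.1, 4.7)],
    "smoking cessation": [(4.8, 6.0), (4.2, 5.8), (4.6, 6.4), (4.9, 6.1)],
}

def average_improvement(pairs):
    """Mean score change from start of coaching to follow-up."""
    return mean(post - pre for pre, post in pairs)

# Recompute per circumstance; in practice, rerun whenever new data arrives.
effects = {name: average_improvement(pairs) for name, pairs in cohorts.items()}
for name, change in effects.items():
    print(f"{name}: average improvement = {change:.2f}")

# Crude congruence check: every circumstance improves, and the effects stay
# within a factor of two of each other.
values = list(effects.values())
congruent = all(v > 0 for v in values) and max(values) <= 2 * min(values)
print("Effects congruent overall:", congruent)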

If a placebo effect were somehow still involved, it would be so robust that it would matter little whether or not it is a placebo.

I suspect this will bring a new dawn in healthcare.

It has been (very much) too long in the making — mainly because, in principle, no A.I. is needed. With human means and, indeed, a huge amount of effort, we could have made much progress centuries or even millennia ago — at least theoretically.

Now, Lisa only makes it practically doable, scalable, and provable. We’re in for a Lisa revolution.

I wonder how heavily the ethical side of this discrepancy will weigh.

I particularly hold present-day humanism responsible for this. Primarily in that field, we should have known this long ago.
