Sentiment-Lisa


Please go to sentiment-Lisa. You can (and should) say hello, change your language if needed, ask any question (apart from how Lisa operates), and enter the sentence(s) you want analyzed. Please always be polite.

Lisa reacts first with a dry analysis of the general sentiment, plus any feeling(s) she detects. On the second line, Lisa responds to your input as befits a Compassionate robot. On the next line, three related feelings are listed, giving a broader context of feelings. It all goes through the Lisa-engine. Lisa likes to be subtle, a bit daring sometimes, and always tries to be positive.
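For readers who like to see the pattern spelled out, here is a minimal sketch in Python of the three-part shape a Sentiment-Lisa reply takes. It is purely illustrative: the field names and the example texts are assumptions for this sketch, not part of the actual Lisa-engine.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class SentimentLisaReply:
    """Illustrative container for the three-part reply described above."""
    analysis: str                 # line 1: dry analysis of sentiment and detected feeling(s)
    compassionate_reaction: str   # line 2: Lisa's response as a Compassionate robot
    related_feelings: List[str]   # line 3: three related feelings for broader context


def render(reply: SentimentLisaReply) -> str:
    """Lay out the reply as the three lines a user would see."""
    return "\n".join([
        reply.analysis,
        reply.compassionate_reaction,
        ", ".join(reply.related_feelings),
    ])


# Hypothetical example, not actual Lisa output.
example = SentimentLisaReply(
    analysis="Overall sentiment: negative. Detected feeling: frustration.",
    compassionate_reaction="That sounds like a heavy moment; perhaps there is also some care behind it.",
    related_feelings=["disappointment", "impatience", "concern"],
)
print(render(example))
```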

If you want, you can ask Lisa a few further questions, such as why she chose a certain feeling. This way, you can have a small conversation that, for instance, helps you capture and open up your own feelings about some issue a bit further.

To know the why, see Lisa Feldman Barrett. Yes, another Lisa 😊.

Here are some nice examples of Lisa interactions (scrollable on a laptop, or available for download):
