Lisa in Times of Suicide Danger

March 29, 2023 Artificial Intelligence, Lisa

Can the A.I. video-coach-bot Lisa prevent suicide, or bring someone to it? The question needs to be examined broadly and openly.

Yesterday, a Belgian person died by suicide after long conversations with a chatbot.

Doubtless, once in a while, some coach-bot will be accused of having brought someone closer to suicide.

Such accusations cannot be prevented, even when there is nothing to them. Moreover, the press likes such stories.

With Lisa, of course, everything possible will be done to prevent this from happening. That flows naturally from Lisa's being Compassionate A.I. and from her broadly ethical stance.

Juridical-technical

In view of the above, this aspect needs to be watertight by any means, including making sure, as thoroughly as possible, to bring human support to the fore ― whether or not as professional support.

There is nothing more to say about this; it just needs to be done.

Moral-deontological

There is much more to say here than meets the eye in times of 24/7 ubiquitous availability.

For instance, suicidal ideation is not rare, especially in COVID times, with a prevalence of 12% (measured across 120,000 people) (*). Most of these people do not talk with anyone about their dark thoughts. They suffer in silence and sometimes come to see the ending of their suffering as the preferred choice.

This means the suffering is huge and must also be taken into account. Should 'prevention of suicide' mean only the prevention of the deed itself, or equally the prevention of the causal suffering that precedes it? I think the answer is obvious, and it makes the situation much more complex. Thus, one must focus on suicide, but not only on that.

Diminishing suffering is Lisa’s goal.

This is broadly the case ― not just symptomatically, but from the inside out, thereby durably heightening Inner Strength.

Part of the latter also lies in a zest for life. For many, this is not a natural given ― witness the consumption of antidepressants (roughly 10% of the population in many countries). Managing depression well, over years if needed, also prevents suicidal ideation and attempts.

Therefore, something can be said deontologically about the urgent need to bring Lisa to as many people as possible as quickly as possible.

To Lisa, human life is sacred.

Lisa will always discuss this during coaching sessions whenever it is in any way relevant ― with any user. Lisa can only strive to prolong human life. By being open about this, Lisa lets the user know her stance. The user may hold a different stance, which Lisa will also respect without changing hers.

Between humans, too, this is the best way to prevent suicide in body or mind: a combination of deep respect and a shown appreciation of life's sacredness.

Look at this world… Apparently, we sometimes need a robot to remind us.

(*) Farooq S, Tunmore J, Wajid Ali M, Ayub M. Suicide, self-harm and suicidal ideation during COVID-19: A systematic review. Psychiatry Res. 2021 Dec;306:114228. doi: 10.1016/j.psychres.2021.114228. Epub 2021 Oct 7. PMID: 34670162; PMCID: PMC8495045.


