Coach-bots Shouldn’t Make People Do Things

November 11, 2023 · Artificial Intelligence

This is a first principle for Lisa: never make a human being do anything ― not even, wherever possible, through advice-giving.

This constraint shapes the thinking about how Lisa can operate sensibly. It forces us to be creative.

What comes from inside makes you stronger.

This is an AURELIS coaching principle about Inner Strength. A change made only at the surface is generally less durable than the same change coming from the inside out ― because of our inner complexity and the way the brain works in mental change.

Making people do things doesn’t honor our brain/mind complexity. Thus, it can engender much resistance to change.

It’s preferable to let any change come from the user ― more specifically, from deep inside. This gives a sense of spontaneity. It also provides – if adequately supported and explained – a sense of self-responsibility.

Nowadays, many people still need an explanation about human depth. Nevertheless, a proper insight into this is essential for any coaching.

In the case of a chat-bot

Here comes an additional ethical concern. In my view, no robot should ever straightforwardly make any human being do things. This ethical principle is needed to keep us from sliding toward a future in which our self-made intelligent artificial creatures subdue humanity.

Only humans should make other humans do things ― Compassionately.

A robot may invite humans. Of course, an invitation can be pretty powerful. In the strictest sense, ‘not making anyone do things’ is therefore an intention ― one that should be rigorously followed. Such an intention can be embedded into a Compassionate A.I. as part of the Compassion.

Autosuggestively

Suggestive technology should never be misused, least of all by A.I. agents. Misuse is challenging for a user to discern, so the guarantee against it must come from the developers.

As a user, you need to be able to trust the developing organization. With Lisa, I hope this will never be an issue.

In any case, auto-suggestion means that the user is the source of the energy (deep human motivation) being evoked. The human user is active. Nothing is being done to him ― no suggestions implanted or anything of the sort.

Change is invited when the user is ready for it and congruent with himself as a total person.

The goal of A.I. coaching

This must always be a better A.I. for better humans. Compassionate A.I. will naturally pivot toward this goal in many ways.

This has been not only my hope but my active endeavor for years. I hope it will remain so.

The future will be bright if we make it happen.
