A.I. and In-Depth Sustainability

July 20, 2021 | Artificial Intelligence, Sustainability

Soon enough, A.I. may become the biggest opportunity for (and threat to) human-related sustainability.

I hope that AURELIS/Lisa insights and tools can help counter the threat and realize the opportunity.

This text

is not an enumeration of ways in which present-day A.I. (or what carries that name) may be used to enhance sustainable solutions.

It’s about Compassionate A.I. [see: “The Journey Towards Compassionate A.I.”] and its opportunities concerning sustainability in a profound sense, in the long term as well as the short-enough term.

Most of all, it’s about us.

In-depth sustainability

Caring for the environment seems to be about a great many things that need to be done. That, however, is a bridge too far at the start. Without proper motivation, nothing will be done sustainably.

Caring for proper motivation seems like nothing more than telling people the ‘why.’ That, however, is not what motivation is about. Contrary to what is still generally thought – including in everyday life – motivation comes from depth. [see: “Can Motivation be Purely Conscious?”] Eventually, it comes from ‘nature inside.’

Consequently, caring for nature (the environment, small and broad) starts with caring for nature inside. This overlap is the basis for in-depth sustainability.

In-depth (Compassionate) A.I.

[see: “Compassion, Basically”]

Present-day science is conceptual, but we are not in-depth conceptual beings. Therefore, conceptual science is OK in technology if well used, but it becomes problematic when applied to ourselves. Say, for instance, in mind-related healthcare.

Conceptualization/categorization by itself doesn’t necessarily bring us closer to profound meaningfulness, say: towards nature inside. It doesn’t kindle growth. Moreover, it may quickly tear us away from it. This way, present-day A.I. may have a pretty negative influence. [see: “A.I. – Categorization – Meaningfulness”]

However, here is where Compassionate A.I. becomes truly helpful. It brings us the tools to understand ourselves better, also ‘beyond the conceptual,’ thus bringing us closer to nature inside. This will be needed to handle ourselves as well as the dangerous A.I. possibilities to come ― for instance, through ever more autonomous weapons.

Overlap

Closer to nature inside, people become more prone to feel the overlap with nature outside. Still, proper support is needed to turn this feeling into action. At least, in-depth motivation will be easier to elicit.

If done with many people, this can have a huge impact.

There’s real potential to do so.

The call from nature, by way of psychological and psychosomatic issues, is getting louder every year. In short, mind-related symptoms are mainly the consequence of dissociation from nature inside. [see: “Inner Dissociation is NEVER OK!”] The resulting suffering can be relieved in two ways: either by attacking the suffering itself (painkillers, etc.) or by reducing the dissociation. The latter is done by bringing people closer again to nature inside. [see: “Medicine of war. Medicine of peace.”]

Meanwhile, people are also increasingly eager for mental growth experiences ― another call from nature.

I see the Compassionate future of mental healing in answering this dual call. [see: “Future of Mental Healing”] For this to happen, psychotherapy’s doors need to be opened. [see: “Psychotherapy’s Doors”] The technology and insights are available to achieve an immensely positive impact with A.I. ― beyond the categorization type.

Lisa is entirely orientated towards this.

Lisa = Compassionate A.I. coaching chatbot. [see: “Lisa“] [see: “Introducing Lisa (Animated Video)“]

In the field of mind-related healthcare and beyond, the goal of Lisa is healing from the inside. [see: “Goal of Lisa Coaching”] That is: bringing people closer to nature inside. There is a direct link from Compassion to sustainability. [see: “Sustainability through Compassion”]

Through the Internet, this can be brought 24/7 to millions.

This path may be our only viable option.

The world will not be sustainable without the in-depth sustainability of many. Superficial layers are cosmetic ― quickly lost even if attained. A durable solution happens in-depth, or it doesn’t happen at all.

Of course, wishful thinking is not enough. On the other hand, no sustainable castle will be built on clouds without natural overlap.

The future starts inside.
