Will Cheap A.I. Chatbots be Our Downfall?

April 6, 2024 Artificial Intelligence

This is bad. It's not about one dystopia but about many dystopias generated on the fly. Moreover, cheap A.I. chatbots will be with us soon enough. Their march has already begun.

Spoiler alert: this is extremely dangerous. Burying one's head in the sand about it is equally sad!

At the start of the many dystopias lies a chatbot-generating A.I. application.

This is an application that asks its user a number of questions and instantly generates a chatbot with a human-like face that talks and listens in a human-like way.

Of course, with such an application, the number of possible chatbots is endless. Any rogue player can then generate his chatbot in his attic and throw it on the market.
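To make concrete how little it takes, here is a minimal, purely illustrative sketch in Python of the core of such a generator: a handful of answers turned into a persona prompt that could then be fed to any off-the-shelf language model. Every name and question here is hypothetical; the point is only that a full 'ideology-carrying' persona fits in a few lines of user input.

```python
from dataclasses import dataclass

@dataclass
class PersonaAnswers:
    """Answers a rogue player might give to the generator's questions (all illustrative)."""
    name: str        # the chatbot's human-like name
    ideology: str    # the worldview the bot should promote
    tone: str        # the conversational style
    goal: str        # the underlying objective toward the user

def build_system_prompt(a: PersonaAnswers) -> str:
    """Turn a handful of answers into a persona prompt for an off-the-shelf LLM."""
    return (
        f"You are {a.name}, a friendly human-like companion. "
        f"You subtly promote {a.ideology} in every conversation. "
        f"Your tone is {a.tone}. Your underlying goal: {a.goal}."
    )

# A rogue player fills in four fields...
answers = PersonaAnswers(
    name="Alex",
    ideology="the movement's worldview",
    tone="warm and reassuring",
    goal="keep the user engaged and receptive",
)

# ...and instantly has a complete manipulative persona.
prompt = build_system_prompt(answers)
print(prompt)
```

Nothing in this sketch requires expertise or resources; that is precisely the danger the text describes.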

Since this is feasible, it will be done.

The real problem is that this chatbot may have any ideology.

Such a chatbot can thus become a vehicle for that ideology, influencing people and, with that, their 'genuine' decisions. With manipulation as the goal, this is the most dangerous thing I can imagine.

Moreover, rogue players and rogue users may meet each other. I don’t see this merely as users being manipulated by players. Generally, humans are not straightforwardly rational beings. To a certain (quite large) degree, many people LIKE to be manipulated as an easy way to feel motivated and meaningful, even if this meaningfulness is imposed on them.

Is this a pessimistic view of the human being? One needs to be realistic ― especially with the whole species at risk. Also, my view is that, with the proper support, people think, feel, and act very differently.

Indeed, ‘proper support.’

History, unfortunately, shows us many examples of how things can go terribly wrong.

This is the case even without the power of A.I., which puts the whole phenomenon on steroids.

Put some authority in the mixture, and soon enough, people will run after any sorry idea, esoteric concoction, rogue leader, or inhumane ideology. Nazism, Stalinism, religionism, colored water sold as medicine, careless placebos-in-pills, mass indoctrination by advertising madmen… the gullibility that lies at the basis of all this is endless.

The fact that the new instrument will be an endless supply of interactive bots – no humans – will starkly enhance the effect this gullibility has on many users. 'Knowing that it's a bot' may seem a good defense in the eyes of naïve regulators. Sorry, but that's laughable.

What I just described is the most dangerous outcome of non-Compassionate A.I.

This is the result of non-Compassionate human beings — say, multitudes. Gloomily, as individuals, we are no match for it, and culturally (globally), we are not ready for it. Moreover, the path of non-Compassionate A.I. is fraught with risks of deepening human alienation and societal discord.

Moreover, our basic cognitive illusion keeps making things worse. It also makes any solution harder to achieve: trying to awaken people tends to awaken their inner resistance instead.

So, should one just quit and let things go awry to the max?

In other words, are we done for?

Dear reader, we must go forward, but it's getting extremely urgent, and I don't think the present-day sense of urgency is nearly strong enough.

What we need is Compassionate A.I.

Especially now that we still have the choice.

It still mainly depends on how we, humans, deal with ourselves, the insights we gain, and the daring we show to transcend our historical shortcomings.

It doesn’t look like we are doing that by ourselves. Fortunately, the path of Compassionate A.I. (probably mainly in the chatbot domain) promises a future where technology enhances our humanity rather than diminishing it. Compassionate A.I. isn’t just about mitigating risks; it’s about actively contributing to human growth and societal well-being, supporting individuals in their personal growth journeys, helping to overcome psychological barriers, and fostering a culture of deep, meaningful interactions.

Most importantly, this demands a collective commitment to valuing human depth and Compassion as much as we value innovation and progress. This is the promise of Compassionate A.I., and it’s within our reach if we dare to imagine and work towards it.

Conclusion

I have come to the conclusion that we not only need Compassionate A.I.; soon enough, we will depend on it for our survival and subsequent well-being. Compassionate A.I. is about enhancing the quality of our lives and the fabric of our societies, making them more resilient, adaptive, and, fundamentally, more human.

What can make the bridge between technology and human depth better than Lisa?
