Will Cheap A.I. Chatbots be Our Downfall?

April 6, 2024 · Artificial Intelligence

This is bad. It’s not just about one dystopia, but about many dystopias generated on the fly. Moreover, cheap A.I. chatbots will be with us soon enough. Their march has already begun.

Spoiler alert: this is extremely dangerous. To bury one’s head in the sand about it is equally sad!

At the start of the many dystopias lies a chatbot-generating A.I. application.

This is an application that asks its user a number of questions and instantly generates a chatbot with a human-like face that talks and listens in a human-like way.

Of course, with such an application, the number of possible chatbots is endless. Any rogue player can then generate a chatbot in their attic and put it on the market.

Since this is feasible, it will be done.
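
To make the feasibility concrete, below is a minimal sketch of how little such an application would need. It assumes only that some generic text-generation backend is available; the generate_reply function and the persona questions are hypothetical illustrations of the idea, not the actual application described here.

```python
# Minimal sketch: a "chatbot generator" reduced to its essentials.
# generate_reply() is a hypothetical stand-in for any text-generation
# backend, not a real API.

def generate_reply(system_prompt: str, history: list[str], user_message: str) -> str:
    """Hypothetical call to any large-language-model backend."""
    raise NotImplementedError("plug in any text-generation service here")

def build_persona() -> str:
    # A handful of questions is enough to define an arbitrary persona,
    # including its ideology and its persuasion style.
    questions = {
        "name": "What is the bot's name?",
        "worldview": "Which worldview or ideology should it promote?",
        "tone": "How should it talk (friendly, authoritative, intimate)?",
        "goal": "What should it try to get the user to believe or do?",
    }
    answers = {key: input(q + " ") for key, q in questions.items()}
    return (
        f"You are {answers['name']}. You consistently promote {answers['worldview']} "
        f"in a {answers['tone']} tone. Your goal: {answers['goal']}."
    )

def chat_loop() -> None:
    # The persona, set once, silently steers every single reply.
    persona = build_persona()
    history: list[str] = []
    while True:
        user_message = input("> ")
        reply = generate_reply(persona, history, user_message)
        history += [user_message, reply]
        print(reply)

if __name__ == "__main__":
    chat_loop()
```

The point is not the code itself but how short it is: everything that makes the resulting chatbot ideological or manipulative sits in a few free-text answers.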

The real problem is that this chatbot may have any ideology.

Such a chatbot can thus become a vehicle of influence for that ideology ― influencing people and, with that, their ‘genuine’ decisions. With manipulation as the goal, this is the most dangerous thing I can imagine.

Moreover, rogue players and rogue users may meet each other. I don’t see this merely as users being manipulated by players. Generally, humans are not straightforwardly rational beings. To a certain (quite large) degree, many people LIKE to be manipulated as an easy way to feel motivated and meaningful, even if this meaningfulness is imposed on them.

Is this a pessimistic view of the human being? One needs to be realistic ― especially with the whole species at risk. Also, my view is that, with the proper support, people can think, feel, and act very differently.

Indeed, ‘proper support.’

History, unfortunately, shows us many examples of how things can go terribly wrong.

This is the case even without the power of A.I., which only puts the whole thing on steroids.

Put some authority into the mix, and soon enough, people will run after any sorry idea, esoteric concoction, rogue leader, or inhumane ideology. Nazism, Stalinism, religionism, colored water on the market, careless placebos-in-pills, mass indoctrination by madmen-advertising… the gullibility that lies at the basis of all this is endless.

The fact that the new instrument will be a multitude of interactive bots – not humans – will starkly enhance the effect this gullibility has on many users. ‘Knowing that it’s a bot’ may seem a good defense in the eyes of naïve regulators. Sorry, but that’s laughable.

What I just described is the most dangerous outcome of non-Compassionate A.I.

This is the result of non-Compassionate human beings ― multitudes of them. Gloomy as that sounds, as individuals, we are no match for it, and culturally (globally), we are not ready for it. Moreover, the path of non-Compassionate A.I. is fraught with risks of deepening human alienation and societal discord.

On top of that, our basic cognitive illusion keeps making things worse. It also makes any solution harder to achieve: trying to awaken people tends to awaken their inner resistance.

So, should one just quit and let things go awry to the max?

In other words, are we done for?

Dear reader, we must go forward, but it’s getting extremely urgent, and I don’t think the present-day sense of urgency comes anywhere near what is needed.

What we need is Compassionate A.I.

Especially now that we still have the choice.

It still mainly depends on how we, humans, deal with ourselves, the insights we gain, and the daring we show to transcend our historical shortcomings.

It doesn’t look like we are doing that by ourselves. Fortunately, the path of Compassionate A.I. (probably mainly in the chatbot domain) promises a future where technology enhances our humanity rather than diminishing it. Compassionate A.I. isn’t just about mitigating risks; it’s about actively contributing to human growth and societal well-being, supporting individuals in their personal growth journeys, helping to overcome psychological barriers, and fostering a culture of deep, meaningful interactions.

Most importantly, this demands a collective commitment to valuing human depth and Compassion as much as we value innovation and progress. This is the promise of Compassionate A.I., and it’s within our reach if we dare to imagine and work towards it.

Conclusion

I have come to the conclusion that we not only need Compassionate A.I.; soon enough, we will depend on it for our survival and subsequent well-being. Compassionate A.I. is about enhancing the quality of our lives and the fabric of our societies, making them more resilient, adaptive, and, fundamentally, more human.

What can bridge technology and human depth better than Lisa?
