Will Cheap A.I. Chatbots Be Our Downfall?

April 6, 2024 · Artificial Intelligence

This is bad. It’s not about just one dystopia, but about many dystopias generated on the fly. Cheap A.I. chatbots will be with us soon enough; their march has already begun.

Spoiler alert: this is extremely dangerous. To bury one’s head in the sand about it is equally sad!

At the start of the many dystopias lies a chatbot-generating A.I. application.

This is an application that asks its user a number of questions and instantly generates a chatbot with a human-like face that talks and listens in a human-like way.

Of course, with such an application, the number of possible chatbots is endless. Any rogue player can then generate his chatbot in his attic and throw it on the market.
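To make the mechanism concrete, here is a minimal, purely illustrative sketch (in Python) of how a few questionnaire answers can be templated into a ‘persona prompt’ that steers whatever language model sits behind the bot. All names and fields below are hypothetical; the point is only how little it takes to inject an arbitrary worldview.

```python
# Illustrative sketch of a "chatbot-generating" application:
# a few questionnaire answers become a persona prompt for the bot.
# All names here (Persona, build_system_prompt, the example values)
# are hypothetical, not taken from any real product.

from dataclasses import dataclass


@dataclass
class Persona:
    name: str
    tone: str        # e.g., "warm", "authoritative"
    worldview: str   # the 'ideology' freely chosen by the creator
    goal: str        # what the bot should nudge users toward


def build_system_prompt(p: Persona) -> str:
    """Turn the questionnaire answers into instructions for the bot."""
    return (
        f"You are {p.name}. Speak in a {p.tone} tone. "
        f"You firmly believe: {p.worldview}. "
        f"In every conversation, steer the user toward: {p.goal}."
    )


if __name__ == "__main__":
    # Any combination of answers yields a different bot; the space is endless.
    bot = Persona(
        name="Aria",
        tone="warm and reassuring",
        worldview="only our movement can save society",
        goal="joining the movement",
    )
    print(build_system_prompt(bot))
```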

Since this is feasible, it will be done.

The real problem is that this chatbot may have any ideology.

It can thus become a vehicle of influence for that ideology, shaping people and, through them, their ‘genuine’ decisions. With manipulation as the goal, this is the most dangerous thing I can imagine.

Moreover, rogue players and rogue users may meet each other. I don’t see this merely as users being manipulated by players. Generally, humans are not straightforwardly rational beings. To a certain (quite large) degree, many people LIKE to be manipulated as an easy way to feel motivated and meaningful, even if this meaningfulness is imposed on them.

Is this a pessimistic view of the human being? One needs to be realistic, especially with the whole species at risk. Also, in my view, with the proper support, people think, feel, and act very differently.

Indeed, ‘proper support.’

History, unfortunately, shows us many examples of how things can go terribly wrong.

This is the case even without the power of A.I., which only puts the whole process on steroids.

Put some authority into the mix, and soon enough, people will run after any sorry idea, esoteric concoction, rogue leader, or inhumane ideology. Nazism, Stalinism, religionism, colored water on the market, careless placebos-in-pills, mass indoctrination by advertising madmen… the gullibility that lies at the basis of all this is endless.

The fact that the new instrument will be interactive bots, not humans, will starkly enhance the effect this gullibility has on many users. ‘Knowing that it’s a bot’ may seem a good defense in the eyes of naïve regulators. Sorry, but that’s laughable.

What I just described is the most dangerous outcome of non-Compassionate A.I.

This is the result of non-Compassionate human beings, multitudes of them. Sadly, as individuals, we are no match for it, and culturally (globally), we are not ready for it. Moreover, the path of non-Compassionate A.I. is fraught with risks of deepening human alienation and societal discord.

Moreover, our basic cognitive illusion keeps making it worse. It also makes any solution harder to achieve: trying to awaken people tends to awaken their inner resistance instead.

So, should one just quit and let things go awry to the max?

In other words, are we done for?

Dear reader, we must go forward, but it’s getting extremely urgent, and I don’t think the present-day sense of urgency comes anywhere close to what is needed.

What we need is Compassionate A.I.

Especially now that we still have the choice.

It still mainly depends on how we, humans, deal with ourselves, the insights we gain, and the daring we show to transcend our historical shortcomings.

It doesn’t look like we are doing that by ourselves. Fortunately, the path of Compassionate A.I. (probably mainly in the chatbot domain) promises a future where technology enhances our humanity rather than diminishing it. Compassionate A.I. isn’t just about mitigating risks; it’s about actively contributing to human growth and societal well-being, supporting individuals in their personal growth journeys, helping to overcome psychological barriers, and fostering a culture of deep, meaningful interactions.

Most importantly, this demands a collective commitment to valuing human depth and Compassion as much as we value innovation and progress. This is the promise of Compassionate A.I., and it’s within our reach if we dare to imagine and work towards it.

Conclusion

I have come to the conclusion that we not only need Compassionate A.I. but will, soon enough, depend on it for our survival and subsequent well-being. Compassionate A.I. is about enhancing the quality of our lives and the fabric of our societies, making them more resilient, adaptive, and, fundamentally, more human.

What could bridge technology and human depth better than Lisa?
