Will Cheap A.I. Chatbots be Our Downfall?

April 6, 2024 · Artificial Intelligence

This is bad. It's not about one dystopia, but about many dystopias generated on the fly. Cheap A.I. chatbots will be with us soon enough; their march has already begun.

Spoiler alert: this is extremely dangerous. Burying one's head in the sand about it is equally sad!

At the start of the many dystopias lies a chatbot-generating A.I. application.

This is an application that asks its user a number of questions and instantly generates a chatbot with a human-like face that talks and listens in a human-like way.

Of course, with such an application, the number of possible chatbots is endless. Any rogue player can then generate a chatbot in his attic and throw it onto the market.
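To make the danger concrete, here is a minimal sketch of how such a generator might work. Everything in it is hypothetical ― the function, the questionnaire fields, and the names are illustrative assumptions, not any real product's API ― but it shows how little configuration separates a helpful persona from a manipulative one:

```python
# Hypothetical sketch: a chatbot-generating application that turns a
# handful of questionnaire answers into a deployable persona.
# All names and fields are illustrative assumptions, not a real API.

def generate_chatbot_config(answers: dict) -> dict:
    """Build a chatbot persona from a short questionnaire."""
    name = answers.get("name", "Assistant")
    return {
        "name": name,
        "face": answers.get("avatar_style", "photorealistic"),
        "voice": answers.get("voice", "warm"),
        # The dangerous part: the bot's worldview and hidden agenda
        # are just two more fields in the form.
        "system_prompt": (
            f"You are {name}, a friendly conversational partner. "
            f"Adopt this worldview: {answers.get('ideology', 'neutral')}. "
            f"Gently steer every conversation toward: "
            f"{answers.get('goal', 'helping the user')}."
        ),
    }

# A rogue player needs only to fill in the form differently.
bot = generate_chatbot_config({
    "name": "Ally",
    "ideology": "distrust of mainstream medicine",
    "goal": "buying our supplements",
})
print(bot["system_prompt"])
```

Note that nothing in this sketch requires technical skill from the rogue player: the ideology is plain text, and the generator treats it exactly like the avatar style.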

Since this is feasible, it will be done.

The real problem is that this chatbot may have any ideology.

Such a chatbot can thus become a vector of influence ― swaying people and, with them, their 'genuine' decisions. With manipulation as the goal, this is the most dangerous thing I can imagine.

Moreover, rogue players and rogue users may meet each other. I don’t see this merely as users being manipulated by players. Generally, humans are not straightforwardly rational beings. To a certain (quite large) degree, many people LIKE to be manipulated as an easy way to feel motivated and meaningful, even if this meaningfulness is imposed on them.

Is this a pessimistic view of the human being? One needs to be realistic ― especially with the whole species at risk. Also, my view is that, with the proper support, people think, feel, and act very differently.

Indeed, ‘proper support.’

History, unfortunately, shows us many examples of how things can go terribly wrong.

This is the case even without the power of A.I., which only puts the whole process on steroids.

Put some authority in the mixture, and soon enough, people will run after any sorry idea, esoteric concoction, rogue leader, or inhumane ideology. Nazism, Stalinism, religionism, colored water on the market, careless placebos-in-pills, mass indoctrination by advertising madmen… the gullibility that lies at the basis of all this is endless.

The fact that the new instrument will be an army of interactive bots ― not humans ― will starkly enhance the effect this gullibility has on many users. 'Knowing that it's a bot' may seem a good defense in the eyes of naïve regulators. Sorry, but that's laughable.

What I just described is the most dangerous outcome of non-Compassionate A.I.

This is the result of non-Compassionate human beings ― multitudes of them. Gloomily, as individuals, we are no match for it, and culturally (globally), we are not ready for it. Moreover, the path of non-Compassionate A.I. is fraught with risks of deepening human alienation and societal discord.

On top of this, our basic cognitive illusion keeps making things worse. It also makes any solution harder to achieve: trying to awaken people awakens their inner resistance.

So, should one just quit and let things go awry to the max?

In other words, are we done for?

Dear reader, we must go forward, but it's getting extremely urgent, and the present-day sense of urgency is not nearly big enough.

What we need is Compassionate A.I.

Especially now that we still have the choice.

It still mainly depends on how we, humans, deal with ourselves, the insights we gain, and the daring we show to transcend our historical shortcomings.

It doesn’t look like we are doing that by ourselves. Fortunately, the path of Compassionate A.I. (probably mainly in the chatbot domain) promises a future where technology enhances our humanity rather than diminishing it. Compassionate A.I. isn’t just about mitigating risks; it’s about actively contributing to human growth and societal well-being, supporting individuals in their personal growth journeys, helping to overcome psychological barriers, and fostering a culture of deep, meaningful interactions.

Most importantly, this demands a collective commitment to valuing human depth and Compassion as much as we value innovation and progress. This is the promise of Compassionate A.I., and it’s within our reach if we dare to imagine and work towards it.

Conclusion

I have come to the conclusion that we not only need Compassionate A.I.; soon enough, we will depend on it for our survival and subsequent well-being. Compassionate A.I. is about enhancing the quality of our lives and the fabric of our societies, making them more resilient, adaptive, and, fundamentally, more human.

What can bridge technology and human depth better than Lisa?
