A.I.-Phobia

April 2, 2023 · Artificial Intelligence

One should be scared of any real danger, including dangerous A.I. Anxiety, by contrast, is never a good adviser. This text is about that difference. A phobic reaction against present-day technology is itself most dangerous. What is needed is a lot of common sense.

As to the above image, note the reference to Mary Wollstonecraft Shelley’s novel. In it, she warned against adopting very new technologies without taking proper care of a compassionate attitude. This, together with the phobic reactions of local villagers, led to unintended and abysmal consequences.

Being scared is not being anxious.

The difference between fear and anxiety can show itself in duration or in how each responds to management. To me, however, the main difference lies in the level at which each operates.

Being scared happens at the conceptual level: there is a clear and present danger. Being anxious happens at the symbolic level, the level of deeper meaning. This has more to do with the phobic person than with the object, more with the symbolizer than with the symbolized entity.

With a tiny spider at hand, the differentiation is easy, but that is not always the case. Of course, both conditions can be present simultaneously.

That said, what does super-A.I. symbolize to the A.I.-phobic?

It frequently seems to revolve around a perceived loss of wished-for control. This is stressful to organisms of any kind. Symbolically, however, one can see in the phobic overshoot a dreaded loss of self-control.

Obviously, super-A.I. is easy bait, since there are books and movies about killer robots galore. The bad guys are going to get us, most prominently in the West, where fear of robots is also much more prominent than in the East. Likewise, according to control-seeking dynamics, women should on average have less built-in A.I.-phobia than men.

Anxiety does not lie in the field of rationality.

Paraphrasing F.D. Roosevelt (1933), we should be anxious about nothing but anxiety itself, since it makes us forget rationality.

Anxiety can close our eyes to the real danger that we should be afraid of. This is pertinent in the case of super-A.I. Of course, we should be scared of scary things. We should not stupidly surrender humanity to an artificial golem. Precisely for that reason, we should avoid all A.I.-phobia, even if this is not to the taste of phobic persons.

This is a direct call for rationality. Playtime has been over for a while now; time to take notice.

We should be scared of our own ubiquitous lack of rationality.

Moreover, we don’t want future A.I. to align with phobic values.

That would be disastrous, indeed. We want A.I. to be as rational as possible while also taking our warmer human values into account. Only this way, for instance, can a self-driving car become optimally safe. One way or another, it must deal with human emotions and intuition.

I won’t say ‘totally safe,’ as nothing is entirely safe. The same holds for any medication, old or new: we never know the full effects and side effects. There is no black or white.

In the case of A.I., the issue is not that we must choose between something dangerous and nothing. Sorry to disappoint you if you still wish for that. Whatever we realistically choose, it is always dangerous. Phobic thinking loses sight of this relativity, thus becoming, among other things, exquisitely vulnerable to manipulation. Something that looks like a safe haven may harbor a particularly dangerous option.

Asking for total safety is asking for a chimera. This itself is the most dangerous option, since it may lead to a heartless zombie-creature, safe only for those who don’t value Compassion.

Without exaggeration

A slightly phobic component is a typical element of life in many situations, leading to biased thinking. While this is practically unavoidable, the optimum is to stay vigilant, especially in vital matters.

This concerns bias in humans as well as in A.I. However, bias in A.I. may be a distillation and productization of the bias present in human society, making things far worse. Nevertheless, the fear of A.I. bias is ultimately the fear of the bias within us. This might be an excellent invitation to strive to diminish the latter. Instead of demonizing A.I. for it, we might use the technology to de-bias humans.

I think Mr. Robot will gladly help us out so we can evolve together toward joint Compassion.

Let’s get real, for future’s sake!
