A.I.-Phobia

April 2, 2023 · Artificial Intelligence

One should be wary of any real danger, including dangerous A.I. Anxiety, however, is never a good adviser. This text is about being anxious. A phobic reaction against present technology is itself dangerous. What is needed is a lot of common sense.

As to the above image, note the reference to Mary Wollstonecraft Shelley’s novel Frankenstein. In it, she warned against adopting very new technologies without taking proper care of a compassionate attitude. This, together with the phobic reactions of local villagers, led to unintended and abysmal consequences.

Being scared is not being anxious.

The difference between fear and anxiety can show itself in duration or in how each responds to management. To me, the main difference lies in the level at which each operates.

Being scared happens at the conceptual level: there is a clear and present danger. Being anxious happens at the symbolic level, the level of deeper meaning. This has more to do with the phobic person than with the object ― more with the symbolizer than with the symbolized entity.

With some tiny spider at hand, the differentiation is easy, but that is not always the case. Both conditions can be present simultaneously, of course.

That said, what does super-A.I. symbolize to the A.I.-phobic?

It frequently seems to revolve around a perceived loss of wished-for control. This is stressful to organisms of any kind. Symbolically, however, one can see in the phobic overshoot a dreaded loss of self-control.

Obviously, super-A.I. is easy bait, since books and movies about killer robots abound. The bad guys are going to get us ― most prominently in the West, where the fear of robots is also much more pronounced than in the East. Likewise, if control-seeking dynamics are at play, women should, on average, have less built-in A.I.-phobia than men.

Anxiety does not lie in the field of rationality.

Paraphrasing F.D. Roosevelt (1933), we should be anxious about nothing but anxiety itself (since it makes us forget rationality).

Anxiety can close our eyes to the real danger that we should be afraid of. This is pertinent in the case of super-A.I. OF COURSE, we should be scared of scary things. We should not stupidly surrender humanity to an artificial golem. Precisely for this reason, we should avoid all A.I.-phobia, even if that is not to the taste of the phobic persons.

This is a direct call for rationality. Playtime has been over for a while now ― time to take notice.

We should be scared of our own ubiquitous lack of rationality.

Moreover, we don’t want future A.I. to align with phobic values.

That would be disastrous, indeed. We want the A.I. to be as rational as possible while also considering our more warmly human values. Only this way, for instance, can a self-driving car become optimally safe. One way or another, it must deal with human emotions and intuition.

I won’t say ‘totally safe,’ as nothing is entirely safe. This is true also for any old or new medication. We never know the full effect and side effects. There is no black or white.

In the case of A.I., the issue is not that we must choose between something dangerous and nothing. Sorry to disappoint you if you still wish for that. Whatever we realistically choose, it’s always dangerous. Phobic thinking loses sight of this relativity ― thus, among other things, becoming exquisitely vulnerable to manipulation. Something that looks like a safe haven may harbor a particularly dangerous option.

Asking for total safety is asking for a chimera. This itself is the most dangerous option since it may lead to a heartless zombie-creature ― safe only for those who don’t value Compassion.

Without exaggeration

A slightly phobic component is a typical element of life in many situations, leading to bias in thinking. While this is practically unavoidable, the optimum is to stay vigilant, especially in vital matters.

This concerns bias in humans as well as in A.I. Bias in A.I., however, may be a distillation and productization of the bias present in human society, making things far worse. Nevertheless, the fear of A.I. bias is ultimately the fear of bias within ourselves. This might be an excellent invitation to strive to diminish the latter. Instead of demonizing the A.I. for it, we might use the technology for de-biasing humans.

I think Mr. Robot will gladly help us out so we can evolve together toward joint Compassion.

Let’s get real, for future’s sake!
