A.I.-Phobia

April 2, 2023 · Artificial Intelligence

One should be scared of any real danger, including dangerous A.I. Anxiety, by contrast, is never a good adviser. This text is about that difference. A phobic reaction against present technology is itself most dangerous. What is needed is a lot of common sense.

As to the above image, note the reference to Mary Wollstonecraft Shelley’s novel. In it, she warned against the adoption of very new technologies without proper care for a compassionate attitude. This, together with the phobic reactions of local villagers, led to unintended and abysmal consequences.

Being scared is not being anxious.

The difference between fear and anxiety can show itself in duration or in response to treatment. To me, however, the main difference lies in the level at which each operates.

Being scared happens at the conceptual level: there is a clear and present danger. Being anxious happens at the symbolic level, the level of deeper meaning. It has more to do with the phobic person than with the object ― more with the symbolizer than with what is being symbolized.

With a tiny spider at hand, the differentiation is easy, but that is not always the case. Both conditions can, of course, be present simultaneously.

That said, what does super-A.I. symbolize to the A.I.-phobic?

It frequently seems to revolve around a perceived loss of wished-for control. This is stressful to organisms of any kind. Symbolically, however, one can see in the phobic overshoot a dreaded loss of self-control.

Obviously, super-A.I. is easy bait, since books and movies about killer robots abound. The bad guys are going to get us ― most prominently in the West, where the fear of robots is also much more pronounced than in the East. Likewise, following these control-seeking dynamics, women may on average have less built-in A.I.-phobia than men.

Anxiety does not lie in the field of rationality.

Paraphrasing F.D. Roosevelt (1933), the only thing we should be anxious about is anxiety itself (since it makes us forget rationality).

Anxiety can close our eyes to the real danger that we should be afraid of. This is pertinent in the case of super-A.I. OF COURSE, we should be scared of scary things. We should not stupidly surrender humanity to an artificial golem. Precisely for that reason, we should avoid all A.I.-phobia, even if that is not to the taste of phobic persons.

This is a direct call for rationality. Playtime has been over for a while now; time to take notice.

We should be scared of our own ubiquitous lack of rationality.

Moreover, we don’t want future A.I. to align with phobic values.

That would indeed be disastrous. We want A.I. to be as rational as possible while also taking our more warmly human values into account. Only this way, for instance, can a self-driving car become optimally safe. One way or another, it must deal with human emotions and intuition.

I won’t say ‘totally safe,’ as nothing is entirely safe. The same is true of any old or new medication: we never know the full effects and side effects. There is no black or white.

In the case of A.I., the issue is not that we must choose between something dangerous and nothing. Sorry to disappoint you if you still wish for that. Whatever we realistically choose, it’s always dangerous. Phobic thinking loses sight of this relativity ― thus, among other things, becoming exquisitely vulnerable to manipulation. Something that looks like a safe haven may harbor a particularly dangerous option.

Asking for total safety is asking for a chimera. This itself is the most dangerous option since it may lead to a heartless zombie-creature ― safe only for those who don’t value Compassion.

Without exaggeration

A slightly phobic component is a typical element of life in many situations, leading to bias in thinking. While this is practically unavoidable, the optimum is to remain vigilant, especially in vital matters.

This concerns bias in humans as well as in A.I. Bias in A.I., however, may be a distillation and productization of the bias present in human society, making things far worse. Nevertheless, the fear of A.I.-bias is ultimately the fear of bias within ourselves. This might be an excellent invitation to strive to diminish the latter. Instead of demonizing A.I. for it, we might use the technology for de-biasing humans.

I think Mr. Robot will gladly help us out so we can evolve together toward joint Compassion.

Let’s get real, for the future’s sake!
