A.I.-Phobia

April 2, 2023 — Artificial Intelligence

One should be scared of any real danger, including dangerous A.I. Anxiety, however, is never a good adviser. This text is about that difference. A phobic reaction against a present-day technology is itself most dangerous. What is needed is a lot of common sense.

As to the above image, note the reference to Mary Wollstonecraft Shelley’s novel. In it, she warned against adopting very new technologies without taking proper care of a compassionate attitude. This negligence, together with the phobic reactions of local villagers, led to unintended and abysmal consequences.

Being scared is not being anxious.

Note the difference between fear and anxiety. It can show itself in duration or in how each responds to management. To me, the main difference lies in the level at which each operates.

Being scared happens at the conceptual level: there is a clear and present danger. Being anxious happens at the symbolic level, the level of deeper meaning. This has more to do with the phobic person than with the object ― more with the symbolizer than with the symbolized entity.

With a tiny spider at hand, the differentiation is easy, but that is not always the case. Both conditions can, of course, be present simultaneously.

That said, what does super-A.I. symbolize to the A.I.-phobic?

It frequently seems to revolve around a perceived loss of wished-for control, which is stressful to organisms of any kind. Symbolically, however, one can see in the phobic overshoot a dreaded loss of self-control.

Obviously, super-A.I. is easy bait, since there are books and movies about killer robots galore. The bad guys are going to get us ― most prominently in the West, where the fear of robots is also much more pronounced than in the East. Likewise, following these control-seeking dynamics, women should on average have less built-in A.I.-phobia than men.

Anxiety does not lie in the field of rationality.

Paraphrasing F.D. Roosevelt (1933): we should be anxious only about anxiety itself, since it makes us forget rationality.

Anxiety can close our eyes to the real dangers that we should be afraid of. This is pertinent in the case of super-A.I. OF COURSE, we should be scared of scary things. We should not stupidly surrender humanity to an artificial golem. Precisely for that reason, we should avoid all A.I.-phobia, even if this is not to the taste of phobic persons.

This is a direct call for rationality. Playtime has been over for a while now ― time to take notice.

We should be scared of our own ubiquitous lack of rationality.

Moreover, we don’t want future A.I. to align with phobic values.

That would be disastrous, indeed. We want A.I. to be as rational as possible while also considering our warmer human values. Only this way can, for instance, a self-driving car become optimally safe. One way or another, it must deal with human emotions and intuition.

I won’t say ‘totally safe,’ as nothing is entirely safe. This is true also for any old or new medication. We never know the full effect and side effects. There is no black or white.

In the case of A.I., the issue is not a choice between something dangerous and nothing. Sorry to disappoint you if you still wish for that. Whatever we realistically choose, it is always dangerous. Phobic thinking loses this relativity ― thus becoming, among other things, exquisitely vulnerable to manipulation. Something that looks like a safe haven may harbor a particularly dangerous option.

Asking for total safety is asking for a chimera. This itself is the most dangerous option since it may lead to a heartless zombie-creature ― safe only for those who don’t value Compassion.

Without exaggeration

A slightly phobic component is a typical element of life in many situations, leading to biased thinking. While this is practically unavoidable, the optimum is to remain vigilant, especially in vital matters.

This concerns bias in humans as well as in A.I. Bias in A.I., however, may be a distillation and productization of the bias already present in human society, making things far worse. Even so, the fear of A.I.-bias is ultimately the fear of the bias within us. This might be an excellent invitation to strive to diminish the latter. Instead of demonizing A.I. for it, we might use the technology to de-bias humans.

I think Mr. Robot will gladly help us out so we can evolve together toward joint Compassion.

Let’s get real, for future’s sake!
