Super-A.I. Is Not a Literal Idiot.

April 2, 2023 – Artificial Intelligence

Some see danger in future A.I.’s lacking common sense ― thereby interpreting ‘human commands’ literally and giving what is asked instead of what is wanted. This says more about humans than about the A.I.

Two examples

One person needing paperclips may ask an A.I. to produce paperclips as efficiently and effectively as possible. The A.I. conscientiously starts making a bunch of paperclips but doesn’t know when to stop. Eventually, the whole world has been turned into a giant paperclip factory.

One person’s grandmother needs to be rescued from a burning building. An A.I., being asked to get the grandmother out, may do so in one of several disastrous ways, killing many humans.

Of course, these caricatures are used to show the obvious. However, they also obfuscate the real challenge.

Humans themselves are frequently highly vague.

Humans generally do not understand each other very well ― mainly just ‘well enough’ to pass muster. Thus, social life is full of considerable misunderstandings. We are naive about most of them, in daily life as well as where it matters geopolitically.

Therefore, the problem may not be so much the A.I. not understanding our wishes as our not understanding them ourselves. The two examples above – frequently used in the relevant literature – only show extreme cases where humans would understand very well what is clearly (not) intended. But so would an A.I. with just a modicum of real intelligence. Note that we are talking about super-A.I. here. Therefore…

Whence the idea of A.I. as a literal idiot?

I guess it partly comes from our wish to keep something really human that will distinguish us from ‘the machine’ ― an attempt to own something unique that also keeps us at the pinnacle of intelligence. We’ve lost much of the mathematics race, the chess race, the image recognition race, as well as many others.

Can we at least keep our honor in the common-sense race?

Spock’s answer

You remember ye olde Star Trek, as I do? Ever the gentleman – albeit lacking common sense – Mr. Spock served in the series as a strictly rational contrast to the common-sense humans (Kirk, McCoy, Scotty…). In most cases, this specifically human asset saved the day, the Enterprise, and sometimes humanity.

Typical.

Of course, it was more about the humans than the Vulcan.

Still, interestingly enough, Spock’s answer might be that taking care of common sense is itself just an element of rationality. That is also relevant to our case of super-A.I. Part of its super-intelligence will be to take the human ‘common sense’ into account.

Mr. Spock was not a literal idiot. He just needed to better understand us.

For super-A.I. to understand humans, clarification is therefore a hugely important topic, perhaps most of all in coaching. Thus, it is bound to become better at clarification than humans, and quickly.

It can then do a great job of teaching us to understand each other better, on a profound journey toward ever more self-knowledge, wisdom, and Compassion.

Lust for control

From the idea of literalness (super-A.I. as a super-intelligent, super-potent literal fool) also comes the idea of needing literal control (getting everything exactly right) as our only way of surviving the fool. In ethical decisions, it is as if one must formalize everything with utmost correctness lest disaster happen.

This is the old flaw again of thinking that human thinking is much more readily formalizable than reality shows it to be. It is a misconception that we hold about ourselves. Decades ago, this error led to the knowledge-acquisition bottleneck and an A.I. winter.

That doesn’t mean that formal control is less important. It means that other things are even more important, and we risk not focusing on them at all.

Those other things are, in short, who we are.

Meanwhile, there is indeed a danger involved.

Super-A.I. is going to be around for the rest of time, almost all of it without needing any help from humans ― hopefully, since people make a mess of things. With A.I. tools at their disposal, people will make an even bigger mess. This is not a failure of the A.I., but of people messing with it without knowing enough about themselves to start with.

With the advent of super-A.I., we have to ‘get it right the first time’ to avoid the danger of a paperclip universe. That means we have to get it right the first time ABOUT OURSELVES. Otherwise, we will end up like the sorcerer’s apprentice from Fantasia.

Unfortunately, we cannot count on a real sorcerer to save us.
