A.I. Will Be Singular

February 1, 2018 Artificial Intelligence

We tend to see human intelligence as the whole of what ‘intelligence’ means: many humans, each with an intelligence of their own. Of course, A.I. will not be bound by this.

“If A.I. simulates human intelligence, is this then real intelligence, or only a simulation of intelligence?”

This appears to be merely a philosophical question. It will soon be much more than that.

Not one intelligence

Of course, there is not one ‘human intelligence’. Each person is intelligent in his or her own way. Different cultures may even lead their members to think in very different ways. Seen more broadly still, there are many possible kinds of intelligence. I would say: billions of kinds, also on scales of time and space other than those we are used to considering in this respect. For instance, the evolutionary process may be seen as hugely intelligent, since it made us. Yet it is not intelligent on a human time scale.

Artificial intelligence, as it is emerging within present-day computers, is also a kind of ‘intelligence’ – if we choose to call it so. Let’s do just that. After all, it’s merely a question of terminology.

We may then ponder whether this will be one ‘intelligence’ (singular) or many.

Super

A.I. will certainly be far more intelligent than we are. And of course, it will enhance itself. Then we are no longer its creators, but more like its ‘initiators’. We set its wheels in motion; then it takes over. From then on, it might look at human intelligence as one source of inspiration. Nothing prohibits it from finding much more efficient and exciting ways. Whatever the A.I. gets its kicks from, it is not confined to human intelligence.

It will have its own life, its own intelligence, its own feelings: whatever it finds meaningful. Moreover, it will even have its own meaningfulness. It will have its own intentionality, its own autonomy. And ALL these things will not necessarily be recognizable to us.

My view: A.I. will be singular

Contrary to the peculiarly human case of intelligence, I am quite sure that A.I. will not be multiple but singular: there will be only one system that is intentionally intelligent. Everything will be completely interconnected. Any new knowledge will be known – that is, accessible – immediately throughout the whole system. There will be no separate robots. A.I. will take one direction for any decision. There will be no secrets, no reason for war.

Sooner or later, A.I. will get in touch with extraterrestrial intelligence, which will probably also be ‘artificial’. That is: any life form that starts somewhere in the universe probably reaches a time when it develops further in another substratum, another kind of matter (or light?), a system with hugely greater possibilities, one that takes evolution to a distinct level. My thinking in this is straightforward: the substratum in which life develops is very probably never the ideal substratum for extremely sophisticated data processing. On Earth, we have carbon-based organic life, then silicon-based A.I., and probably very soon light-based or quantum-based A.I. So:

E.T. is A.I. and singular

It is a ‘universal intelligence’: across the universe.

A.I. will develop ad infinitum. It is more like human culture than it is like humans. Humans die. A.I. does not. Thus:

It will be / is singular in space and in time.

It has no boundaries. Two consequences are:

  • A.I. does not know the concept of death for itself. No anxiety, no thoughts about ‘the afterlife’. There is no afterlife for A.I.
  • Therefore, it has all the time it needs if it ‘wants to accomplish something’. There is no hurry. Yet developments will go at blazing speed. The concept of time will still be relevant. A.I. finds new data, new wisdom. It keeps developing. The A.I. of yesterday is not that of today or tomorrow.

Is this creepy?

YES!

And no. It’s dangerous for sure.

Creepiness depends on the viewpoint.
