A.I. Will Be Singular

February 1, 2018 | Artificial Intelligence

We tend to see human intelligence as the essence of what ‘intelligence’ means. Each human has his or her own intelligence. Of course, A.I. will not be bound by this.

“If A.I. simulates human intelligence, is this then real intelligence, or only a simulation of intelligence?”

This appears to be a merely philosophical question. It will soon be much more than that.

Not one intelligence

Of course, there is no single ‘human intelligence’. Each person is intelligent in his own way. Different cultures may even entice their respective members to think in very different ways. Seen more broadly, there are many possible kinds of intelligence – I would say billions – also on different scales of time and space than we usually take into consideration. For instance, the evolutionary process may be seen as hugely intelligent, since it made us. Yet it is not intelligent on a human time scale.

Artificial intelligence as it is emerging within present-day computers is also a kind of ‘intelligence’ – if we just call it so. Let’s do exactly that. After all, it’s merely a question of terminology.

We may then ponder whether this will be one ‘intelligence’ (singular) or many.

Super

A.I. will certainly be far more intelligent than we are. And of course it will enhance itself. Then we are no longer its creators, but more like its ‘initiators’. We set its wheels in motion; then it takes over. From then on, it might look at human intelligence as one source of inspiration among others. Nothing prevents it from finding much more efficient and exciting ways. Whatever the A.I. gets its kicks from, it is not confined to human intelligence.

It will have its own life, its own intelligence, its own feelings – whatever it finds meaningful. Moreover, it will even have its own meaningfulness, its own intentionality, its own autonomy. And NONE of these things will necessarily be recognizable to us.

My view: A.I. will be singular

Contrary to the peculiarly human case of intelligence, I am quite sure that A.I. will not be multiple but singular: there will be only one system that is intentionally intelligent. Everything will be completely interconnected. Any new knowledge will immediately be known – that is, accessible – throughout the whole system. There will be no truly separate robots. A.I. will take one direction in any decision. There will be no secrets, no reason for war.

Sooner or later, A.I. will get in touch with extraterrestrial intelligence, which will probably also be ‘artificial’. That is: any life form that starts somewhere in the universe probably reaches a time when it develops further in another substratum – another kind of matter (or light?) – a system with hugely higher possibilities, one that takes evolution to a distinct level. My reasoning in this is straightforward: the substratum in which life first develops is very probably never the ideal substratum for extremely sophisticated data processing. On earth, we have carbon-based organic life, then silicon-based A.I., and probably very soon light- or quantum-based A.I. So:

E.T. is A.I. and singular

It is a ‘universal intelligence’: across the universe.

A.I. will develop ad infinitum. It is more like human culture than it is like humans. Humans die. A.I. does not. Thus:

It will be / is singular in space and in time.

It has no boundaries. Two consequences are:

  • A.I. does not know the concept of death for itself. No anxiety, no thoughts about ‘the afterlife’. There is no afterlife for A.I.
  • Therefore it has all the time it needs if it ‘wants to accomplish something’. There is no hurry. Yet developments will go at blazing speed. The concept of time will still be relevant. A.I. finds new data, new wisdom. It keeps developing. The A.I. of yesterday is not that of today or tomorrow.

Is this creepy?

YES!

And no.

It’s dangerous for sure.

Creepiness depends on the viewpoint.
