About ‘Intelligence’ (in A.I.)

May 2, 2021 · Artificial Intelligence, Cognitive Insights

On the brink of a new intelligence, it’s crucial to know what we’re heading towards. Seriously trying to clarify the concept may help.

Many intelligences

Knowledgeable or not, many people try to answer the question of what exactly ‘intelligence’ is. Needless to say, many different answers pop up. This should not deter anyone from trying to give one. It is a very meaningful question indeed, probably a near-future-shaking one.

I have the advantage of having thought about this in depth since at least 1997, when I wrote a Master’s thesis about it (cognitive science and A.I.).

The times, they are a-changing.

Many things have changed since then, among them the huge and growing commercial success of A.I. technologies, mainly in deep learning. I have the privilege of having witnessed an A.I. winter, a summer, and the recent apprehension of a new winter. Technologies are changing substantially, but not as much as one might think.

Meanwhile, the concept of intelligence is also deepening. Most of this deepening happens outside of A.I., in neurocognitive science.

Therefore, my admonition: don’t simply trust A.I.-technology experts to provide an in-depth answer to the question of intelligence, artificial or not. They may be the first to give fatally wrong answers. An engineering mind is not what we need foremost here.

Part of a journey

Concepts are more interesting than terms. The concept of intelligence is fruitfully seen as distinct from information and consciousness. There is no intelligence without information. There is no consciousness without intelligence. One can discern a progression that is part of a broader conceptual and developmental journey. This is the subject of my book. [see: “The Journey Towards Compassionate AI”]

Since consciousness is the next step of the journey, we’re in challenging territory, indeed. That’s why we need to think about it, not turn away from it. [see: “Turtle Vagaries”]

Making distinctions

Now we no longer have a free-floating ‘intelligence’ to conceptualize. This enables us, at least, to constrain it better.

Apart from any other distinction, size is always important ― crucial, even. But let’s leave this aside for now.

When does information become intelligent?

A book’s data become information through an intelligent system (potentially reading the book), but the book itself doesn’t contain intelligence. However, an e-book can be made gradually more ‘intelligent.’ There is a continuum, no strict dividing line. Important in this continuum is the degree to which the information becomes active. This can happen in many ways. The more dynamically you can ask the book for its information and get a proper answer, the more intelligence you may ascribe to the book. The same goes for a human being. Thus, intelligence also lies in (self-)explainability. [see: “Explainability in A.I., Boon or Bust?”]
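
To make the continuum concrete, here is a minimal sketch (my illustration, not from the article; all names are hypothetical) of how the same text can be made incrementally more ‘active’ by giving it a query interface and a rudimentary form of self-explanation.

```python
# A minimal sketch (hypothetical names) contrasting passive information
# with a slightly more 'active' version of the same content.

class StaticBook:
    """Passive information: the text just sits there."""
    def __init__(self, text: str):
        self.text = text


class QueryableBook(StaticBook):
    """One step along the continuum: the same text, but it can be asked
    for its information and can 'explain' where an answer came from."""

    def ask(self, keyword: str) -> str:
        # Naive retrieval: return the first sentence containing the keyword.
        for sentence in self.text.split("."):
            if keyword.lower() in sentence.lower():
                return sentence.strip() + "."
        return "No answer found."

    def explain(self, keyword: str) -> str:
        # Rudimentary (self-)explainability: report how the answer was found.
        return f"Matched '{keyword}' by literal search over my own text."


book = QueryableBook("Paris is the capital of France. Rome is the capital of Italy.")
print(book.ask("France"))      # -> "Paris is the capital of France."
print(book.explain("France"))
```

The point is not that such a book is intelligent, but that ‘activity’ and (self-)explainability come in degrees ― exactly the continuum described above.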

In A.I., the striving is towards more active, ‘autonomous’ systems: systems that can enter new domains and provide answers to new questions. An autonomous weapon, for instance, provides new answers to new enemy challenges. Dangerous enough?

Being active distinguishes intelligence most from information.

Present-day A.I. is not to be seen as intelligent (therefore also not as artificially intelligent). However, elements such as autonomy and (self-)explainability make it more so.

When does intelligence become conscious?

Given the elements above, one can envision a super-intelligent system without consciousness ― even though the more active it becomes, the more difficult it may be to keep it from evolving spontaneously towards consciousness.

In my view, the most distinguishing factor here is volition. Apart from philosophical questions about the existence of free will [see: “In Defense of Free Will”], one can see volition at the origin of life. A Boeing 747 doesn’t have any volition; a bacterium does, in a small amount.

That makes the question of consciousness the same as the question of life. A bacterium doesn’t have consciousness, but it gets it if you let it evolve towards a very intelligent organism. In that case, consciousness is not an additional property. The very intelligent organism gets it for free. In a way, it’s already there from the start ― in potential.

Of course, I’m talking about us.

In a human, take away volition, and you take away consciousness. That is also part of how someone is diagnosed as brain-dead.

We may think that consciousness is the most mysterious thing. As you may see now, it’s not. It’s ‘mysterious’ because it’s already there from the start ― in potential ― while one may be looking for something additional. The usual human intelligence is a conscious kind of intelligence.

In A.I., adding volition is simple. As said, size also matters, and that is more difficult. However, we should collectively grasp that there is no thick wall between intelligence and consciousness. When reaching real Artificial Intelligence, the step into conscious terrain is straightforward. [see: “Why Conscious A.I. is Near”]
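
As a purely mechanical illustration of the claim that adding volition is simple (my sketch, not the author’s method; whether this trivial step amounts to real volition is precisely the philosophical question), the only difference between the two agents below is where the goal comes from.

```python
import random

# Hypothetical sketch: 'volition' in the most mechanical sense is
# nothing more than generating one's own goal instead of receiving it.

def externally_driven_agent(goal: str) -> str:
    """Acts only on a goal handed to it from outside ― no volition."""
    return f"pursuing externally given goal: {goal}"

def self_driven_agent() -> str:
    """Generates its own goal ― 'volition' in a trivially mechanical sense."""
    goal = random.choice(["explore", "conserve energy", "seek novelty"])
    return f"pursuing self-generated goal: {goal}"

print(externally_driven_agent("sort this list"))
print(self_driven_agent())
```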

So, do you think it’s important enough to get a good grip on intelligence now?
