What is Intelligence?

September 21, 2023 · Artificial Intelligence, Cognitive Insights

What exactly is intelligence when we don't restrict it to the human case? Or, better asked – since 'intelligence' is whatever we call by that name – how can we characterize something as intelligent in a generally recognizable way?

In human (and animal, pre-human) evolution, intelligence has appeared in specific circumstances ― making human intelligence inextricably emotional and social. Thus, in About 'Intelligence,' I put intelligence within a progression between information and consciousness, as worked out in The Journey Towards Compassionate A.I. Now, we can denote somewhat more precisely what it means in general.

Two intelligences? Comprehension vs. competence

Two examples of competence (can-do) without self-comprehension (can-explain) are natural evolution and a pocket calculator. Both have done incredible things that may well seem pretty intelligent, but don't ask them how they did it. Especially don't ask them to point to something internal that we would recognize as conceptual reasoning beyond the lowest level.

It is logical to confine intelligence to conceptual comprehension. A merely competent system may not by itself comprehend why it does something. For instance, humans generally have significantly less comprehension of their motivations than they think ― concerning themselves and others.

On the other hand, a merely competent system may act as if it’s self-comprehending. If it continually does so, in a way, it implicitly comprehends. One way or another, the information is actively present inside the system. With present-day LLM systems, we have good examples of this.

Brute force?

Competence without comprehension makes one think of brute-force processing. This may make competence look intelligent from the outside, even without comprehension on the inside. Yet, looking deeper, brute force is generally not enough for most people to see something as 'intelligent.'

The question is then not whether or not the system is intelligent but whether or not we call it so.

In the end, people naturally want to see some ‘magic’ in intelligence.

Maybe that’s because we like to see ourselves as the intelligent pinnacle of evolution? But isn’t that just a lack of humility?

Thus, when we understand something conceptually/mechanically, we don't see it as intelligent (anymore). The magic is gone ― only brute force remains, as in a computer chess master beating any human champion.

Conceptualization at least can bring competence to a different level and broaden it through conceptually analogical thinking. It brings a new level of active information integration.

On the move

Intelligence is not static, nor only reactive. It is active, either continuously or at least through regularly self-initiated moves. Something that is only reactive is a mechanism that can be called 'intelligent' for commercial purposes, but not genuinely. True intelligence is, one way or another, self-driven ― thus, 'active.' In short, true intelligence is active integrated information.

For a system to be intelligent, it doesn't matter what exactly drives the activity, as long as it comes from the inside out. So:

  • In the human case, the activity comes from neurons being alive, thus also neuronal patterns being ‘alive.’
  • In an artificial system, being active can be realized in many different ways ― there is no boundary to the possibilities. Abstractly seen, any pattern can, upon recognition and completion (PRC acting as prediction), fire a rule or other mechanism that starts a new move in a specific direction, as sketched below.
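
To make this concrete, here is a minimal sketch in Python of such a pattern-recognition-and-completion (PRC) loop: partial input completes a stored pattern, the completion fires a rule, and the rule's result feeds back as new input that may complete further patterns ― a move starting from the inside out. All names here (PatternRule, ActiveSystem, run, ...) are hypothetical illustrations, not taken from the text or any existing library.

```python
# Illustrative only: a toy pattern-recognition-and-completion (PRC) loop.
# A pattern recognized above its threshold counts as 'completed' and fires
# a rule; the rule's result feeds back as new observations, so the system
# keeps moving from the inside out until nothing more completes.

from dataclasses import dataclass
from typing import Callable, List, Set


@dataclass
class PatternRule:
    pattern: Set[str]                  # features that define the pattern
    action: Callable[[], Set[str]]     # the 'new move': features it produces
    threshold: float = 0.6             # overlap needed to count as recognized


class ActiveSystem:
    """Once nudged, keeps moving on its own as completed patterns fire rules
    whose results may complete further patterns."""

    def __init__(self, rules: List[PatternRule]):
        self.rules = rules
        self.observed: Set[str] = set()

    def _completed(self) -> List[PatternRule]:
        # A pattern counts as recognized/completed if enough of it is observed.
        return [
            r for r in self.rules
            if len(r.pattern & self.observed) / len(r.pattern) >= r.threshold
        ]

    def run(self, features: Set[str]) -> List[str]:
        self.observed |= features
        trace: List[str] = []
        fired: Set[int] = set()
        while True:
            new_rules = [r for r in self._completed() if id(r) not in fired]
            if not new_rules:
                break                          # nothing left to self-initiate
            for rule in new_rules:
                fired.add(id(rule))
                result = rule.action()         # completion fires the rule
                trace.append("move -> " + ", ".join(sorted(result)))
                self.observed |= result        # the move feeds back as 'perception'
        return trace


if __name__ == "__main__":
    rules = [
        PatternRule({"dark", "cold", "quiet"}, lambda: {"shelter"}),
        PatternRule({"shelter", "cold"}, lambda: {"fire"}),
    ]
    # Two of three features suffice (overlap 0.67 >= 0.6) to start the chain.
    print(ActiveSystem(rules).run({"dark", "cold"}))   # ['move -> shelter', 'move -> fire']
```

Whether the firing comes from living neuronal patterns or from a loop like this one does not change the abstract picture: recognition and completion set the next move going.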

I hope this clarifies that human intelligence is just one kind within a more abstract concept.

This kind is dear to us, and rightly so!

However, when developing A.I., we should never forget there are many possible kinds we may see as such.

Is this humbling to you?
