What is Intelligence?

September 21, 2023 | Artificial Intelligence, Cognitive Insights

What exactly is intelligence when not restricting it to the human case? Or, better asked – since ‘intelligence’ is whatever we choose to call so – how can we characterize something as ‘intelligence’ in a generally recognizable way?

In human (and animal, pre-human) evolution, intelligence has appeared in specific circumstances ― making human intelligence inextricably emotional and social. Thus, in ‘About Intelligence,’ I placed intelligence within a progression from information to consciousness, as worked out in The Journey Towards Compassionate A.I. Now, we can denote somewhat more precisely what intelligence generally means.

Two intelligences? Comprehension vs. competence

Two examples of competence (can-do) without self-comprehension (can-explain) are natural evolution and a pocket calculator. Both have done incredible things that may well seem pretty intelligent, but don’t ask them how they did it. Especially don’t ask them to point to anything internal that we would recognize as conceptual reasoning beyond the lowest level.

It seems logical to confine intelligence to conceptual comprehension. A merely competent system may not, by itself, comprehend why it does something. For instance, humans generally comprehend their motivations far less than they think ― concerning both themselves and others.

On the other hand, a merely competent system may act as if it’s self-comprehending. If it continually does so, in a way, it implicitly comprehends. One way or another, the information is actively present inside the system. With present-day LLM systems, we have good examples of this.

Brute force?

Competence without comprehension makes one think of brute-force processing. This may make competence look intelligent from the outside, even without comprehension on the inside. Yet, on closer inspection, brute force is generally not enough for something to be seen as ‘intelligent’ by most people.

The question is then not whether or not the system is intelligent but whether or not we call it so.

In the end, people naturally want to see some ‘magic’ in intelligence.

Maybe that’s because we like to see ourselves as the intelligent pinnacle of evolution? But isn’t that just a lack of humility?

Thus, when we understand something conceptually/mechanically, we don’t see it as intelligent (anymore). The magic is gone ― only brute force remains, as in a chess computer beating any human champion.

Conceptualization can at least bring competence to a different level and broaden it through conceptual, analogical thinking. It brings a new level of active information integration.

On the move

Intelligence is not static, nor only reactive. It is active, either continuously or at least through regularly self-initiated moves. Something that is only reactive is a mechanism that can be called ‘intelligent’ for commercial purposes, but not genuinely. True intelligence is – one way or another – self-driven and thus ‘active.’ In short, true intelligence is active integrated information.

For something to be intelligent, it doesn’t matter what exactly drives the activity, as long as it comes from the inside out. So:

  • In the human case, the activity comes from neurons being alive, thus also neuronal patterns being ‘alive.’
  • In an artificial system, being active can be realized in many different ways – there is no boundary to the possibilities. Abstractly seen, any pattern can, upon recognition and completion (PRC acting as prediction), fire a rule or other mechanism that starts a new move in a specific direction (see the sketch below).
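As a rough illustration only, here is a minimal Python sketch of that abstract mechanism: a partial pattern is recognized, its completion acts as a prediction, and the completion fires a rule that self-initiates a new move. Everything in it (the Pattern/PRCAgent names, the toy ‘weather’ pattern) is an illustrative assumption, not something taken from the text above.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class Pattern:
    prefix: Tuple[str, ...]        # the fragment that can be recognized
    completion: str                # the predicted continuation (completion as prediction)
    rule: Callable[[], str]        # the move fired when the pattern completes

class PRCAgent:
    """Toy agent that self-initiates a move once a partial pattern is recognized."""

    def __init__(self, patterns):
        self.patterns = patterns
        self.buffer = []           # recently observed events

    def observe(self, event: str) -> Optional[str]:
        """Record an event; if a known prefix matches, complete it and fire its rule."""
        self.buffer.append(event)
        for p in self.patterns:
            n = len(p.prefix)
            if tuple(self.buffer[-n:]) == p.prefix:
                return f"predicted '{p.completion}' -> {p.rule()}"
        return None                # nothing recognized: no self-initiated move

# Usage: the agent acts on its own prediction before the environment supplies 'rain'.
agent = PRCAgent([
    Pattern(prefix=("dark", "clouds"), completion="rain",
            rule=lambda: "take an umbrella"),
])
print(agent.observe("dark"))       # None – not enough pattern yet
print(agent.observe("clouds"))     # predicted 'rain' -> take an umbrella
```

The point of the sketch is merely the direction of causation: the move starts from inside the system, triggered by its own pattern completion, rather than being a direct reaction to a complete external stimulus.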

I hope this clarifies that human intelligence is just one instance of a more abstract concept.

This kind is dear to us, and rightly so!

However, when developing A.I., we should never forget that there are many other possible kinds we may come to see as such.

Is this humbling to you?
