What is Intelligence?

September 21, 2023 – Artificial Intelligence, Cognitive Insights

What exactly is intelligence when we do not restrict it to the human case? Or, better asked – since 'intelligence' is whatever we choose to call so – how can we characterize something as 'intelligence' in a generally recognizable way?

In human (and animal, pre-human) evolution, intelligence appeared in specific circumstances ― making human intelligence inextricably emotional and social. Thus, in 'About Intelligence,' I put intelligence within a progression between information and consciousness, as worked out in The Journey Towards Compassionate A.I. Now, we can denote somewhat more precisely what it generally means.

Two intelligences? Comprehension vs. competence

Two examples of competence (can-do) without self-comprehension (can-explain) are natural evolution and a pocket calculator. Both have done incredible things that may well seem pretty intelligent, but don't ask them how they did it. Especially, don't ask them to point to anything internal that we would recognize as conceptual reasoning beyond the lowest level.

It is logical to confine intelligence to conceptual comprehension. A merely competent system may not by itself comprehend why it does something. For instance, humans generally have significantly less comprehension of their motivations than they think ― concerning themselves and others.

On the other hand, a merely competent system may act as if it’s self-comprehending. If it continually does so, in a way, it implicitly comprehends. One way or another, the information is actively present inside the system. With present-day LLM systems, we have good examples of this.

Brute force?

Competence without comprehension makes one think of brute-force processing. This may make competence look intelligent from the outside, even without comprehension on the inside. Yet, on closer inspection, brute force is generally not enough for something to be seen as 'intelligent' by most people.

The question is then not whether the system is intelligent, but whether we call it so.

In the end, people naturally want to see some ‘magic’ in intelligence.

Maybe that’s because we like to see ourselves as the intelligent pinnacle of evolution? But isn’t that just a lack of humility?

Thus, when we understand something conceptually/mechanically, we don't see it as intelligent (anymore). The magic has gone ― only brute force remains, as in a computer chess program beating any human champion.

Conceptualization can at least bring competence to a different level and broaden it through conceptual, analogical thinking. It brings a new level of active information integration.

On the move

Intelligence is not static, nor only reactive. It is active, either continuously or at least in regularly self-initiated ways. Something that is only reactive is a mechanism; it can be called 'intelligent' for commercial purposes, but not genuinely. True intelligence is – one way or another – self-driven, thus 'active.' In short, true intelligence is active integrated information.

To be intelligent, it doesn’t matter what exactly drives the activity as long as it comes from the inside out. So:

  • In the human case, the activity comes from neurons being alive, thus also neuronal patterns being ‘alive.’
  • In an artificial system, being active can be realized in many different ways; there is no boundary to the possibilities. Abstractly seen, any pattern can, at recognition and completion (PRC acting as prediction), fire a rule or other mechanism that starts a new move in a specific direction (see the sketch below).
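
Purely as an illustration of that abstract description, here is a minimal Python sketch of such a self-initiated PRC mechanism. Everything in it – the Pattern class, the prefix matching, the two-observation threshold – is a hypothetical invention for this example, not part of the original text: an agent keeps taking in observations and, as soon as a stored pattern is recognized, it completes the rest (prediction) and fires the associated rule, which starts a new move.

```python
# A minimal, purely illustrative sketch of an 'active' PRC loop.
# All names here (Pattern, Agent, recognize_and_complete) are hypothetical,
# invented for this example; they do not describe any actual system.

from dataclasses import dataclass
from typing import Callable, Optional, Tuple


@dataclass
class Pattern:
    """A stored pattern: a known sequence plus the rule it fires."""
    sequence: Tuple[str, ...]
    action: Callable[[Tuple[str, ...]], str]


def recognize_and_complete(partial, patterns):
    """If the partial input matches a stored pattern's prefix, return that
    pattern and its predicted completion (PRC acting as prediction)."""
    n = len(partial)
    for p in patterns:
        if 2 <= n <= len(p.sequence) and tuple(partial) == p.sequence[:n]:
            return p, p.sequence[n:]
    return None


class Agent:
    """A toy 'active' system: it keeps observing and self-initiates a move
    whenever a pattern is recognized and completed."""

    def __init__(self, patterns):
        self.patterns = list(patterns)
        self.buffer = []

    def step(self, observation) -> Optional[str]:
        self.buffer.append(observation)
        match = recognize_and_complete(self.buffer, self.patterns)
        if match is None:
            return None                      # keep gathering; stay active
        pattern, completion = match
        self.buffer.clear()                  # the new move starts here
        return pattern.action(completion)    # fire the associated rule


if __name__ == "__main__":
    patterns = [
        Pattern(("dark", "clouds", "wind"),
                lambda rest: f"storm predicted; remaining cue(s): {rest}"),
        Pattern(("dark", "quiet"),
                lambda rest: "nightfall predicted; switch on lights"),
    ]
    agent = Agent(patterns)
    for obs in ("dark", "clouds"):
        print(obs, "->", agent.step(obs))
```

The specific threshold and matching rule are arbitrary design choices; the only point is that the loop is driven from the inside rather than by a single external request.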

I hope this clarifies that human intelligence is just one instance of a more abstract concept.

This kind is dear to us, and rightly so!

However, when developing A.I., we should never forget there are many other possible kinds that we may come to see as intelligent.

Is this humbling to you?
