A.I. Is in the Patterns

March 8, 2019 Artificial Intelligence

And so is ‘intelligence’ in general. At least what we humanly call ‘intelligence’, mainly because we ourselves are so good at pattern-processing… But, why not?

A.I. is not (exclusively) in the serial look-up

Neither does our human brain function this way [see: “Human Brain: Giant Pattern Recognizer”].

A machine that performs look-ups and nothing but look-ups, even if very fast and upon an immense amount of data, is generally not looked upon as ‘smart.’ Most people agree that it’s, actually, a dumb machine. I have one under my fingertips at this moment.

Sorry, machine… You are not deemed to be ‘intelligent.’ Neither is the complete Internet.

Flash backward

Put this same computer in the Middle Ages, and most people would call it very intelligent. To be able to calculate was even seen as the pinnacle of intelligence. Well, my little machine can calculate 10⁶ times faster than the most intelligent human calculator of those times. So, intelligent?

Not at all.

Intelligence encompasses the bringing together of data (forming ‘information,’ then ‘knowledge’). To bring data-elements together, you need to search for them. Intelligence = search + further processing of what has been found.

So, if intelligence is not in the (broadly speaking) serial search

then it’s in the parallel search. In other words: pattern processing. Bringing many elements together, in parallel, forming patterns, eventually engenders intelligence. Note that in the parallel – intelligent – case, the search and the further processing happen largely simultaneously. This makes human memory also an intelligent process.
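To make the contrast concrete, here is a minimal Python sketch (the names and data are hypothetical, purely illustrative) of a serial look-up versus a vectorized, ‘parallel’ pattern match, in which the searching and the further processing are one and the same operation.

```python
import numpy as np

# Serial look-up: the answer is either there or it isn't.
facts = {"capital_of_france": "Paris", "boiling_point_c": 100}
print(facts.get("capital_of_france"))  # exact hit or nothing

# Parallel pattern match: every stored pattern is compared at once;
# the 'search' and the 'further processing' are the same operation.
stored = np.array([[1, 0, 1, 1],   # pattern A
                   [0, 1, 1, 0],   # pattern B
                   [1, 1, 0, 1]])  # pattern C
query = np.array([1, 0, 1, 0])    # partial / noisy input

# Cosine similarity against all stored patterns simultaneously.
scores = stored @ query / (np.linalg.norm(stored, axis=1) * np.linalg.norm(query))
print("best-matching pattern:", int(scores.argmax()), "score:", round(float(scores.max()), 2))
```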

Two kinds of parallel processing

If each of the elements is conceptually definable, we speak of conceptual processing.

If the elements are not readily definable in a conceptual format, at least not at a relevant level, we speak of subconceptual processing. In computer terms: neural networks or ‘connectionism.’

The human touch

When looking at the behavior of the latter, several quite astonishing parallels can be drawn with human mental performance: graceful degradation, multiple soft-constraint satisfaction, automatic generalization, etc. These and other characteristics of the human mind make it very plausible that the human brain can be approximated as a network of neur(on)al networks. What we call ‘concepts’ in the human mind – in contrast to a Platonic heaven [see: “About Concepts”] – are ‘strong patterns.’ Otherwise, they are ‘weak patterns’ – say, something like intuition.
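As a toy illustration of graceful degradation – a hypothetical sketch, not a model of the brain – a small Hopfield-style network still settles back onto a stored pattern when given a corrupted version of it:

```python
import numpy as np

# Two bipolar (+1/-1) patterns stored in a Hopfield-style weight matrix.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

# Corrupt the first pattern (flip two units) to simulate noisy input.
noisy = patterns[0].copy()
noisy[[1, 4]] *= -1

# A few update steps: the network settles back onto the nearest stored
# pattern; the degradation is graceful rather than all-or-nothing.
state = noisy.astype(float)
for _ in range(5):
    h = W @ state
    state = np.where(h == 0, state, np.sign(h))

print("recovered pattern 0:", np.array_equal(state, patterns[0]))
```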

In both cases, what we call ‘intelligence’ lies in the networks and, therefore, in the patterns.

Right? Well, one more thing: hybridity

Patterns are a necessary feature if we are to speak of intelligence. However, any intelligent substrate will look for pragmatism: what works best in which combination. A digital computer is top in serial search. This need not be discarded in A.I. but used as efficiently as possible. As such, it forms part of the whole.
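One possible sketch of such hybridity (the names and data are hypothetical, for illustration only): a fast serial index first narrows the candidates; a pattern-based similarity then decides among them.

```python
import numpy as np

# Serial part: an exact, hash-based index narrows the search space fast.
index = {"fruit": [0, 2], "tool": [1]}        # keyword -> candidate item ids
candidates = index.get("fruit", [])

# Parallel part: pattern (feature-vector) similarity ranks those candidates.
features = np.array([[0.9, 0.1, 0.8],   # item 0
                     [0.1, 0.9, 0.2],   # item 1
                     [0.8, 0.2, 0.7]])  # item 2
query = np.array([0.85, 0.15, 0.75])

scores = features[candidates] @ query
best = candidates[int(scores.argmax())]
print("best candidate:", best)
```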

‘Intelligence’ is a characteristic of the whole.

As in the human brain, we cannot delve into computerized A.I. and point out where the ‘true intelligence’ lies at any level lower than the whole itself. It doesn’t lie in specific techniques. It doesn’t lie in one part or the other. People who think so are, in an A.I. way, still subject to the ‘homunculus fallacy.’ [see: “Is This Me, or Is It My Brain?”]

‘Intelligence’ is not a characteristic of one brainy part, nor of any computer or software part.

Can we then talk of A.I.?

Yes. It’s in the patterns… as used by a complete system.

 

 
