A.I. Is in the Patterns

March 8, 2019 Artificial Intelligence

And so is ‘intelligence’ in general. At least what we humanly call ‘intelligence’, mainly because we ourselves are so good at pattern processing… But why not?

A.I. is not (exclusively) in the serial look-up

Neither does our human brain function this way [see: “Human Brain: Giant Pattern Recognizer”].

A machine that performs look-ups and nothing but look-ups, even if very fast and over an immense amount of data, is generally not looked upon as ‘smart.’ Most people agree that it is actually a dumb machine. I have one under my fingertips at this moment.

Sorry, machine… You are not deemed to be ‘intelligent.’ Neither is the complete Internet.

Flash backward

Put this same computer in the Middle Ages, and most people would call it very intelligent. To be able to calculate was even seen as the pinnacle of intelligence. Well, my little machine can calculate 10⁶ times faster than the most intelligent human calculator of those times. So, intelligent?

Not at all.

Intelligence encompasses the bringing together of data (forming ‘information,’ then ‘knowledge’). To bring data-elements together, you need to search for them. Intelligence = search + further processing of what has been found.

So, if intelligence is not in the (broadly speaking) serial search

then it’s in the parallel search. In other words: pattern processing. Bringing many elements together, in parallel, forming patterns, eventually engenders intelligence. Note that in the parallel – intelligent – case, the search and the further processing happen largely simultaneously. This also makes human memory an intelligent process.
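To make the contrast concrete, here is a minimal Python sketch. It is my own illustration, not part of the original article; the names and toy data are invented for the example. An exact, serial look-up fails on a slightly wrong key, while a comparison against all stored patterns at once still finds the closest match.

import numpy as np

# Serial look-up: either the exact key is present, or nothing is found.
lookup = {"cat": "animal", "car": "vehicle"}
print(lookup.get("cta"))          # None: one typo and the look-up fails

# Pattern processing: the cue is compared with all stored patterns
# at once (in parallel), and the closest one wins, even for a noisy cue.
patterns = {
    "cat": np.array([1, 0, 1, 1, 0]),
    "car": np.array([1, 0, 0, 1, 1]),
}
noisy_cue = np.array([1, 0, 1, 0, 0])                  # degraded 'cat'
scores = {name: vec @ noisy_cue for name, vec in patterns.items()}
print(max(scores, key=scores.get))                     # 'cat'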

Two kinds of parallel processing

If each of the elements is conceptually definable, we speak of conceptual processing.

If the elements are not readily definable in a conceptual format, at least not at a relevant level, we speak of subconceptual processing. In computer terms: neural networks or ‘connectionism.’

The human touch

When looking at the behavior of the latter, several quite astonishing parallels with human mental performance can be drawn: graceful degradation, the satisfaction of multiple soft constraints, automatic generalization, etc. These and other characteristics of the human mind make it very plausible that the human brain can be approximated as a network of neur(on)al networks. What we call ‘concepts’ in the human mind – in contrast to a Platonic heaven [see: “About Concepts”] – are ‘strong patterns.’ Where the patterns are weaker, we get something more like intuition.
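A toy Hopfield-style network (a classic connectionist model) shows one of these characteristics, graceful degradation: a stored pattern is recovered even from a partly corrupted cue. The Python sketch below is my own illustration with invented toy patterns, not code from the article.

import numpy as np

# Two orthogonal patterns, stored 'in parallel' across all connections
# via simple Hebbian learning.
stored = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1, -1, -1],
])
W = sum(np.outer(p, p) for p in stored)
np.fill_diagonal(W, 0)

cue = stored[0].copy()
cue[:2] *= -1                        # corrupt part of the cue
for _ in range(5):                   # let the network settle
    cue = np.sign(W @ cue)

print(np.array_equal(cue, stored[0]))   # True: the pattern is recovered

Damaging a few input bits weakens the match gradually instead of breaking it outright, which is exactly the kind of behavior a pure look-up table cannot show.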

In both cases, what we call ‘intelligence’ lies in the networks and, therefore, in the patterns.

Right? Well, one more thing: hybridity

Patterns are a necessary feature for speaking of intelligence. However, any intelligent substrate will look for pragmatism: what works best in which combination. A digital computer excels at serial search. This should not be discarded in A.I. but used as efficiently as possible. As such, it forms part of the whole.
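A minimal sketch of such a hybrid, again my own toy illustration reusing the names from the earlier snippets: the fast serial look-up answers when an exact key exists, and the parallel pattern match takes over when it doesn’t.

import numpy as np

exact = {"cat": "animal", "car": "vehicle"}           # serial part: exact and fast
prototypes = {
    "cat": np.array([1, 0, 1, 1, 0]),                 # pattern part: approximate
    "car": np.array([1, 0, 0, 1, 1]),                 # but robust to noise
}

def answer(key=None, features=None):
    if key in exact:                                  # try the look-up first
        return exact[key]
    if features is not None:                          # otherwise, match patterns
        best = max(prototypes, key=lambda name: prototypes[name] @ features)
        return exact[best]
    return None

print(answer(key="car"))                              # exact hit: 'vehicle'
print(answer(features=np.array([1, 0, 1, 0, 0])))     # noisy cue: 'animal'

The look-up part stays as fast as ever; the pattern part takes care of what the look-up cannot handle.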

‘Intelligence’ is a characteristic of the whole.

As with the human brain, we cannot delve into the computerized A.I. machinery and point out where the ‘true intelligence’ lies at any level lower than the whole itself. It doesn’t lie in specific techniques. It doesn’t lie in one part or the other. People who think so are, in an A.I. way, still subject to the ‘homunculus fallacy’ [see: “Is This Me, or Is It My Brain?”].

‘Intelligence’ is not a characteristic of one brainy part, nor of any computer or software part.

Can we then talk of A.I.?

Yes. It’s in the patterns… as used by a complete system.