A.I. Is in the Patterns

March 8, 2019 · Artificial Intelligence

And so is ‘intelligence’ in general. At least, it is what we humans call ‘intelligence,’ mainly because we ourselves are so good at pattern processing… But why not?

A.I. is not (exclusively) in the serial look-up

Neither does our human brain function this way [see: “Human Brain: Giant Pattern Recognizer”].

A machine that performs look-ups and nothing but look-ups, even very fast ones on an immense amount of data, is generally not regarded as ‘smart.’ Most people agree that it is, actually, a dumb machine. I have one under my fingertips at this moment.

Sorry, machine… You are not deemed to be ‘intelligent.’ Neither is the complete Internet.
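To make this concrete, here is a minimal sketch of a pure look-up machine (the stored facts and queries are invented for illustration). It answers instantly when a query matches a stored key to the letter, and fails completely on the slightest deviation:

```python
# A pure look-up machine: very fast, but only for exact matches.
facts = {
    "capital of france": "Paris",
    "capital of spain": "Madrid",
}

print(facts.get("capital of france"))  # → Paris
print(facts.get("capitol of france"))  # → None: one letter off, total failure
```

However immense the dictionary, nothing here deserves the name ‘intelligence.’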

Flash backward

Put this same computer in the Middle Ages, and most people would call it very intelligent. To be able to calculate was even seen as the pinnacle of intelligence. Well, my little machine can calculate a million (10⁶) times faster than the most intelligent human calculator of those times. So, intelligent?

Not at all.

Intelligence encompasses the bringing together of data (forming ‘information,’ then ‘knowledge’). To bring data-elements together, you need to search for them. Intelligence = search + further processing of what has been found.

So, if intelligence is not in the (broadly speaking) serial search

then it’s in the parallel search. In other words: pattern processing. Bringing many elements together, in parallel, forming patterns, eventually engenders intelligence. Note that in the parallel – intelligent – case, the search and the further processing happen largely simultaneously. This makes human memory, too, an intelligent process.

Two kinds of parallel processing

If each of the elements is conceptually definable, we speak of conceptual processing.

If the elements are not readily definable in a conceptual format, at least not at a relevant level, we speak of subconceptual processing. In computer terms: neural networks or ‘connectionism.’
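As a sketch of what such subconceptual processing can look like in code, here is a classic Hopfield-style network (chosen for illustration; the patterns are invented): a few patterns are stored in the connection weights, and recall is a parallel settling process in which search and further processing happen at once.

```python
import numpy as np

# Two stored binary patterns (+1/-1): tiny 8-unit 'memories'.
patterns = np.array([
    [1, 1, 1, 1, -1, -1, -1, -1],
    [1, -1, 1, -1, 1, -1, 1, -1],
])

# Hebbian learning: each weight reflects co-activation across the patterns.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

def recall(state, steps=5):
    """Let all units update in parallel until the state settles."""
    state = np.array(state, dtype=float)
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state.astype(int)

# A corrupted cue (two flipped units) still settles onto the first pattern:
noisy = [1, 1, -1, 1, -1, -1, 1, -1]
print(recall(noisy))
```

Feeding in a partly wrong cue still retrieves the nearest stored pattern: the network degrades gracefully instead of failing outright, as an exact look-up would.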

The human touch

When looking at the behavior of the latter, several quite astonishing parallels with human mental performance appear: graceful degradation, satisfaction of multiple soft constraints, automatic generalization, etc. These and other characteristics of the human mind make very plausible the hypothesis that the human brain can be approximated as a network of neur(on)al networks. What we call ‘concepts’ in the human mind – in contrast to a Platonic heaven [see: “About Concepts”] – are ‘strong patterns.’ Patterns that don’t reach that strength are ‘weak patterns’ – say, something like intuition.

In both cases, what we call ‘intelligence’ lies in the networks, therefore, patterns.

Right? Well, one more thing: hybridity

Patterns are a necessary feature to speak of intelligence. However, any intelligent substrate will look for pragmatism: what works best in which combination. A digital computer excels at serial search. This need not be discarded in A.I. but used as efficiently as possible. As such, it forms part of the whole.
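As a toy sketch of such hybridity (data and the helper name are invented for illustration): a fast, exact look-up is tried first, and a crude pattern-based fallback catches the near-misses the look-up cannot handle.

```python
import difflib

facts = {
    "capital of france": "Paris",
    "capital of spain": "Madrid",
}

def hybrid_answer(query):
    """Serial exact look-up first; pattern-like fuzzy matching as fallback."""
    if query in facts:  # the fast, exact, 'dumb' path
        return facts[query]
    # Fallback: the closest stored key by string similarity.
    close = difflib.get_close_matches(query, list(facts), n=1, cutoff=0.6)
    return facts[close[0]] if close else None

print(hybrid_answer("capital of spain"))   # exact hit: Madrid
print(hybrid_answer("capitol of france"))  # recovered via similarity: Paris
```

Neither path alone is ‘intelligent’; the pragmatic combination is what starts to deserve the word.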

‘Intelligence’ is a characteristic of the whole.

As in the human brain, we cannot delve into computerized A.I. and point out where the ‘true intelligence’ lies at any level lower than the whole itself. It doesn’t lie in specific techniques. It doesn’t lie in one part or the other. People who think so are, in an A.I. way, still subject to the ‘homunculus fallacy.’ [see: “Is This Me, or Is It My Brain?”]

‘Intelligence’ is not a characteristic of one brainy part, nor of any computer or software part.

Can we then talk of A.I.?

Yes. It’s in the patterns… as used by a complete system.
