The Quest for Abstract Patterns

December 11, 2023 · Artificial Intelligence, Cognitive Insights

This is about creation.

The creative process

Some see three levels of creativity: interpolation, extrapolation, and the invention of something genuinely new, out of the box. This progression of levels moves from the domain of the known toward the domain of the not-yet-known.

One can also see these levels as the possible results of an increasingly fine discernment of abstract patterns, enabling one to move from interpolation to extrapolation to the genuinely new. The most original level is the source of a new direction toward what no (hu)man has thought before.
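As a toy numerical sketch, with made-up data and a simple quadratic standing in for the 'abstracted pattern': interpolation stays within the range of what is known, while extrapolation ventures beyond it and leans entirely on the abstraction being the right one.

    import numpy as np

    # Known observations of some phenomenon (hypothetical data).
    x_known = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y_known = x_known ** 2

    # Abstract a pattern from the observations: fit a degree-2 polynomial.
    pattern = np.polynomial.Polynomial.fit(x_known, y_known, deg=2)

    # Interpolation: predicting inside the known range is relatively safe.
    print("interpolated y(2.5):", pattern(2.5))    # about 6.25

    # Extrapolation: predicting far outside the known range works only if
    # the abstracted pattern really holds out there.
    print("extrapolated y(10):", pattern(10.0))    # about 100, if it holds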

The power of abstraction lies in the patterns.

Abstraction brings together patterns from a lower level of abstraction in ways that are useful for making new inferences at a higher level. This enriched higher level can then be used to jump forward at the lower level.
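A minimal sketch of this two-way traffic, using a crude two-group clustering as a stand-in for the discernment of abstract patterns (the data and the grouping method are illustrative assumptions, nothing more): many lower-level observations are condensed into a few higher-level prototypes, and those prototypes then let a new lower-level observation be placed in a single jump.

    import numpy as np

    rng = np.random.default_rng(0)

    # Lower level: many concrete observations around two hidden groups.
    low_level = np.concatenate([
        rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2)),
        rng.normal(loc=[3.0, 3.0], scale=0.3, size=(50, 2)),
    ])

    # Higher level: condense the observations into k prototypes
    # (a bare-bones k-means step as a stand-in for 'finding the pattern').
    def abstract(points, k=2, iters=10):
        prototypes = points[rng.choice(len(points), k, replace=False)]
        for _ in range(iters):
            distances = np.linalg.norm(points[:, None] - prototypes, axis=2)
            labels = distances.argmin(axis=1)
            prototypes = np.array(
                [points[labels == j].mean(axis=0) for j in range(k)])
        return prototypes

    prototypes = abstract(low_level)

    # The enriched higher level now allows a jump forward at the lower level:
    # a new observation is understood at once via its nearest prototype.
    new_point = np.array([2.8, 3.2])
    nearest = np.linalg.norm(prototypes - new_point, axis=1).argmin()
    print("new observation matches abstract pattern", nearest, prototypes[nearest])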

Of course, too much jumping leads to biased thinking, which may be mitigated by common sense or by just accepting small biases as part of the game. The latter strategy is mainly nature’s trick. An engineer would be ashamed, but it’s powerful enough for nature’s cause.

Abstraction is itself a question of mental-neuronal patterns (MNPs) in the mind/brain.

It is what MNPs naturally do if they get the chance and are not inundated by mental clutter. For instance, children's play lets them learn new patterns through new information and new combinations. Finding new similarities/abstractions gives the child pleasure and motivates more of the same, especially if the abstractions are meaningful in one way or another.

If you can harness this, you can go far. All progress in intelligence can be seen this way: one more abstraction.

Even conceptual thinking itself – the use of concepts – is basically a question of abstraction at a low level.

In this way, the conceptual level is the level of first and subsequent abstractions, and abstraction is what leads to ever more intelligence.

This makes less relevant the question of where ‘real’ intelligence starts in contrast to a mere simulation of it. In the end, there are only continua, and human intelligence lies on them too.

Some other extraordinary examples of the same principle

Possibly amazing to you:

  • Matter is the product of the emergence of abstract patterns behind quantum chaos.
  • Causality is the product of the emergence of abstract patterns in space-time.
  • Life is the product of the emergence of possible abstract (and self-replicating) patterns within an entropic universe.

This shows how universal the principle is; it is not only about intelligence or humans. Abstract patterns can emerge anywhere.

In A.I. as well, the quest is for abstracting patterns.

In transformer technology, at present, patterns emerge almost entirely implicitly. This goes quite far, but with a glass ceiling. To break through this ceiling, ways must be found to transform patterns into more abstract ones. There are many ways to accomplish this, whether with largely the same technology or with a different one.
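A rough sketch of what 'implicitly' means here, with toy dimensions and random matrices standing in for trained weights: in a single self-attention head, whatever patterns have been picked up live diffusely in the weight matrices and the resulting attention scores, not in any explicit, more abstract representation.

    import numpy as np

    rng = np.random.default_rng(1)

    seq_len, d_model = 4, 8                  # a toy sequence of 4 token vectors
    x = rng.normal(size=(seq_len, d_model))

    # In a trained transformer these would be learned; here they are random
    # stand-ins. Any 'pattern' exists only implicitly in these numbers.
    W_q = rng.normal(size=(d_model, d_model))
    W_k = rng.normal(size=(d_model, d_model))
    W_v = rng.normal(size=(d_model, d_model))

    def softmax(a, axis=-1):
        a = a - a.max(axis=axis, keepdims=True)
        e = np.exp(a)
        return e / e.sum(axis=axis, keepdims=True)

    # One self-attention head: each token mixes information from all tokens,
    # weighted by the similarity of its query to their keys.
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    attention = softmax(q @ k.T / np.sqrt(d_model))
    output = attention @ v

    print(attention.round(2))   # the 'pattern' is just a table of soft weights,
                                # never an explicit higher-level abstraction

Transforming such soft weight tables into explicitly represented, reusable structures is one possible reading of the 'more abstract' direction mentioned above.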

This may make possible a leap much bigger than the one we are presently witnessing.

Since the whole world of A.I. research is on this quest, it will happen soon.

Are we ready?

Are we worthy?


