Intelligence through Consistency

May 18, 2024 Artificial Intelligence, Cognitive Insights

When multiple elements collaborate consistently, they can generate intelligent behavior as an emergent property.

When these elements function within a rational environment, they exhibit rationally intelligent behavior. Consistency is key but must include diversity.

‘Consistent’ does not imply ‘identical.’

When elements are overly similar, intelligence fails to emerge. For instance, the human cerebellum holds over half of all the brain’s neurons, yet little intelligence emerges from it.

In contrast, the cerebrum – the brain’s seat of intelligence – contains many consistent but not identical elements. Their consistency may look somewhat random, even chaotic. Interestingly, a degree of randomness enhances intelligence, a phenomenon well illustrated by nature. Thus, human intelligence thrives amidst a delicate balance of order and chaos.

No individual small element possesses intelligence.

These elements are ultimately just data. This is necessarily the case in all intelligent systems because intelligence is not a (material or immaterial) substance by itself. Intelligence is not magic. It’s a dynamic interplay.

In a consistent relation to each other, data form information. If consistency is maintained at this level, we move towards intelligence.
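By way of illustration only (this toy example is mine, not from the article): in the sketch below, many simple elements – random linear ‘opinions,’ none of them intelligent on its own – are made mutually consistent and then combined. The group’s decision looks considerably more ‘intelligent’ than any single element’s. All names and numbers are arbitrary.

```python
# Illustrative sketch: intelligence-like behavior emerging from many simple,
# consistent-but-not-identical elements, none of which is intelligent alone.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: tell two noisy clusters of 2-D points apart.
n = 400
X = np.vstack([rng.normal(-1.0, 1.2, (n // 2, 2)),
               rng.normal(+1.0, 1.2, (n // 2, 2))])
y = np.array([-1] * (n // 2) + [+1] * (n // 2))

# Each 'element' is a random hyperplane: the same kind of thing,
# yet not identical. Consistent, not equal.
n_elements = 201
W = rng.normal(size=(n_elements, 2))
b = rng.normal(size=n_elements)

votes = np.sign(X @ W.T + b)                   # each element's lone opinion
acc_each = (votes == y[:, None]).mean(axis=0)  # how good is each element alone?

# Make the elements mutually consistent (flip those pointing the wrong way),
# then let the group decide by simple majority vote.
aligned = votes * np.where(acc_each >= 0.5, 1.0, -1.0)
group = np.sign(aligned.sum(axis=1))

print(f"average single element: {(np.abs(acc_each - 0.5) + 0.5).mean():.2f}")
print(f"the group together:     {(group == y).mean():.2f}")
```

On most random seeds, the group clearly outperforms the average single element, even though every element is ‘just data’ plus a trivial rule.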

The key requirement is for these elements to ‘work together.’

This can be achieved in various ways. A comprehensive system might enable communication among them, allowing enough freedom to ‘discover their own paths’ in complex interactions.

Alternatively, the elements can initiate this cooperation themselves.

Neurons in a brain system exhibit a combination of both. At a higher level, they form patterns (and patterns of patterns), which are elements in the broader system of the brain. At a smaller level, we have dendrites and synapses working together consistently.

In summary, multiple levels with diverse elements and consistencies collectively form human intelligence.

On a higher level, humans collectively form societies.

Their consistent behavior enables society to act ‘intelligently.’ At least, viewed from the outside, societal behavior looks intelligent. This collective intelligence emerges from the synergy of individual actions and societal norms. Naturally, we prefer to attribute the intelligence to the elements – ourselves, as individuals.

And that’s OK; I also prefer it this way. Only, seen from the outside, it is less straightforward. This also shows, for instance, why a culture wants to make its elements – the individuals – mutually consistent. We call this ‘peer pressure’ or something similar. The culture’s goal is to attain a higher level of intelligence, thereby gaining a better chance to thrive and survive.

Here too, when the elements are overly similar, the societal intelligence fails.

This principle also holds for A.I.

Attempts to construct intelligence like a Meccano set have not been highly successful.

Conversely, neural network technology has propelled us much further. To the repeated surprise of developers, adding randomness – up to a point – results in more intelligent behavior. In essence, A.I. mirrors the chaotic harmony of human intelligence. As in our case, there seems to be an optimum somewhere between identical and merely consistent.
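One well-known way developers deliberately inject such randomness is dropout (Srivastava et al., 2014): units are randomly silenced during training, so the network must spread its knowledge over many consistent, non-identical units instead of relying on a few. Below is a minimal sketch of the mechanism in plain NumPy; the function name and parameters are my own, not from the article.

```python
# Minimal sketch of 'useful randomness' in neural networks: inverted dropout.
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations: np.ndarray, p_drop: float, training: bool) -> np.ndarray:
    """Randomly silence a fraction p_drop of units while training.

    'Inverted' dropout: the surviving units are scaled up so the expected
    activation stays the same, and inference needs no special handling.
    """
    if not training or p_drop == 0.0:
        return activations
    keep = rng.random(activations.shape) >= p_drop   # which units survive this pass
    return activations * keep / (1.0 - p_drop)

hidden = rng.normal(size=(4, 8))                     # a batch of hidden activations
print(dropout(hidden, p_drop=0.5, training=True))    # roughly half are zeroed
print(dropout(hidden, p_drop=0.5, training=False))   # unchanged at inference
```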

In addition to consistency, intelligence is also influenced by size.

Strangely, this relationship is not entirely linear, involving jumps and overhangs. Thus, we might be nearer to an intelligence singularity than commonly believed.

As explained in ‘Better than Us?’, the Compassion singularity might also happen soon.

Hopefully!
