Intelligence through Consistency

May 18, 2024 | Artificial Intelligence, Cognitive Insights

When multiple elements collaborate consistently, they can generate intelligent behavior as an emergent property.

When these elements function within a rational environment, they exhibit rationally intelligent behavior. Consistency is key but must include diversity.

‘Consistent’ does not imply ‘identical.’

When elements are overly similar, intelligence fails to emerge. For instance, the human cerebellum holds over half of all the brain’s neurons, yet little intelligence emerges from it.

In contrast, the cerebrum – the brain’s seat of intelligence – contains many consistent but not identical elements. Their consistency may appear random or chaotic to a certain degree. Interestingly, a degree of randomness enhances intelligence, a phenomenon nature illustrates well. Thus, human intelligence thrives in a delicate balance of order and chaos.

No individual small element possesses intelligence.

These elements are ultimately just data. This is necessarily the case in all intelligent systems because intelligence is not a (material or immaterial) substance by itself. Intelligence is not magic. It’s a dynamic interplay.

In a consistent relation to each other, data form information. If consistency is maintained at this level, we move towards intelligence.

The key requirement is for these elements to ‘work together.’

This can be achieved in various ways. A comprehensive system might enable communication among them, allowing enough freedom to ‘discover their own paths’ in complex interactions.

Alternatively, the elements can take the initiative themselves.

Neurons in a brain system exhibit a combination of both. At a higher level, they form patterns (and patterns of patterns), which are elements in the broader system of the brain. At a smaller level, we have dendrites and synapses working together consistently.

In summary, multiple levels with diverse elements and consistencies collectively form human intelligence.

On a higher level, humans collectively form societies.

Their consistent behavior enables society to act ‘intelligently.’ At least, as viewed from the outside, the societal behavior looks intelligent. This collective intelligence emerges from the synergy of individual actions and societal norms. Naturally, we prefer to attribute this intelligence to the elements — ourselves, as individuals.

And that’s OK; I also prefer it this way. Only, from the outside, it is less straightforward. That also shows, for instance, why a culture wants to make its elements – the individuals – mutually consistent. We call this ‘peer pressure’ or something similar. The culture’s goal is to attain a higher level of intelligence, thereby improving its chances to thrive and survive.

Here too, when the elements are overly similar, the societal intelligence fails.

This principle also holds for A.I.

Attempts to construct intelligence like a Meccano set have not been highly successful.

Conversely, neural network technology has propelled us much further. To the repeated surprise of developers, adding more randomness – to a certain degree – results in more intelligent behavior. In essence, A.I. mirrors the chaotic harmony of human intelligence. As in our case, there seems to be an optimum somewhere on the spectrum from identical to merely consistent.
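To make the ‘randomness helps’ idea concrete: one standard technique in neural networks is dropout, where hidden units are randomly silenced during training. The sketch below is a minimal, hypothetical illustration in numpy; the function name and parameters (forward, p_drop) are my own, not from this article or any particular library.

```python
# Minimal sketch: randomness via (inverted) dropout in a tiny two-layer network.
# During training, each hidden unit is silenced at random with probability p_drop,
# and the surviving units are rescaled so the expected activation stays the same.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, W2, p_drop=0.2, train=True):
    """One pass through a tiny network with optional dropout randomness."""
    h = np.tanh(x @ W1)                       # hidden activations
    if train and p_drop > 0:
        mask = rng.random(h.shape) > p_drop   # randomly silence some units
        h = h * mask / (1.0 - p_drop)         # rescale to preserve expected value
    return h @ W2                             # linear readout

# Toy usage: identical inputs now yield slightly different internal states.
x = rng.normal(size=(4, 8))
W1 = rng.normal(size=(8, 16)) * 0.1
W2 = rng.normal(size=(16, 1)) * 0.1
print(forward(x, W1, W2))                    # with randomness (training mode)
print(forward(x, W1, W2, train=False))       # deterministic (inference mode)
```

In practice, this kind of controlled noise tends to improve generalization rather than harm it: the elements stay consistent without being forced to be identical.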

In addition to consistency, intelligence is also influenced by size.

Strangely, this relationship is not entirely linear, involving jumps and overhangs. Thus, we might be nearer to an intelligence singularity than commonly believed.

As explained in ‘Better than Us?’, the Compassion singularity might also happen soon.

Hopefully!
