Intelligence through Consistency

May 18, 2024 | Artificial Intelligence, Cognitive Insights

When multiple elements collaborate consistently, they can generate intelligent behavior as an emergent property.

When these elements function within a rational environment, they exhibit rationally intelligent behavior. Consistency is key but must include diversity.

‘Consistent’ does not imply ‘identical.’

When elements are overly similar, intelligence fails to emerge. For instance, the human cerebellum holds over half of all the brain’s neurons, yet little intelligence emerges from it.

In contrast, the cerebrum – the brain’s seat of intelligence – contains many consistent but not identical elements. Their consistency may appear somewhat random, even chaotic. Interestingly, a certain degree of randomness enhances intelligence, as nature illustrates well. Thus, human intelligence thrives on a delicate balance of order and chaos.

No individual small element possesses intelligence.

These elements are ultimately just data. This is necessarily the case in all intelligent systems because intelligence is not a (material or immaterial) substance by itself. Intelligence is not magic. It’s a dynamic interplay.

In a consistent relation to each other, data form information. If consistency is maintained at this level, we move towards intelligence.

The key requirement is for these elements to ‘work together.’

This can be achieved in various ways. A comprehensive system might enable communication among them, allowing enough freedom to ‘discover their own paths’ in complex interactions.

Alternatively, the elements can initiate these interactions themselves.

Neurons in the brain exhibit a combination of both. At a higher level, they form patterns (and patterns of patterns), which serve as elements in the broader system of the brain. At a smaller level, dendrites and synapses work together consistently.

In summary, multiple levels with diverse elements and consistencies collectively form human intelligence.

On a higher level, humans collectively form societies.

Their consistent behavior enables society to act ‘intelligently.’ At least, as viewed from the outside, the societal behavior looks intelligent. This collective intelligence emerges from the synergy of individual actions and societal norms. Naturally, we prefer to attribute this intelligence to the elements — ourselves, as individuals.

And that’s OK; I also prefer it this way. Viewed from the outside, however, things are less straightforward. That also shows, for instance, why a culture wants to make its elements – the individuals – mutually consistent. We call this ‘peer pressure’ or something similar. The culture’s goal is to attain a higher level of intelligence, thereby improving its chances to thrive and survive.

Here too, when the elements are overly similar, societal intelligence fails to emerge.

This principle also holds for A.I.

Attempts to construct intelligence piece by piece, like a Meccano set, have not been very successful.

Conversely, neural network technology has propelled us much further. To the repeated surprise of developers, adding more randomness – up to a point – results in more intelligent behavior. In essence, A.I. mirrors the chaotic harmony of human intelligence. As in our case, there seems to be an optimum somewhere between identical and merely consistent.
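For readers who like to see this concretely: dropout is one common way of deliberately injecting randomness into a neural network during training. The sketch below is purely illustrative and assumes PyTorch; the layer sizes and the dropout rate are arbitrary choices, not a prescription from this text.

```python
# Illustrative sketch only (assumed PyTorch setup; sizes and dropout rate are arbitrary).
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self, dropout_p: float = 0.2):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(16, 64),
            nn.ReLU(),
            nn.Dropout(p=dropout_p),  # randomly zeroes a fraction of activations while training
            nn.Linear(64, 2),
        )

    def forward(self, x):
        return self.layers(x)

model = SmallNet(dropout_p=0.2)   # a moderate amount of randomness
x = torch.randn(8, 16)

model.train()                     # dropout active: two passes over the same input differ
print(model(x))
model.eval()                      # dropout off: inference becomes deterministic
print(model(x))
```

Set the dropout rate close to 1.0 and the network learns nothing; set it to 0 and it tends to overfit. The useful amount lies somewhere in between, echoing the point above.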

In addition to consistency, intelligence is also influenced by size.

Strangely, this relationship is not entirely linear; it involves jumps and overhangs. Thus, we might be nearer to an intelligence singularity than commonly believed.

As explained in ‘Better than Us?’, the Compassion singularity might also happen soon.

Hopefully!
