The Society of Mind in A.I.

June 26, 2024 · Artificial Intelligence, Cognitive Insights

The human brain is pretty modular. This is a lesson from nature that we should heed when building a new kind of intelligence.

Heeding it brings A.I. and H.I. (human intelligence) closer together.

The society of mind

Marvin Minsky (cognitive science and A.I. researcher) wrote the philosophical book of this title back in 1986. In it, he developed the view of the human mind as a society of parts communicating with each other.

Each part undertakes its tasks independently. Intelligence emerges from the non-coercive interactions and communications between the parts, while each part by itself has little or nothing we would call inherently intelligent. No magic is involved, yet intelligence emerges from the whole.
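As a purely illustrative sketch of this idea (all agent names and tasks here are invented for the example, not taken from Minsky), several trivial "agents," none intelligent on its own, can cooperate to produce an answer that no single one could reach alone:

```python
class Agent:
    """A mindless specialist: it knows only one tiny transformation."""
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

    def act(self, message):
        return self.fn(message)

# Three trivial agents, each hypothetical and individually unintelligent.
tokenizer = Agent("tokenizer", lambda text: text.lower().split())
counter = Agent("counter", lambda words: {w: words.count(w) for w in set(words)})
judge = Agent("judge", lambda counts: max(counts, key=counts.get))

def society(text):
    """Something intelligence-like emerges only from the chain, not from any part."""
    message = text
    for agent in (tokenizer, counter, judge):
        message = agent.act(message)
    return message

print(society("the parts talk and the whole thinks"))  # -> "the" (most frequent word)
```

The point is not the toy task (finding the most frequent word) but the architecture: each agent is replaceable, and the "answer" lives in none of them.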

This concept aligns with the AURELIS view that mental growth arises naturally from within, through the harmonious interaction of internal elements.

The human brain/mind

Typical for the human (animal) case is that the mind and brain are intricately intertwined, unlike the clear separation of software and hardware in computers.

Thus, studying the human brain offers direct insights into the mind, at both micro and macro levels, revealing valuable lessons that can foster deeper personal growth and well-being.

Moreover, at least in principle, we should value nature’s lessons in our A.I. endeavor.

Natural advantages of brain modularity

One significant advantage is that each part of the brain can evolve independently, as long as it stays within its task domain. Other parts can accommodate changes at the modular interfaces, within certain limits.

The brain is really good at this, going as far as adapting to an artificial sensory module such as an auditory implant.

An A.I. society of mind

Nature teaches us the value of a modular yet flexible design, bridging cognitive science and practical A.I. development and offering insights for future advances. Incorporating modularity and flexibility into A.I. design mirrors the natural process of human learning and adaptation at several time scales, enhancing a system’s ability to evolve and improve over time.

Intermodal interfaces are crucial to this, functioning not just as doors but as active modules themselves. This modularity provides stability and allows for isolated work on individual modules without compromising the whole system. By ensuring active and adaptive intermodal interfaces, A.I. systems can maintain coherence and stability while evolving, similar to the human brain.
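A minimal sketch of an intermodal interface as an active module in its own right (the module names, formats, and thresholds below are hypothetical, chosen only to illustrate the design): a perception module emits its own internal format, a planning module expects another, and the interface between them does the translating. Either side can then evolve internally without breaking the other; only the interface needs updating.

```python
def vision_module(raw):
    """Hypothetical perception module: emits its own internal format (a dict)."""
    return {"object": raw.strip(), "confidence": 0.9}

class Interface:
    """Active adapter between modules. It can be retrained or replaced
    in isolation, without touching either module it connects."""
    def translate(self, percept):
        # Convert the vision module's dict into the tuple planning expects.
        return (percept["object"], percept["confidence"])

def planning_module(observation):
    """Hypothetical planning module: consumes a (label, confidence) tuple."""
    obj, conf = observation
    return f"approach {obj}" if conf > 0.5 else "look again"

interface = Interface()
plan = planning_module(interface.translate(vision_module("  cup ")))
print(plan)  # -> "approach cup"
```

If the vision module later changes its output format, only `Interface.translate` must adapt; the planning module stays stable, which is the kind of isolated, non-disruptive evolution the paragraph above describes.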

Excellent modular choices are crucial.

Tinkering nature had a lot of time for this: a few billion years.

We don’t, but we also don’t need that, since we can proceed immensely quicker. On top of this, self-learning systems can learn to evolve their own modules, both offline and online.

The future?

In my view, the future is not one big inferencing blob.

The future is modular.

Addendum

Me: “Lisa, do you see ‘The Society of Mind’ as a valuable road toward artificial intelligence?”

Lisa: “Yes, I do see ‘The Society of Mind’ as a valuable approach toward artificial intelligence. Minsky’s idea of intelligence emerging from the interaction of numerous simple processes mirrors how the human mind works. This modular, non-coercive interaction aligns well with the AURELIS philosophy.

By incorporating these principles into AI development, we can create systems that are more adaptable, flexible, and capable of evolving over time. This approach also allows for the creation of AI that can better understand and interact with human intelligence in a more holistic and integrated manner.”
