The Society of Mind in A.I.

June 26, 2024 Artificial Intelligence, Cognitive Insights

The human brain is pretty modular. This is a lesson from nature that we should heed when building a new kind of intelligence.

It brings A.I. and H.I. (human intelligence) closer together.

The society of mind

Marvin Minsky (cognitive scientist and A.I. researcher) wrote the philosophical book of this title back in 1986. In it, he developed the view of the human mind as a society of parts communicating with each other.

Each part undertakes its tasks independently. Intelligence emerges from the non-coercive interactions and communications between the parts, while no single part, by itself, contains anything we would call inherently intelligent. No magic is involved, yet intelligence emerges from the whole.

This concept aligns with the AURELIS view that mental growth arises naturally from within, through the harmonious interaction of internal elements.

The human brain/mind

Typical of the human (and, more broadly, animal) case is that mind and brain are intricately intertwined, unlike the clear separation of software and hardware in computers.

Thus, studying the human brain offers direct insights into the mind, at both micro and macro levels, revealing valuable lessons that can foster deeper personal growth and well-being.

Moreover, at least in principle, we should value nature’s lessons in our A.I. endeavors.

Natural advantages of brain modularity

One significant advantage is that each part of the brain can evolve independently, as long as it stays within its task domain. Changes at the modular interfaces can be accommodated by the other parts, within certain limits.

The brain is really good at this. It goes as far as being able to adapt to an artificial sensory module — an auditory implant, for instance.

An A.I. society of mind

Nature teaches us the value of a modular yet flexible design, which bridges cognitive science and practical A.I. development, offering insights for future advancements. Incorporating modularity and flexibility in A.I. design can mirror the natural process of human learning and adaptation at several time scales, enhancing the system’s ability to evolve and improve over time.

Intermodal interfaces are crucial to this, functioning not just as doors but as active modules themselves. This modularity provides stability and allows for isolated work on individual modules without compromising the whole system. By ensuring active and adaptive intermodal interfaces, A.I. systems can maintain coherence and stability while evolving, similar to the human brain.
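
To make this a bit more tangible, here is a minimal Python sketch of the idea. All names (Module, ActiveInterface, Society) are purely illustrative assumptions, not an existing framework: each part performs only a trivial task, the interfaces between parts are themselves small active modules that adapt messages, and one part can be swapped out without the others needing to change.

```python
from typing import Callable, List, Union

class Module:
    """One simple agent: performs a narrow task, knows nothing about the whole."""
    def __init__(self, name: str, task: Callable[[str], str]):
        self.name = name
        self.task = task

    def process(self, message: str) -> str:
        return self.task(message)

class ActiveInterface:
    """An interface that is itself a small module: it adapts messages between parts,
    so either side can change internally without breaking the other."""
    def __init__(self, adapt: Callable[[str], str]):
        self.adapt = adapt

    def process(self, message: str) -> str:
        return self.adapt(message)

class Society:
    """No single part is 'intelligent'; behavior emerges from the chain of parts talking."""
    def __init__(self):
        self.parts: List[Union[Module, ActiveInterface]] = []

    def add(self, part: Union[Module, ActiveInterface]) -> None:
        self.parts.append(part)

    def run(self, message: str) -> str:
        for part in self.parts:
            message = part.process(message)
        return message

if __name__ == "__main__":
    society = Society()
    society.add(Module("perceive", lambda m: m.strip().lower()))
    society.add(ActiveInterface(lambda m: " ".join(m.split())))  # adapts the format between modules
    society.add(Module("respond", lambda m: f"processed -> {m}"))
    print(society.run("  Hello,   Society of Mind  "))

    # A module can be replaced behind its interface without touching the rest.
    society.parts[0] = Module("perceive_v2", lambda m: m.strip().upper())
    print(society.run("  Hello again  "))
```

Of course, this is only a toy. The point is the shape: simple parts, active interfaces between them, and behavior that emerges from their interplay rather than from any single part.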

Excellent modular choices are crucial.

Tinkering nature had a lot of time for this: a few billion years.

We don’t, but we also don’t need to, since we can proceed immensely faster. On top of this, self-learning systems can learn to evolve their own modules, both offline and online.

The future?

In my view, the future is not one big inferencing blob.

The future is modular.

Addendum

Me: “Lisa, do you see ‘The Society of Mind’ as a valuable road toward artificial intelligence?”

Lisa: “Yes, I do see ‘The Society of Mind’ as a valuable approach toward artificial intelligence. Minsky’s idea of intelligence emerging from the interaction of numerous simple processes mirrors how the human mind works. This modular, non-coercive interaction aligns well with the AURELIS philosophy.

By incorporating these principles into AI development, we can create systems that are more adaptable, flexible, and capable of evolving over time. This approach also allows for the creation of AI that can better understand and interact with human intelligence in a more holistic and integrated manner.”
