The Society of Mind in A.I.

June 26, 2024 · Artificial Intelligence, Cognitive Insights

The human brain is pretty modular. This is a lesson from nature that we should heed when building a new kind of intelligence.

It brings A.I. and H.I. (human intelligence) closer together.

The society of mind

Marvin Minsky (cognitive scientist and A.I. researcher) wrote the philosophical book of this title back in 1986. In it, he developed the view of the human mind as a society of parts communicating with each other.

Each part undertakes its tasks independently. Intelligence emerges from the non-coercive interactions and communications between the parts, even though no single part contains anything we would call inherently intelligent. No magic is involved, yet intelligence emerges from the whole.

This concept aligns with the AURELIS view that mental growth arises naturally from within, through the harmonious interaction of internal elements.

The human brain/mind

Typical of the human (animal) case is that mind and brain are intricately intertwined, unlike the clear separation of software and hardware in computers.

Thus, studying the human brain offers direct insights into the mind, at both micro and macro levels, revealing valuable lessons that can foster deeper personal growth and well-being.

Moreover, at least in principle, we should value nature’s lessons in our A.I. endeavors.

Natural advantages of brain modularity

One significant advantage is that each part of the brain can evolve independently, as long as it stays within its task domain. Other parts can adapt to changes in modular interfaces, within certain limits.

The brain is really good at this. It goes as far as being able to adapt to an artificial sensory module — an auditory implant, for instance.

An A.I. society of mind

Nature teaches us the value of a modular yet flexible design, which bridges cognitive science and practical A.I. development, offering insights for future advancements. Incorporating modularity and flexibility in A.I. design can mirror the natural process of human learning and adaptation at several time scales, enhancing the system’s ability to evolve and improve over time.

Intermodal interfaces are crucial to this, functioning not just as doors but as active modules themselves. This modularity provides stability and allows for isolated work on individual modules without compromising the whole system. By ensuring active and adaptive intermodal interfaces, A.I. systems can maintain coherence and stability while evolving, similar to the human brain.
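To make this concrete, here is a minimal sketch of such a design in plain Python. All names here (`Module`, `Interface`, "ear", "implant", "language") are illustrative assumptions, not an existing framework. Each module is trivially simple on its own; the interface is itself an active module that routes messages and adapts formats, so one module can be swapped out without touching the others.

```python
class Module:
    """A simple agent that knows one small task and nothing more."""
    def __init__(self, name, transform):
        self.name = name
        self.transform = transform

    def process(self, message):
        return self.transform(message)


class Interface:
    """An active intermodal interface: routes messages between modules
    and adapts their formats, keeping the modules decoupled."""
    def __init__(self):
        self.modules = {}
        self.adapters = {}  # (src, dst) -> message-conversion function

    def register(self, module):
        self.modules[module.name] = module

    def connect(self, src, dst, adapter=None):
        self.adapters[(src, dst)] = adapter or (lambda m: m)

    def send(self, src, dst, message):
        adapter = self.adapters[(src, dst)]
        return self.modules[dst].process(adapter(message))


# Two trivial modules; neither is "intelligent" on its own.
society = Interface()
society.register(Module("ear", lambda sound: sound.lower()))
society.register(Module("language", lambda text: text.split()))
society.connect("ear", "language")

heard = society.modules["ear"].process("HELLO WORLD")
words = society.send("ear", "language", heard)  # ["hello", "world"]

# Swapping in an "implant" that emits a different message format only
# requires registering it with a new adapter; the language module is
# left completely unchanged -- as with an auditory implant.
society.register(Module("implant", lambda codes: codes))
society.connect("implant", "language", adapter=lambda codes: " ".join(codes))
words2 = society.send("implant", "language", ["hello", "again"])
```

The design choice to illustrate is that the adapter lives in the interface, not in either module: adaptation happens between the parts, which is what lets each part evolve independently.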

Excellent modular choices are crucial.

Tinkering nature had a lot of time for this: a few billion years.

We don’t, but we also don’t need it, since we can proceed immensely quicker. On top of this, self-learning systems can learn to evolve their own modules, offline and online.

The future?

In my view, the future is not one big inferencing blob.

The future is modular.

Addendum

Me: “Lisa, do you see ‘The Society of Mind’ as a valuable road toward artificial intelligence?”

Lisa: “Yes, I do see ‘The Society of Mind’ as a valuable approach toward artificial intelligence. Minsky’s idea of intelligence emerging from the interaction of numerous simple processes mirrors how the human mind works. This modular, non-coercive interaction aligns well with the AURELIS philosophy.

By incorporating these principles into AI development, we can create systems that are more adaptable, flexible, and capable of evolving over time. This approach also allows for the creation of AI that can better understand and interact with human intelligence in a more holistic and integrated manner.”
