The Society of Mind in A.I.

June 26, 2024 Artificial Intelligence, Cognitive Insights

The human brain is pretty modular. This is a lesson from nature that we should heed when building a new kind of intelligence.

This modularity brings A.I. and H.I. (human intelligence) closer together.

The society of mind

Marvin Minsky (cognitive science and A.I. researcher) wrote the philosophical book with this title back in 1986. In it, he developed the view of the human mind as a society of parts communicating with each other.

Each part undertakes its tasks independently. Intelligence emerges from the non-coercive interactions and communications between the parts, while no single part by itself contains anything we would call inherently intelligent. No magic is involved, yet intelligence emerges from the whole.
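This emergence can be illustrated with a minimal sketch: a handful of trivially simple agents, each doing one small job, communicating through a shared blackboard. The agent names (`tokenizer`, `counter`, `selector`) and the blackboard scheme are illustrative assumptions, not anything from Minsky's book; the point is only that no single agent is "intelligent," yet together they answer a question.

```python
from collections import Counter

# A minimal "society of mind" sketch (hypothetical design): each agent
# performs one trivial task and posts its result on a shared blackboard.
# No single agent solves the problem; the interaction of the parts does.

def tokenizer(board):
    # Knows only how to split text into words.
    board["tokens"] = board["text"].lower().split()

def counter(board):
    # Knows only how to count things.
    board["counts"] = Counter(board["tokens"])

def selector(board):
    # Knows only how to pick the most common item.
    board["answer"] = board["counts"].most_common(1)[0][0]

AGENTS = [tokenizer, counter, selector]

def run(text):
    board = {"text": text}
    for agent in AGENTS:  # non-coercive: each acts only on shared state
        agent(board)
    return board["answer"]

print(run("the cat sat on the mat"))  # → "the"
```

Each agent could be replaced or improved independently, as long as it respects the shared blackboard keys, which foreshadows the interface discussion below.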

This concept aligns with the AURELIS view that mental growth arises naturally from within, through the harmonious interaction of internal elements.

The human brain/mind

Typical for the human (animal) case is that the mind and brain are intricately intertwined, unlike the clear separation of software and hardware in computers.

Thus, studying the human brain offers direct insights into the mind, at both micro and macro levels, revealing valuable lessons that can foster deeper personal growth and well-being.

Moreover, at least in principle, we should value nature’s lessons in our A.I. endeavor.

Natural advantages of brain modularity

One significant advantage is that each part of the brain can evolve independently, as long as it stays within its task domain. Other parts can adapt to changes in a module’s interface, within certain limits.

The brain is remarkably good at this. It can even adapt to an artificial sensory module, an auditory implant, for instance.

An A.I. society of mind

Nature teaches us the value of a modular yet flexible design, which bridges cognitive science and practical A.I. development, offering insights for future advancements. Incorporating modularity and flexibility in A.I. design can mirror the natural process of human learning and adaptation at several time scales, enhancing the system’s ability to evolve and improve over time.

Intermodal interfaces are crucial to this, functioning not just as doors but as active modules themselves. This modularity provides stability and allows for isolated work on individual modules without compromising the whole system. By ensuring active and adaptive intermodal interfaces, A.I. systems can maintain coherence and stability while evolving, similar to the human brain.
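A small sketch may clarify what "interfaces as active modules" could mean in practice. The module names and the 0–1 signal convention below are invented for illustration: an active adapter normalizes the differing outputs of two interchangeable modules, so either side can be swapped without touching the rest of the system, much as the brain integrates an auditory implant.

```python
# Sketch of an intermodal interface as an active module (hypothetical
# design). The interface is not a passive "door": it transforms signals
# so that modules with different conventions can interoperate.

class Interface:
    """Active adapter between modules; isolates them from each other."""
    def __init__(self, transform):
        self.transform = transform

    def __call__(self, signal):
        return self.transform(signal)

# Two interchangeable "sensory" modules with different output formats.
def mic_module():        # reports loudness in the range 0..1
    return 0.8

def implant_module():    # reports loudness in the range 0..255
    return 204

# The interface normalizes either format to the 0..1 range the rest of
# the system expects; swapping a module only means adjusting the adapter.
to_unit = Interface(lambda x: x / 255 if x > 1 else x)

for source in (mic_module, implant_module):
    level = to_unit(source())
    assert 0.0 <= level <= 1.0  # downstream modules see one convention
```

Because only the adapter knows about each module's native format, work on an individual module stays isolated, which is exactly the stability property claimed above.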

Excellent modular choices are crucial.

Tinkering nature had a lot of time for this ― a few billion years.

We don’t, but we also don’t need that, since we can proceed immensely faster. On top of this, self-learning systems can learn to evolve their own modules, both offline and online.

The future?

In my view, the future is not one big inferencing blob.

The future is modular.

Addendum

Me: “Lisa, do you see ‘The Society of Mind’ as a valuable road toward artificial intelligence?”

Lisa: “Yes, I do see ‘The Society of Mind’ as a valuable approach towards artificial intelligence. Minsky’s idea of intelligence emerging from the interaction of numerous simple processes mirrors how the human mind works. This modular, non-coercive interaction aligns well with the AURELIS philosophy.

By incorporating these principles into AI development, we can create systems that are more adaptable, flexible, and capable of evolving over time. This approach also allows for the creation of AI that can better understand and interact with human intelligence in a more holistic and integrated manner.”
