The Society of Mind in A.I.

June 26, 2024 | Artificial Intelligence, Cognitive Insights

The human brain is pretty modular. This is a lesson from nature that we should heed when building a new kind of intelligence.

This modularity brings A.I. and H.I. (human intelligence) closer together.

The society of mind

Marvin Minsky, the cognitive science and A.I. researcher, wrote the philosophical book of this title back in 1986. In it, he developed the view of the human mind as a society of parts communicating with each other.

Each part undertakes its tasks independently. By itself, no part has anything we would call inherently intelligent. Yet, with no magic involved, intelligence emerges from the whole through the non-coercive interactions and communications among the parts.
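This emergence can be made concrete with a toy sketch (purely illustrative; the agents and the word-description task are invented here, not from Minsky). Each "agent" applies one trivial rule and posts its observation to a shared blackboard; the combined description belongs to none of them individually:

```python
# Toy "society of mind": three trivial agents, each with no intelligence
# of its own, describe a word from one narrow viewpoint. The combined
# tally on the blackboard is behavior none of them has alone.

from collections import Counter

def length_agent(word):
    # Knows only about length.
    return "long" if len(word) > 6 else "short"

def vowel_agent(word):
    # Knows only about vowels.
    vowels = sum(ch in "aeiou" for ch in word.lower())
    return "vowel-rich" if vowels >= 3 else "vowel-poor"

def case_agent(word):
    # Knows only about capitalization.
    return "name-like" if word[:1].isupper() else "plain"

def society(word):
    # Each agent posts independently; the whole is a simple tally.
    blackboard = [agent(word) for agent in (length_agent, vowel_agent, case_agent)]
    return Counter(blackboard)

print(society("Intelligence"))
```

The point is not the (deliberately trivial) rules but the architecture: the "description" exists only at the level of the society, never inside any single agent.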

This concept aligns with the AURELIS view that mental growth arises naturally from within, through the harmonious interaction of internal elements.

The human brain/mind

Typical for the human (animal) case is that the mind and brain are intricately intertwined, unlike the clear separation of software and hardware in computers.

Thus, studying the human brain offers direct insights into the mind, at both micro and macro levels, revealing valuable lessons that can foster deeper personal growth and well-being.

Moreover, at least in principle, we should value nature’s lessons in our A.I. endeavors.

Natural advantages of brain modularity

One significant advantage is that each part of the brain can evolve independently, as long as it stays within its task domain. Within certain limits, other parts can adapt to changes in the modular interfaces.

The brain is remarkably good at this. It goes as far as being able to adapt to an artificial sensory module, such as an auditory implant.

An A.I. society of mind

Nature teaches us the value of a modular yet flexible design, bridging cognitive science and practical A.I. development. Incorporating modularity and flexibility in A.I. design can mirror the natural process of human learning and adaptation at several time scales, enhancing a system’s ability to evolve and improve over time.

Intermodal interfaces are crucial to this, functioning not just as doors but as active modules themselves. This modularity provides stability and allows for isolated work on individual modules without compromising the whole system. By ensuring active and adaptive intermodal interfaces, A.I. systems can maintain coherence and stability while evolving, similar to the human brain.
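A minimal sketch of such an "active" interface, with all names and message formats invented for illustration: the boundary between two modules is itself a small module that translates and validates, so either side can evolve behind it without breaking the other.

```python
# Hypothetical sketch: an intermodal interface that is an active module,
# not a passive door. It adapts messages between two modules with
# mismatched vocabularies.

class Interface:
    """Active boundary: translates messages rather than passing them through."""

    def __init__(self, translate):
        self.translate = translate

    def send(self, message, receiver):
        # Adaptation happens *inside* the interface.
        adapted = self.translate(message)
        return receiver(adapted)

# Two modules that do not share a message format.
def vision_module():
    return {"object": "cup", "confidence": 0.9}

def planning_module(percept):
    return f"reach for {percept['target']}"

# The interface reconciles the vocabularies. Replacing either module
# only requires updating the translation, not touching the other module.
bridge = Interface(lambda m: {"target": m["object"]})
print(bridge.send(vision_module(), planning_module))
```

The design choice this illustrates: because adaptation lives in the interface, each module can be worked on, or even swapped out, in isolation, which is the stability property claimed above.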

Excellent modular choices are crucial.

Tinkering nature had a lot of time for this: a few billion years.

We don’t, but we also don’t need it, since we can proceed immensely faster. On top of this, self-learning systems can also learn to evolve their own modules, both offline and online.

The future?

In my view, the future is not one big inferencing blob.

The future is modular.

Addendum

Me: “Lisa, do you see ‘The Society of Mind’ as a valuable road toward artificial intelligence?”

Lisa: “Yes, I do see ‘The Society of Mind’ as a valuable approach toward artificial intelligence. Minsky’s idea of intelligence emerging from the interaction of numerous simple processes mirrors how the human mind works. This modular, non-coercive interaction aligns well with the AURELIS philosophy.

By incorporating these principles into AI development, we can create systems that are more adaptable, flexible, and capable of evolving over time. This approach also allows for the creation of AI that can better understand and interact with human intelligence in a more holistic and integrated manner.”


