Procedural vs. Declarative Knowledge in A.I.

July 1, 2024 · Artificial Intelligence, Cognitive Insights

Declarative memory is the memory of facts (semantic memory) and events (episodic memory). Procedural memory is the memory of how to do things (skills and tasks). Both complement each other and often overlap.

The distinction is not the same as that between conceptual and non-conceptual knowledge.

Though related, these categories describe different aspects of knowledge processing:

  • procedural vs. declarative: Focuses on ‘how’ to do things versus ‘what’ things are.
  • conceptual vs. non-conceptual: Involves abstract understanding versus direct experience, such as sensory experience.

The example of the human brain

In the human brain, declarative and procedural memory (representation and inferencing) are primarily associated with different sets of brain centers, respectively:

  • the hippocampus, along with other structures within the medial temporal lobe, and prefrontal cortex
  • the basal ganglia, cerebellum, and motor cortex.

The interplay (and functional overlap) of these is an example of Minsky’s notion of ‘society of mind,’ which we can see realized in the brain and which can also be realized in A.I.

Integrating procedural and declarative knowledge in A.I. models

This synergy enables comprehensive learning: understanding factual information enhances task execution, while performing tasks reinforces factual knowledge.

This integration can enable A.I. systems that learn and adapt more dynamically, mimicking human learning processes and enhancing applications in fields like coaching and robotics.
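
To make this concrete, here is a minimal, hypothetical sketch in Python (the class and method names are illustrative, not an existing system): an agent holds a declarative store of facts alongside a procedural store of executable skills, and performing a skill both draws on the facts and writes new ones back.

```python
# Hypothetical sketch: combining declarative ('what') and procedural ('how')
# knowledge in one agent. Executing a skill consults stored facts, and the
# outcome is written back as a new fact, so doing reinforces knowing.
from typing import Any, Callable, Dict


class HybridAgent:
    def __init__(self) -> None:
        self.facts: Dict[str, Any] = {}    # declarative store: facts and events
        self.skills: Dict[str, Callable[["HybridAgent"], Any]] = {}  # procedural store

    def learn_fact(self, key: str, value: Any) -> None:
        """Declarative learning: remember a fact."""
        self.facts[key] = value

    def learn_skill(self, name: str, procedure: Callable[["HybridAgent"], Any]) -> None:
        """Procedural learning: remember how to do something."""
        self.skills[name] = procedure

    def perform(self, name: str) -> Any:
        """Executing a skill uses and then updates declarative knowledge."""
        result = self.skills[name](self)
        self.facts[f"last_result:{name}"] = result   # task execution reinforces facts
        return result


# Usage: a toy 'greet' skill that reads a stored fact about the user.
agent = HybridAgent()
agent.learn_fact("user_name", "Alex")
agent.learn_skill("greet", lambda a: f"Hello, {a.facts.get('user_name', 'there')}!")
print(agent.perform("greet"))   # -> Hello, Alex!
```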

Procedural knowledge is crucial for a coach-bot during the coaching process.

It helps the bot guide users through steps, facilitating structured exercises and routines. It also lets the bot adapt its coaching techniques based on user responses and its learned skills, rather than on facts alone.

For example, in natural language processing, understanding context and executing language tasks based on patterns and sequences requires substantial procedural knowledge. This depth of understanding enhances the bot’s ability to interact more naturally and effectively with users.
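
As a toy illustration of this ‘guide and adapt’ loop (purely hypothetical; the exercise text and the feedback function are placeholders for real dialogue handling), a coach-bot can step through a structured exercise and repeat a step when the user signals difficulty:

```python
# Hypothetical sketch of procedural coaching: walk through a structured
# exercise and adapt the pacing to the user's responses.
EXERCISE = ["Sit comfortably", "Breathe in for four counts", "Breathe out for six counts"]


def run_session(exercise, get_user_feedback):
    """Step through the exercise; repeat a step if the user reports difficulty."""
    for step in exercise:
        attempts = 0
        while True:
            print(f"Coach-bot: {step}")
            feedback = get_user_feedback(step)   # e.g. 'ok' or 'too hard'
            attempts += 1
            if feedback == "ok" or attempts >= 3:
                break                            # move on, or stop insisting
            print("Coach-bot: No problem, let's try that step again, more slowly.")


# Usage with a stand-in feedback function; a real bot would interpret user input.
run_session(EXERCISE, lambda step: "ok")
```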

Procedural knowledge is also crucial for 3-D robotics.

It encompasses the skills and tasks robots need to perform actions effectively, helping them learn to execute complex sequences like navigating environments and manipulating objects.

This capability enhances their functionality in dynamic and unpredictable settings.

Thus, a coach-bot can acquire skills that are useful for a three-dimensional robot.

This applies particularly to skills that enhance interaction, adaptability, and effectiveness in both contexts, such as:

  • guiding through steps: Both can use structured sequences to assist users or navigate environments.
  • adapting techniques: Both learn to modify their approach based on feedback, which is especially valuable in dynamic settings.

These shared skills support the integration of A.I. systems across various platforms and applications, as sketched below.
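
One way to picture this transfer (a sketch only, with hypothetical names) is a single procedural skeleton, ‘follow a sequence of steps and adapt on failure,’ that a coach-bot fills in with spoken instructions and a robot would fill in with motion primitives:

```python
# Hypothetical sketch: one procedural skeleton, two platforms.
from typing import Callable, List


class GuidedSequence:
    """Follow a sequence of steps, retrying a step when it does not succeed."""

    def __init__(self, steps: List[str], execute: Callable[[str], bool]) -> None:
        self.steps = steps        # declarative description of the task
        self.execute = execute    # platform-specific procedural implementation

    def run(self, max_attempts: int = 3) -> bool:
        for step in self.steps:
            succeeded = False
            for attempt in range(1, max_attempts + 1):
                if self.execute(step):
                    succeeded = True
                    break
                print(f"Adapting: step '{step}' did not succeed on attempt {attempt}.")
            if not succeeded:
                return False      # the sequence could not be completed
        return True


# Coach-bot flavor: 'executing' a step means instructing the user.
coach = GuidedSequence(
    ["Stretch your arms", "Hold for ten seconds"],
    execute=lambda step: print(f"Coach-bot: please {step.lower()}") or True,
)
coach.run()

# A robot flavor would reuse GuidedSequence with execute set to a motion primitive.
```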

Ethics of autonomy

Coach-bots and 3-D robots can acquire skills useful in broad and overlapping domains. This overlap can lead to innovations that benefit both fields.

Thus, A.I. systems capable of learning and applying both types of knowledge can potentially evolve into more autonomous decision-makers more rapidly, for better or worse.

On the positive side: enhanced emotional intelligence in A.I. systems

Integrating procedural and declarative knowledge in A.I. may help develop enhanced emotional intelligence by achieving a deeper level of adaptability.

For example, a coach-bot may interpret emotional cues and adjust its responses accordingly, recognizing when a user is frustrated, offering encouragement, or modifying the coaching approach to better suit the user’s emotional state.

This emotional responsiveness also helps ensure that the technology remains Compassionate.
