The Path from Implicit to Explicit Knowledge

December 19, 2023 Artificial Intelligence, Cognitive Insights

Implicit: It’s there, but we don’t readily know how or why it works. Explicit: We can readily follow each step.

This is more or less the same move as from intractable to tractable or from competence to comprehension. But how?

Emergence

If something comes out, it must have been in ― one way or another. Children know this from a very young age. It might be different at the fringes of reality, but that is not our concern here. (x)

Usually, emergence involves a large dose of what seems to be chaos, making it challenging to see the underlying patterns. However, with the right tools, these patterns become visible and pragmatically available for further processing.

Finding the right tools is the path from implicit to explicit knowledge.

The right tools

If the implicit and explicit levels are far removed from each other, the right tools cannot be straightforwardly explicit. Something in them needs to be able to manage complexity.

In our brain/mind, this something consists of many billions of neurons that form many more mental-neuronal patterns through parallel distributed processing.

That is not the only way. It’s just an example — the one nature developed (or stumbled upon?) over a very long period. BTW, not once, but at least twice; ask big octopi.

Transformer technology

Another example has recently been developed (or stumbled upon?) by researchers in the field of Artificial Neural Networks.

In this one, the tool is made up of many billions of mathematical parameters and relations. As with nature’s solution, researchers don’t (yet) know exactly how or why this works, but it surely does. Users of transformer technology (ChatGPT, etc.) encounter explicit output as a result.
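
To make “parameters and relations” slightly more concrete, here is a minimal sketch in plain NumPy of a single attention-style layer with toy dimensions. The numbers and matrices are illustrative assumptions, not any actual model. The tool is nothing but such parameter matrices and the relations (matrix products) between them; only the final output is explicit to the user.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 8      # toy embedding size (real models use thousands)
seq_len = 4      # toy sequence length

# Learned parameters: in real systems, such matrices hold billions of numbers.
W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

x = rng.normal(size=(seq_len, d_model))   # an implicit, distributed representation

# The 'relations' are plain matrix products between input and parameters.
Q, K, V = x @ W_q, x @ W_k, x @ W_v

# Scaled dot-product attention: every position weighs every other position.
scores = Q @ K.T / np.sqrt(d_model)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
output = weights @ V          # the explicit result the user eventually encounters

print(W_q.size + W_k.size + W_v.size)   # 192 parameters in this toy; real models: billions
print(output.shape)                     # (4, 8)
```

Real transformers stack many such layers, which is how the parameter count grows from a few hundred here into the billions.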

More?

There are certainly more examples to be developed (or stumbled upon).

We’re at an advantage now. Within the two examples above, we can look for general characteristics of the path from implicit to explicit. Thus, more cases will undoubtedly be found.

For instance, the sheer number of subconceptual processing units is a recurring characteristic of brute-force solutions. With more knowledgeable developments, this number can probably be drastically diminished – while remaining necessary – for the good cause.

Focus

In both examples, we see something we can denote as focus or attention. Logically, this is needed to avoid being inundated by complexity. Without focus, the chaotic part is too strong to handle.

In humans, attention is created by purposefully heightening what’s inside and diminishing what’s outside the center of attention. Practically, we try to avoid distraction when focus is needed. Thus, what’s in focus becomes temporarily explicit. Proceeding from one focus to the next, we live in an explicit mindscape while having little idea of how small our focus is at any moment unless we explicitly reflect upon that.
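
In transformer terms, this heightening-and-diminishing is quite literal: the softmax over attention scores concentrates most of the weight on a few items and pushes the rest toward (but not to) zero. A minimal sketch with made-up scores, purely for illustration:

```python
import numpy as np

# Made-up attention scores for four items competing for focus.
scores = np.array([2.0, 0.5, 0.3, -1.0])

weights = np.exp(scores) / np.exp(scores).sum()   # softmax
print(weights.round(3))    # roughly [0.687 0.153 0.126 0.034]

# The top item is heightened; the others are diminished but not erased.
print(round(weights.max() / weights.min()))       # ~20x more weight on the focus
```

Metaphorically at least, the same holds for our own focus: whatever lies outside the current center doesn’t disappear; it simply receives far less weight.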

The intelligent lesson

In all this, we can see that ‘intelligence’ is broader than human. That may be a profoundly needed lesson in humility. Meanwhile, we’re not just creating a new intelligence but also finding out more about ours — its strengths, limitations, and non-exclusiveness.

Will we take the lesson to heart?

__

(x) We don’t need quantum to understand intelligence, let alone consciousness. See ——soon.
