“A.I. is in the Size.”

January 19, 2024 Artificial Intelligence

This famous quote by R.C. Schank (1991) gains new relevance with GPT technology ― in a surprisingly different way.

How Schank interpreted his quote

He meant that one cannot conclude ‘intelligence’ from a simple demo ― as was common in that era of purely conceptual GOFAI (Good Old-Fashioned A.I.). Back then, many Ph.D. students showed ‘intelligence’ within a system by, for instance, just letting it translate a few sentences.

Schank taught that one has to scale the system’s performance to see whether it still acts intelligently. Otherwise, he said, it’s just a simulation of intelligence.

Anno 2024, we see scaled systems acting intelligently ― or do we?

What happened with GPT: within a relatively simple paradigm, the sheer increase in parameters and training data has produced an emergence of competencies that was unexpected, even to the developers. Such a system is called a foundation model because it can serve as a foundation for many concrete applications ― being generally applicable.

Here, indeed, are elements of whatever one may call ‘intelligence.’

Something special happens due to size.

This is also the case for natural (our human) intelligence.

Here, too, it’s in the size, as becomes apparent by delving into evolutionary matters concerning the brain.

Are we and GPT, therefore, just two examples of the same principle?

The ‘Chinese room’ thought experiment [J. Searle, 1980]

Here also, a simple mechanism is involved, and a complex result is attained.

Imagine a conversation in Chinese without anyone or anything comprehending Chinese. The size here lies in a gigantic lookup table of Chinese↔English phrases, used by a person who doesn’t know a single Chinese character. Searle argued that, as in this case, something can seem intelligent by acting intelligently, yet without being intelligent. Because, he said, a lookup table is not intelligent. It’s just a simulation of intelligence.
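To make the mechanism concrete, here is a minimal sketch (purely illustrative; the table, phrases, and names are invented placeholders, not anything from Searle’s paper). All the apparent competence sits in the size of the table; the handling stays trivial.

```python
# A toy 'Chinese room': the operator follows a trivial rule over a big table
# and comprehends nothing. The phrases below are invented placeholders.
LOOKUP_TABLE = {
    "你好吗？": "How are you?",
    "你叫什么名字？": "What is your name?",
    "今天天气很好。": "The weather is nice today.",
}

def room_operator(chinese_phrase: str) -> str:
    """Return the listed counterpart without understanding either language."""
    return LOOKUP_TABLE.get(chinese_phrase, "(no matching entry)")

print(room_operator("你好吗？"))  # -> How are you?
```

Scaled up to every possible phrase, the output would look fluent, yet nothing in the room comprehends Chinese.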

Can the argument be reversed now? Something is intelligent when it acts intelligently — even when the internals are ‘just’ a gigantic number of elements (as in an immense lookup table) and a pretty simple but spot-on way of handling that amount.

“Is something intelligent?” is only half a question.

Thus, there is no single answer.

It is better to differentiate between implicit vs. explicit intelligence — or competence vs. comprehension, system-1 vs. system-2, or some other distinction in this direction.

Let’s stick to the first, going a bit deeper before answering any question.

Implicit — explicit

‘Explicit’ can also reside in how the implicit presents itself at the interfaces between many modules. These modules can be concepts, for instance, or what we call thoughts and feelings. The internals of any module may be implicit. It’s enough that a module acts explicitly at its interface, because that is all other modules see.

Of course, the module is also expected to act consistently the same way – more or less – each time it is called upon.
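As a rough, purely illustrative sketch (the class and its names are my own invention, not a model of actual cognition): the internals never sit still, yet the interface presents a roughly stable, explicit face to whatever calls upon it.

```python
import random

class ThoughtModule:
    """A toy 'module': implicit, ever-shifting internals behind an interface
    that other modules experience as (more or less) consistently explicit."""

    def __init__(self, concept: str):
        self.concept = concept
        self._internal_state = random.random()  # implicit; never the same twice

    def at_interface(self) -> str:
        self._internal_state += random.uniform(-0.1, 0.1)  # internals keep moving
        return self.concept  # yet the explicit answer stays roughly the same

warmth = ThoughtModule("warmth")
print(warmth.at_interface() == warmth.at_interface())  # True: stable at the interface
```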

Welcome to how we think.

With our neurons and synapses continually in motion – being alive – we “never think the same thought twice,” even though we are generally little aware of this. Our modules (thoughts) are explicit only more or less and only at their interface.

Also, we don’t need to be perfect explicitly. Good enough is, well, enough to survive and thrive in a natural environment. When we want to be perfect, we have to invent mathematics ― which we did.

For implicit intelligence, size + a simple mechanism is enough.

This is what we see in present-day GPT. Thus, the answer to “Is it intelligent?” is: implicitly, yes. It is competent to a surprising degree.

But explicitly? At this time, to a much smaller degree. Although it can handle explicit knowledge, it does so in a very implicit way. It lacks comprehension. It can gain that in many ways and apparently, that’s what is going on now.

Which, of course, is dangerous! Without Compassionate input, humanity may become just one more temporary hominid all too quickly.

Two lessons from “A.I. is in the size.”

  • This quote, in its new dress, is very applicable to implicit intelligence. In this sense, we can be sure that GPT is only one example that has been stumbled upon. We may see implicit intelligence readily emerge with other kinds of systems as well. It’s in the size.
  • The other lesson is that the same principle may also apply to explicit intelligence. To attain this, we may need another kind of element ― probably a more formalized kind. But apart from that, here too, it’s in the size.

Size matters.

Of course, what also matters is the kind of elements involved and the way of handling them.

Note that this is ‘only’ about intelligence. How this will be used and how it will use itself appears to be another matter for now.

Will super-A.I. be ‘smarter’ than us also in its wisdom?
