“A.I. is in the Size.”

January 19, 2024 Artificial Intelligence

This famous quote by R.C. Schank (1991) gets new relevance with GPT technology ― in a surprisingly different way.

How Schank interpreted his quote

He meant that one cannot conclude ‘intelligence’ from a simple demo ― as was usual in that era of purely conceptual GOFAI (Good Old-Fashioned A.I.). Back then, many Ph.D. students demonstrated ‘intelligence’ within a system by, for instance, merely letting it translate a few sentences.

Schank taught that one has to scale up the system to see whether it still acts intelligently. Otherwise, he said, it’s just a simulation of intelligence.

Anno 2024, we see scaled systems acting intelligently ― or do we?

What happened with GPT: within a relatively simple paradigm, the sheer increase in parameters and training data led to an unexpected (even to the developers) emergence of competencies. Such a system is called a foundation model because it can serve as a foundation for many concrete applications ― being generally applicable.

Here, indeed, are elements of whatever one may call ‘intelligence.’

Something special happens due to size.

This is also the case for natural (our human) intelligence.

Here, too, it’s in the size, as becomes apparent by delving into evolutionary matters concerning the brain.

Are we and GPT, therefore, just two examples of the same principle?

The ‘Chinese room’ thought experiment [J. Searle, 1980]

Here also, a simple mechanism is involved, and a complex result is attained.

Imagine a conversation in Chinese in which nobody and nothing comprehends Chinese. The size here lies in a gigantic lookup table of Chinese<->English phrases used by a person who doesn’t know a single Chinese character. Searle argued that something can thus seem intelligent by acting intelligently without being intelligent. Because, he said, a lookup table is not intelligent. It’s just a simulation of intelligence.
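The mechanism Searle describes can be sketched in a few lines. This is only an illustration; the tiny phrase table and inputs below are invented stand-ins for the gigantic table the thought experiment assumes.

```python
# A minimal sketch of Searle's Chinese room: the 'person' inside
# only matches symbols against a lookup table, comprehending nothing.
# The entries are hypothetical stand-ins for a gigantic table.
RULE_BOOK = {
    "你好": "Hello",
    "谢谢": "Thank you",
    "再见": "Goodbye",
}

def chinese_room(symbols: str) -> str:
    # Pure symbol manipulation: no meaning is involved anywhere.
    return RULE_BOOK.get(symbols, "???")

print(chinese_room("谢谢"))  # -> Thank you
```

From outside the room, the replies look competent; inside, there is nothing but size (the table) plus a trivially simple matching rule.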

Can the argument be reversed now? Something is intelligent when it acts intelligently — even when the internals are ‘just’ a gigantic number of elements (as in an immense lookup table) plus a pretty simple but to-the-point way of handling that amount.

“Is something intelligent?” is only half a question.

Thus, there is no single answer.

It is better to differentiate between implicit vs. explicit intelligence — or competence vs. comprehension, system-1 vs. system-2, or some other distinction in this direction.

Let’s stick to the first, going a bit deeper before answering any question.

Implicit — explicit

‘Explicit’ can also reside in how the implicit presents itself (interface-dependent) between many modules. These modules can be concepts, for instance, or what we call thoughts and feelings. The internals of any module may be implicit. It’s enough that a module acts explicitly at its interface, because that is all other modules see.

Of course, the module is also expected to act consistently the same way – more or less – each time it is called upon.

Welcome to how we think.

With our neurons and synapses continually in motion – being alive – we “never think the same thought twice,” even though we are generally little aware of this. Our modules (thoughts) are explicit only more or less and only at their interface.

Also, we don’t need to be explicitly perfect. Good enough is, well, good enough to survive and thrive in a natural environment. When we want to be perfect, we have to invent mathematics ― which we did.

For implicit intelligence, size + a simple mechanism is enough.

This is what we see in present-day GPT. Thus, the answer to “Is it intelligent?” is: implicitly, yes. It is competent to a surprising degree.

But explicitly? At this time, to a much smaller degree. Although it can handle explicit knowledge, it does so in a very implicit way. It lacks comprehension. It can gain that in many ways, and apparently, that’s what is going on now.

Which, of course, is dangerous! Without Compassionate input, humanity may become just one more temporary hominid all too quickly.

Two lessons from “A.I. is in the size.”

  • This quote is, in its new dress, very applicable to implicit intelligence. In this sense, we can be sure that GPT is only one example that has been stumbled upon. We may see implicit intelligence readily emerge with other kinds of systems as well. It’s in the size.
  • The other lesson is that the same principle may also be applicable to explicit intelligence. To attain this, we may need another kind of element ― probably a more formalized kind. But apart from that, here too, it’s in the size.

Size matters.

Of course, what also matters is the kind of elements involved and the way of handling them.

Note that this is ‘only’ about intelligence. How this will be used and how it will use itself appears to be another matter for now.

Will super-A.I. be ‘smarter’ than us also in its wisdom?
