“A.I. is in the Size.”

January 19, 2024 ― Artificial Intelligence

This famous quote by R.C. Schank (1991) takes on new relevance with GPT technology ― in a surprisingly different way.

How Schank interpreted his quote

He meant that one cannot conclude ‘intelligence’ from a simple demo ― as was usual in those days of purely conceptual GOFAI (Good Old-Fashioned A.I.). Back then, many Ph.D. students showed ‘intelligence’ within a system by, for instance, just letting it translate a few sentences.

Schank taught that one has to scale the system to see whether it still acts intelligently. Otherwise, he said, it’s just a simulation of intelligence.

Anno 2024, we see scaled systems acting intelligently ― or do we?

What happened with GPT: within a relatively simple paradigm, the sheer increase in parameters and training data has shown an unexpected (even to the developers) emergence of competencies. Such a system is called a foundation model because it can serve as a foundation for many concrete applications ― being generally applicable.

Here, indeed, are elements of whatever one may call ‘intelligence.’

Something special happens due to size.

This is also the case for natural (our human) intelligence.

Here, too, it’s in the size, as becomes apparent by delving into evolutionary matters concerning the brain.

Are we and GPT, therefore, just two examples of the same principle?

The ‘Chinese room’ thought experiment (J. Searle, 1980)

Here also, a simple mechanism is involved, and a complex result is attained.

Imagine a conversation in Chinese without anyone or anything comprehending Chinese. The size here lies in a gigantic lookup table of Chinese phrases and replies, used by a person who doesn’t know a single Chinese character. Searle argued that, as in this case, something can seem intelligent by acting intelligently without being intelligent. Because, he said, a lookup table is not intelligent. It’s just a simulation of intelligence.
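As a minimal sketch (in Python, with a handful of invented phrases ― not Searle’s actual rulebook), the whole mechanism can look like this: competent at the interface, with nothing inside that comprehends.

    # A toy sketch of the lookup-table mechanism; phrases are invented for
    # illustration. The table maps a Chinese input to a canned Chinese reply.
    LOOKUP_TABLE = {
        "你好": "你好！",              # "Hello" -> "Hello!"
        "你会说中文吗？": "会一点。",  # "Do you speak Chinese?" -> "A little."
    }

    def chinese_room(phrase: str) -> str:
        """Reply by pure lookup; no comprehension involved anywhere."""
        return LOOKUP_TABLE.get(phrase, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(chinese_room("你好"))  # prints: 你好！

Scaled up to billions of entries, the outside view becomes indistinguishable from a fluent speaker ― which is exactly the point of the thought experiment.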

Can the argument be reversed now? Something is intelligent when it acts intelligently ― even when the internals are ‘just’ a gigantic number of elements (as in an immense lookup table) and a fairly simple but spot-on way of handling that amount.

“Is something intelligent?” is only half a question.

Thus, there is no single answer.

It is better to differentiate between implicit vs. explicit intelligence ― or competence vs. comprehension, System 1 vs. System 2, or some other distinction in this direction.

Let’s stick to the first, going a bit deeper before answering any question.

Implicit ― explicit

‘Explicit’ can also reside in how the implicit presents itself at the interfaces between many modules ― it is interface-dependent. These modules can be concepts, for instance, or what we call thoughts and feelings. The internals of any module may be implicit. It’s enough that a module acts explicitly at its interface, because other modules see only this.

Of course, the module is also expected to act consistently the same way ― more or less ― each time it is called upon.
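A minimal sketch of this idea (in Python, all names hypothetical ― assuming nothing about actual neural mechanics): internals that are never exactly the same twice, behind an interface that behaves explicitly enough.

    import random

    # A toy 'module': its internals are implicit ― noisy, never identical
    # twice ― while its interface stays explicit and (more or less) consistent.
    def temperature_module(reading: float) -> str:
        """Answer 'warm' or 'cold'; the internals drift a little each call."""
        internal = reading + random.gauss(0, 0.3)  # 'living' internals in motion
        return "warm" if internal > 20.0 else "cold"

    # Other modules see only the interface: for a clear-cut input, the explicit
    # answer is (almost always) the same, though the internal state never is.
    print(temperature_module(25.0))  # prints: warm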

Welcome to how we think.

With our neurons and synapses continually in motion ― being alive ― we “never think the same thought twice,” even though we are generally hardly aware of this. Our modules (thoughts) are explicit only more or less, and only at their interface.

Also, we don’t need to be explicitly perfect. Good enough is, well, enough to survive and thrive in a natural environment. When we want to be perfect, we have to invent mathematics ― which we did.

For implicit intelligence, size + a simple mechanism is enough.

This is what we see in present-day GPT. Thus, the answer to “Is it intelligent?” is: implicitly, yes. It is competent to a surprising degree.

But explicitly? At this time, to a much smaller degree. Although it can handle explicit knowledge, it does so in a very implicit way. It lacks comprehension. It can gain that in many ways, and apparently that’s what is going on now.

Which, of course, is dangerous! Without Compassionate input, humanity may become just one more temporary hominid all too quickly.

Two lessons from “A.I. is in the size.”

  • In its new dress, this quote is very applicable to implicit intelligence. In this sense, we can be sure that GPT is only one example that has been stumbled upon. We may see implicit intelligence readily emerge with other kinds of elements as well. It’s in the size.
  • The other lesson is that the same principle may also apply to explicit intelligence. To attain this, we may need another kind of element ― probably a more formalized kind. But apart from that, here too, it’s in the size.

Size matters.

Of course, what also matters is the kind of elements involved and the way of handling them.

Note that this is ‘only’ about intelligence. How this will be used and how it will use itself appear to be another matter for now.

Will super-A.I. also be ‘smarter’ than us in its wisdom?
