will not be technological, but philosophical.
Of course, technology will be necessary to realize the philosophical. It will not be one more technological breakthrough, but rather a combination of new and old technologies.
“Present-day A.I. = sophisticated perception”
These are the words of Yann LeCun, a leading A.I. scientist and a founding father of convolutional networks, which at present play a major role in many deep learning applications.
Yann sees DNNs (Deep Neural Networks) as quite straightforward devices: basically, an input vector passes through a linear transformation (a weighted sum) followed by a non-linear function, layer after layer, over and over again, toward the output.
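To make this "linear, then non-linear, over and over" concrete, here is a minimal sketch of a forward pass in plain Python. The weights and layer sizes are made up purely for illustration; real networks learn theirs from data.

```python
def relu(x):
    # The non-linear step: keep positives, zero out negatives.
    return [max(0.0, v) for v in x]

def linear(weights, bias, x):
    # The linear step: one weighted sum per output unit.
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def forward(layers, x):
    # Repeat "linear, then non-linear" for every layer but the last.
    for i, (w, b) in enumerate(layers):
        x = linear(w, b, x)
        if i < len(layers) - 1:
            x = relu(x)
    return x

# Two tiny layers with illustrative (not learned) weights.
layers = [
    ([[0.5, -0.2], [0.1, 0.8]], [0.0, 0.1]),
    ([[1.0, -1.0]], [0.0]),
]
print(forward(layers, [1.0, 2.0]))
```

That is the whole trick of the forward pass: without the non-linear step in between, stacking layers would collapse into one big linear map and gain nothing.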
And amazingly, as counter-intuitively as anything gets, it works.
The problem before the breakthrough was that artificial neural networks were deemed to get quickly into a rut, caught in some local minimum from which they would never escape. However, with many layers (in DNNs), in (mathematically) high-dimensional space, such troublesome local minima turn out to be rare; most apparent dead ends are saddle points from which gradient descent can still find a way down.
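Gradient descent itself is a simple idea: repeatedly take a small step downhill along the slope of the loss. A toy sketch, using a one-dimensional made-up loss function (real networks descend in millions of dimensions at once):

```python
def grad_descent(grad, x0, lr=0.1, steps=100):
    # Repeatedly step against the gradient (i.e., downhill).
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Toy loss f(x) = (x - 3)^2, whose gradient is f'(x) = 2 * (x - 3).
# The minimum sits at x = 3; descent should land there.
x_min = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
print(x_min)
```

The learning rate `lr` and step count are illustrative; tuning them is much of the practical craft.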
Dear reader, never mind if you don’t fully get this, but you certainly get a feel for it.
The same Yann points out that this many-layers construct works because reality is compositional: it can be broken down into pieces, and each piece can be processed by itself. A car has wheels and doors. A wheel has rubber parts and metal parts.
This is where, at first sight, Yann and Jean-Luc (your humble writer) have different opinions.
In my view, the compositionality of reality is mainly a construct of the one who is perceiving this reality. The fit is closer when the perceiver is of the same kind as the builder of that (piece of) reality. A car is an excellent example of this, being a mechanical construct made by human beings. Its compositionality is part of the construction itself. This is also why it can readily be made en masse.
Meanwhile, we live in such a human-made world that compositionality is intrinsically dominant everywhere
except in nature
where – I perfectly agree, of course – it is also present, but to a lesser degree and much more in synthesis with non-compositionality. Another term for the latter is complexity. [see: “Complexity of Complexity”]
We have arms and legs and softer parts and harder parts. We also have a brain/mind, and within that, a lot of complexity. Especially in the cerebrum (as opposed to the cerebellum, for instance), our brain/mind is composed of a multitude of elements that act together in such ways that the result cannot be calculated from the sum of parts. In other words: It’s a complex melting pot. If Nature were an engineer, she would also be surprised that it works. Moreover, the more melting, the better it works!
Back to Yann. He sees two major obstacles to the further development of A.I. based on present-day technologies:
- There is no reasoning (nor planning/predictability) involved.
- Learning world models appears to be very difficult, if not impossible.
This makes present-day A.I., in fact, not intelligent at all. The term A.I. is, therefore, a misnomer, as is ‘machine learning.’ The machine does not actually ‘learn’ anything in the way that we humans would call ‘learning’ as applied to ourselves. It is closer to a book getting a few more pages. One still needs an intelligent interpreter to make anything of it. ‘Deep learning’ in A.I. until now comes down mainly to additional segmentation using an immense amount of data labeled by humans: supervised learning in DNNs.
I find this quite scary when applied to humanistic domains. It is highly control-based. In my view, it will not lead to ethically welcome results. Fortunately, it will probably lead to hardly any sustainable results.
The next revolution will be in real intelligence,
whether human-simulated or starting from a more abstract level.
Thus, is real A.I. ‘in the size’ (more of the same) or in a qualitatively different endeavor? Will it be enabled through sheer brute force, breaking through with some simple procedure(s) as Yann described in the recent ‘DNN revolution’ (largely engendered by himself and a few others)?
It will probably be something radically different. As you know by now, my view on this is a philosophical one, not in the sense of merely an armchair philosophy but a very pragmatic one. Technologically, there will be a search for the best combination of old and new technologies to make this philosophy optimally efficient. Much of this has already been done by me and others.
What will be this next revolution is not in the box.
It will be out of the box.
Here, again, I’m evolving in parallel with Yann: One of the ways to view the core will be energy-based self-supervised learning. Yann also calls this ‘unsupervised.’ If supervised learning is merely the icing on a cake, he sees self-supervised learning as the cake itself.
An intriguing question is how much prior structure is required. A fact is that humans learn world models pretty quickly. They do this learning mainly by learning to predict, gradually filling in the gaps, dealing with uncertainty on the fly.
PLUS a lot of flexibility within a transformer (again, kind of compositional) architecture.
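The ‘learning to predict, filling in the gaps’ idea can be caricatured in a few lines. In this sketch, the ‘label’ for each word is simply the word that follows it in the text itself, so no human annotation is needed; that is the essence of self-supervision. The toy corpus and the counting ‘model’ are, of course, purely illustrative stand-ins for real world-model learning.

```python
from collections import Counter, defaultdict

def learn_transitions(sequences):
    # Self-supervision: the training signal is the next element
    # of the sequence itself -- no human labeling involved.
    model = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            model[a][b] += 1
    return model

def predict_next(model, token):
    # Fill in the gap with the most frequently observed successor.
    return model[token].most_common(1)[0][0]

# Tiny made-up corpus; real self-supervised systems ingest
# vastly more data and predict in richer ways.
corpus = [
    ["the", "fire", "burns"],
    ["the", "fire", "glows"],
    ["the", "fire", "burns"],
]
model = learn_transitions(corpus)
print(predict_next(model, "fire"))  # "fire" is most often followed by "burns"
```

A transformer does something far more flexible than this frequency counting, but the supervisory signal is of the same kind: the data predicting itself.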
You may visualize this as a nicely burning fire within a circle of stones ― an ancient image with a futuristic implication.
Within a coaching setting, here comes Lisa. [see: “Lisa”] I think this is also the only way towards human-A.I. value alignment, on a journey towards Compassionate A.I. [see: “The Journey Towards Compassionate A.I.”]
A species-existential problem is that I don’t see many evolving on this path.