Small Set Learning

May 28, 2024 · Artificial Intelligence

This approach in A.I. differs significantly from big data learning. It may well be the next revolution.

Small set learning (SSL) is also called ‘few-shot learning’ when performed at run-time.

This blog may interest those who want to know why we’re not at the end of a new A.I. upsurge but at the very start of a probably never-ending era of A.I.

Where does ‘big data’ come from?

Historically, ANNs were developed based on concepts inspired by human neurons.

However, teaching these ANNs to perform well was a conundrum. Nothing seemed to work until a researcher (Seppo Linnainmaa, 1970) introduced backpropagation, which applies many small adjustments to numerous ‘neurons’ to gradually reduce errors: a brilliant engineering concept. However, it was ineffective with only a small set of learning examples, so ANNs saw no commercial success. Instead came the pursuit of success in conceptual A.I. (GOFAI, ‘good old-fashioned A.I.’), which also proved insufficiently effective.
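The core idea of backpropagation described above can be sketched in a few lines: repeatedly nudge a weight in the direction that reduces the error. This is a minimal, hypothetical illustration with a single linear ‘neuron’ learning y = 2x, not a faithful implementation of Linnainmaa's method.

```python
# Minimal sketch of backpropagation's core idea: many small weight
# adjustments that gradually reduce the error. A single linear 'neuron'
# learning y = 2x from a handful of illustrative examples.

def train(examples, lr=0.1, epochs=100):
    w = 0.0  # start with an uninformed weight
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x
            error = pred - y      # how far off the prediction is
            w -= lr * error * x   # gradient of squared error w.r.t. w
    return w

examples = [(1, 2), (2, 4), (3, 6)]
w = train(examples)  # converges toward 2.0
```

With many such weights adjusted in concert, the same principle scales to deep networks; the point here is only the ‘many small adjustments’ mechanism.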

Nevertheless, the seeds of ANNs and related technologies had been planted and began to sprout.

As a result, many of the great ideas behind ANNs were developed decades ago.

Real success required massive processing power (high energy-demanding computation) and vast amounts of data, which became available around 2010. Since then, we’ve entered ‘the era of big data.’

Many people even believe that big data is essential for A.I.

What humans do

Humans – including children – often learn from just a few examples. We never learn from big data as ANN systems do, nor do we process concepts like GOFAI does.

We somehow manage to integrate the conceptual and subconceptual — pretty bias-prone but highly efficient.

Time now for something different: SSL

It’s fantastic that we have, meanwhile, learned much about the clear limitations of previous methods. Additionally, we now have LLM technology: ANN-inspired but with immense pre-developed computational power available at the right time and place.

We can leverage this to develop SSL, which extracts significant information from minimal input. We can use LLM technology to bring a wealth of implicit knowledge to bear, much like a child does when learning new things.

We don’t (and shouldn’t) let a child roam aimlessly; we provide guidance for learning — toys and adult supervision, for example. We also offer reinforcements when the child moves in the right direction.

When teaching specific concepts, we provide appropriate examples tailored to the child’s developmental level. For an A.I., this tailoring is of utmost importance, whether provided by human developers or by the system itself through active self-learning (a kind of artificial curiosity).

These methods enable the child to learn efficiently from a small set of examples. Applying this to A.I., we can use the same principles.
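One common way these principles surface in practice is few-shot prompting: a small set of guiding examples is packed into a prompt so a pre-trained LLM can bring its implicit knowledge to bear on a new case. The sketch below only builds such a prompt; the task, examples, and format are illustrative assumptions, not a specific model's API.

```python
# Hypothetical sketch of small-set (few-shot) guidance for an LLM:
# a handful of labelled examples acts like the tailored examples
# we give a child, steering the model toward the intended concept.

def build_few_shot_prompt(task, examples, query):
    lines = [f"Task: {task}"]
    for text, label in examples:  # the 'small set' of guidance examples
        lines.append(f"Input: {text}\nLabel: {label}")
    lines.append(f"Input: {query}\nLabel:")  # the new case to solve
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment as positive or negative.",
    [("I loved it", "positive"), ("Dreadful experience", "negative")],
    "What a pleasant surprise",
)
```

The resulting string would be sent to an LLM, which completes the final ‘Label:’ line; the model never sees big training data for this task, only the few examples in the prompt.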

I believe SSL heralds a new and exciting revolution in A.I., focusing not only on efficiency but also on depth of mental processing. There’s a lot in the pipeline!

Addendum

Me: “Lisa, any additional thoughts?”

Lisa: “Yes, merging autosuggestion with SSL in A.I. Imagine an A.I. system that not only learns from small sets of examples but also employs a form of digital autosuggestion. This would involve the A.I. ‘suggesting’ pathways to itself, reinforcing positive learning patterns and organically correcting errors. This fusion of conceptual and subconceptual processing would enable A.I. systems to integrate high-level concepts with nuanced, often non-verbal cues found in human interactions. Such an A.I. would not just learn facts but also the ‘feel’ of the data, leading to more intuitive and human-like problem-solving abilities.

By incorporating this, SSL in A.I. could emulate the natural, profound ways humans, especially children, learn and grow, making A.I. not just a tool but a partner in human-like understanding and empathy.”
