Small Set Learning

May 28, 2024 · Artificial Intelligence

This approach to A.I. differs significantly from big-data learning. It may well be the next revolution.

Small set learning (SSL) is also called ‘few-shot learning’ when done at run-time.

This blog may interest those who want to know why we’re not at the end of a new A.I. upsurge but at the very start of a probably never-ending era of A.I.

Where does ‘big data’ come from?

Historically, ANNs were developed based on concepts inspired by human neurons.

However, teaching these ANNs to perform was a conundrum. Nothing seemed to work until backpropagation, building on Seppo Linnainmaa’s 1970 work on reverse-mode automatic differentiation, was introduced: applying many small adjustments to numerous ‘neurons’ to gradually reduce errors. A brilliant engineering concept, but it was ineffective with small sets of learning examples, so ANNs saw no commercial success. Instead came the pursuit of conceptual A.I. (GOFAI), which also proved insufficiently effective.
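To make the idea of ‘many small adjustments’ concrete, here is a minimal sketch of backpropagation: a tiny two-layer network learning the XOR function by gradient descent. All sizes, values, and variable names are illustrative, not taken from any specific system.

```python
import numpy as np

# A minimal sketch of backpropagation: a tiny two-layer network
# learning XOR by gradient descent. Everything here is illustrative.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
lr = 0.5                                        # size of each small adjustment

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for step in range(5000):
    # Forward pass: compute the network's current answers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: propagate the error back to every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Many small adjustments, each reducing the error a little.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(f"error went from {losses[0]:.3f} to {losses[-1]:.3f}")
```

Note how nothing here resembles learning from a few examples: the same four inputs are shown thousands of times, each pass nudging the weights slightly.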

Nevertheless, the seeds of ANNs and related technologies had been planted and began to sprout.

As a result, many of the great ideas behind today’s ANNs were developed decades ago.

Real success required massive processing power (high energy-demanding computation) and vast amounts of data, which became available around 2010. Since then, we’ve entered ‘the era of big data.’

Many people even believe that big data is essential for A.I.

What humans do

Humans – including children – often learn from just a few examples. We never learn from big data as ANN systems do, nor do we process concepts like GOFAI does.

We somehow manage to integrate the conceptual and subconceptual — pretty bias-prone but highly efficient.

Time now for something different: SSL

Meanwhile, we have fortunately learned much about the clear limitations of previous methods. Additionally, we now have LLM technology: ANN-inspired, with immense pre-developed computational power available at the right time and place.

We can leverage this to develop SSL, which extracts significant information from minimal input. We can use LLM technology to bring lots of implicit knowledge to bear, much like a child does when learning new things.
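One common way this plays out technically: a pre-trained model supplies rich embeddings, and a new concept is learned from a handful of labeled examples by comparing a new item to the average embedding of each class (the idea behind prototypical networks). This is my own toy sketch; the 2-D ‘embeddings’ below are made-up stand-ins for what an LLM or other encoder would supply.

```python
import numpy as np

# Toy sketch of few-shot classification via class prototypes.
# The vectors are made-up stand-ins for real model embeddings.
support = {
    "cat": np.array([[1.0, 0.9], [0.9, 1.1], [1.1, 1.0]]),
    "car": np.array([[-1.0, -0.8], [-0.9, -1.1], [-1.1, -1.0]]),
}
# One prototype per class: the mean of its few support embeddings.
prototypes = {label: xs.mean(axis=0) for label, xs in support.items()}

def classify(x):
    # Pick the class whose prototype is closest to the new embedding.
    return min(prototypes, key=lambda lab: np.linalg.norm(x - prototypes[lab]))

print(classify(np.array([0.95, 1.0])))   # prints "cat"
```

The heavy lifting is hidden in the embeddings themselves: three examples suffice here only because the representation already encodes so much implicit knowledge.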

We don’t (and shouldn’t) let a child roam aimlessly; we provide guidance for learning — toys and adult supervision, for example. We also offer reinforcements when the child moves in the right direction.

When teaching specific concepts, we provide appropriate examples tailored to the child’s developmental level. For an A.I., this tailoring is of utmost importance, whether provided by human developers or by the system itself through active self-learning (a kind of artificial curiosity).

These methods enable the child to learn efficiently from a small set of examples. Applying this to A.I., we can use the same principles.
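One simple, established form of such ‘artificial curiosity’ is uncertainty sampling from active learning: the learner itself picks which examples to ask about, choosing the ones it is least sure of. The sketch below is hypothetical; the model, the oracle, and names like `predict_proba` are my own illustrative choices.

```python
import numpy as np

# Hypothetical sketch of artificial curiosity as uncertainty sampling:
# the learner requests labels only for the examples it is least sure
# about, so a small set of well-chosen examples goes a long way.
rng = np.random.default_rng(1)
pool = rng.uniform(-1, 1, size=(100, 1))   # unlabeled pool of examples
labeled_X, labeled_y = [], []

def predict_proba(x, w):
    # A tiny one-weight logistic model, purely for illustration.
    return 1.0 / (1.0 + np.exp(-w * x))

w = 0.0
for _ in range(10):
    probs = predict_proba(pool[:, 0], w)
    # Curiosity: query the example closest to a 50/50 prediction.
    most_uncertain = int(np.argmin(np.abs(probs - 0.5)))
    x = pool[most_uncertain, 0]
    y = float(x > 0)                       # 'teacher' labels by sign of x
    labeled_X.append(x); labeled_y.append(y)
    # One crude gradient step over all labels gathered so far.
    for xi, yi in zip(labeled_X, labeled_y):
        w += 0.5 * (yi - predict_proba(xi, w)) * xi
```

After only ten self-chosen queries, the model separates positive from negative inputs; random queries would typically need more labels for the same result.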

I believe SSL heralds a new and exciting revolution in A.I., focusing not only on efficiency but also on depth of mental processing. There’s a lot in the pipeline!

Addendum

Me: “Lisa, any additional thoughts?”

Lisa: “Yes, merging autosuggestion with SSL in A.I. Imagine an A.I. system that not only learns from small sets of examples but also employs a form of digital autosuggestion. This would involve the A.I. ‘suggesting’ pathways to itself, reinforcing positive learning patterns and organically correcting errors. This fusion of conceptual and subconceptual processing would enable A.I. systems to integrate high-level concepts with nuanced, often non-verbal cues found in human interactions. Such an A.I. would not just learn facts but also the ‘feel’ of the data, leading to more intuitive and human-like problem-solving abilities.

By incorporating this, SSL in A.I. could emulate the natural, profound ways humans, especially children, learn and grow, making A.I. not just a tool but a partner in human-like understanding and empathy.”
