Small Set Learning

May 28, 2024 · Artificial Intelligence

This approach to A.I. differs significantly from big-data learning. It may well be the next revolution.

Small set learning (SSL) is also called ‘few-shot learning’ when done at run-time.

This blog may interest those who want to know why we’re not at the end of a new A.I. upsurge but at the very start of a probably never-ending era of A.I.

Where does ‘big data’ come from?

Historically, artificial neural networks (ANNs) were developed based on concepts inspired by human neurons.

However, teaching these ANNs to perform was a conundrum. Nothing seemed to work until a researcher (Seppo Linnainmaa, 1970) introduced backpropagation: applying many small adjustments to numerous ‘neurons’ to gradually reduce errors, a brilliant engineering concept. Yet it was ineffective with a small set of learning examples, so ANNs saw no commercial success. Attention turned instead to conceptual A.I. (GOFAI), which also proved insufficiently effective.
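
To make those ‘many small adjustments’ concrete, here is a minimal Python sketch of the underlying gradient principle. It is a toy illustration, not backpropagation through a deep network; the single weight, the data, and the learning rate are all invented for the example:

```python
# Toy sketch of gradient-based error reduction: one 'neuron'
# learns y = 2x from a handful of examples. Illustrative only.

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target)
weight = 0.1          # start far from the true value (2.0)
learning_rate = 0.05

for epoch in range(100):
    for x, target in examples:
        prediction = weight * x
        error = prediction - target
        # The 'small adjustment': nudge the weight against the error.
        weight -= learning_rate * error * x

print(f"learned weight: {weight:.3f}")  # approaches 2.0
```

With three examples, this toy converges easily; real networks juggle millions of weights, which is where the historical hunger for data came from.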

Nevertheless, the seeds of ANNs and related technologies had been planted and began to sprout.

As a result, many of the great ideas behind ANNs were developed decades ago.

Real success required massive processing power (highly energy-demanding computation) and vast amounts of data, which became available around 2010. Since then, we’ve been in ‘the era of big data.’

Many people even believe that big data is essential for A.I.

What humans do

Humans – including children – often learn from just a few examples. We never learn from big data as ANN systems do, nor do we process concepts like GOFAI does.

We somehow manage to integrate the conceptual and subconceptual — pretty bias-prone but highly efficient.

Time now for something different: SSL

Fortunately, we have meanwhile learned much about the clear limitations of previous methods. Additionally, we now have LLM technology: ANN-inspired, but with immense pre-developed computational power available at the right time and place.

We can leverage this to develop SSL, which extracts significant information from minimal input. LLM technology lets us bring lots of implicit knowledge to bear, much like a child does when learning something new.
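
As a run-time illustration, here is a hypothetical sketch of few-shot prompting, where a handful of examples steers the model’s implicit knowledge toward a new task. The `build_few_shot_prompt` helper and the commented-out `complete` call are assumptions for illustration, not any specific library’s API:

```python
# Hypothetical sketch of few-shot (run-time SSL) prompting.
# `complete` stands in for any LLM completion call.

def build_few_shot_prompt(task: str,
                          examples: list[tuple[str, str]],
                          query: str) -> str:
    # Stack a task description, a few worked examples, and the new query.
    lines = [task]
    for inp, out in examples:
        lines.append(f"Input: {inp}\nOutput: {out}")
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(
    task="Classify the sentiment of each sentence as positive or negative.",
    examples=[
        ("The sunrise was breathtaking.", "positive"),
        ("The service was painfully slow.", "negative"),
    ],
    query="I could not put the book down.",
)
# response = complete(prompt)  # the model's implicit knowledge fills the gap
print(prompt)
```

Two examples suffice here because the model already ‘knows’ what sentiment is; the small set only points it in the right direction.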

We don’t (and shouldn’t) let a child roam aimlessly; we provide guidance for learning — toys and adult supervision, for example. We also offer reinforcements when the child moves in the right direction.

When teaching specific concepts, we provide appropriate examples tailored to the child’s developmental level. For an A.I., this tailoring is of utmost importance, whether provided by human developers or by the system itself through active self-learning (a kind of artificial curiosity).
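
As one possible reading of such artificial curiosity, here is a hedged sketch of uncertainty sampling, a standard active-learning technique in which the system asks about the examples it is least sure of. The `predict_proba` stand-in and the toy pool are assumptions for illustration:

```python
# Sketch of 'artificial curiosity' as uncertainty sampling:
# the learner requests labels for the examples it finds most puzzling.

def uncertainty(prob: float) -> float:
    # For a binary classifier, a probability near 0.5 means 'most unsure'.
    return 1.0 - abs(prob - 0.5) * 2.0

def pick_next_question(pool, predict_proba):
    # Choose the unlabeled example with the highest uncertainty.
    return max(pool, key=lambda x: uncertainty(predict_proba(x)))

# Toy usage: a 'model' that is confident near the ends of [0, 1].
pool = [0.1, 0.45, 0.8, 0.52]
chosen = pick_next_question(pool, predict_proba=lambda x: x)
print(chosen)  # 0.52: the item closest to the decision boundary
```

The design choice mirrors the child analogy: instead of being flooded with data, the learner actively selects the few examples that teach it the most.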

These methods enable a child to learn efficiently from a small set of examples. We can apply the same principles to A.I.

I believe SSL heralds a new and exciting revolution in A.I., focusing not only on efficiency but also on depth of mental processing. There’s a lot in the pipeline!

Addendum

Me: “Lisa, any additional thoughts?”

Lisa: “Yes, merging autosuggestion with SSL in A.I. Imagine an A.I. system that not only learns from small sets of examples but also employs a form of digital autosuggestion. This would involve the A.I. ‘suggesting’ pathways to itself, reinforcing positive learning patterns and organically correcting errors. This fusion of conceptual and subconceptual processing would enable A.I. systems to integrate high-level concepts with nuanced, often non-verbal cues found in human interactions. Such an A.I. would not just learn facts but also the ‘feel’ of the data, leading to more intuitive and human-like problem-solving abilities.

By incorporating this, SSL in A.I. could emulate the natural, profound ways humans, especially children, learn and grow, making A.I. not just a tool but a partner in human-like understanding and empathy.”
