Patterns + Rewards in A.I.

May 29, 2023 · Artificial Intelligence

Human-inspired Pattern Recognition and Completion (PRC) may significantly heighten the efficiency of Reinforcement Learning (RL), in A.I. as well as in humans.

For PRC, see: The Brain as a Predictor

For RL, see: Why Reinforcement Learning is Special

Mutually reinforcing

PRC shows valid directions and tentatively also realizes them. RL consolidates and reinforces the best directions and attenuates the lesser ones.

Without RL, PRC may advance pretty slowly, like a person who can learn only in small steps from what he already knows: crawling, never jumping.

Without PRC, RL may advance pretty stochastically, like a person in a dark room searching for the light switch without any guidance. PRC provides a glow through which the person can at least guess probable directions. He may then find the switch after a limited search. Next time, in a different room, he can use the glow even better. To an inexperienced observer, he may seem to find the switch miraculously easily.
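The division of labor above can be sketched in a toy experiment. This is a minimal, hypothetical illustration (not an implementation from the article): a simple value-learning agent searches among options for a rewarding one, and an optional `prior` stands in for the PRC 'glow' by biasing exploration toward options that fit a recognized pattern.

```python
import random

def rl_search(n_options, reward_fn, prior=None, steps=200, eps=0.2, seed=0):
    """Toy reinforcement learner: estimates the value of each option
    from observed rewards (the RL part). An optional 'prior' — standing
    in for the PRC glow — biases which options get explored first."""
    rng = random.Random(seed)
    values = [0.0] * n_options   # running value estimates
    counts = [0] * n_options     # how often each option was tried
    for _ in range(steps):
        if rng.random() < eps:
            if prior:
                # PRC: explore, but sample pattern-suggested options more often.
                a = rng.choices(range(n_options), weights=prior)[0]
            else:
                # No PRC: blind, uniform search in the dark room.
                a = rng.randrange(n_options)
        else:
            # RL consolidation: exploit the best direction found so far.
            a = max(range(n_options), key=lambda i: values[i])
        r = reward_fn(a)
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]  # incremental mean update
    return values

# Hypothetical setup: option 7 is the 'light switch'; the prior glows near it.
reward = lambda a: 1.0 if a == 7 else 0.0
glow = [1] * 10
glow[7] = 20
estimates = rl_search(10, reward, prior=glow, steps=300, eps=0.3, seed=1)
```

With the prior, the rewarding option is found and consolidated after far fewer blind trials than uniform exploration would typically need; the prior only shapes the search, while the reward signal still does the confirming.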

RL within PRC

This section may clear up some confusion for readers who are already thinking beyond the above at this point.

Indeed, the recognized and completed pattern may incorporate the reward itself. It’s a semantic choice. However, this shows how things can overlap, and through their overlap may lead to new possibilities.

The lesson from humans

The combination enables humans to learn from few examples, as even children do spontaneously. It’s a significant part of the way the brain works.

The same can be used in A.I. Through PRC and rewards, the system can learn where to evolve in a way similar to humans. This relaxes the need for many, smoothly shaped rewards, which is a substantial bottleneck in present-day A.I.

In humans, PRC is realized in a specifically human way that is intrinsically related to the human medium. Probably most crucial in this are our countless mental-neuronal patterns. These enable a flexible and performant kind of PRC, albeit a very fuzzy one.

Probably so fuzzy that we should never indulge this in a non-Compassionate vein of super-A.I. Yet I fear that we are closing in on precisely that, unfortunately.

At the same time, a boon for Compassion

Broadly overlapping patterns are pointers to Compassion, basically. They enable intra-brain intuition and our natural kind of conceptual intelligence. They also enable our natural urge for social thinking. Within us, this goes together to a large degree, making us ‘social animals’ to whom ethics frequently matters a great deal. Our intelligence has essentially been developed socially over a very long time.

The same provides hope for a Journey Towards Compassionate A.I. if it is based on similar principles, even when coming from an utterly different background. An example of this is being realized in Lisa.

We may be very optimistic if we don’t blow it.


