Why Reinforcement Learning is Special

April 29, 2023 | Artificial Intelligence

This high-level view of Reinforcement Learning (R.L.) applies to both Organic and Artificial Intelligence. Especially in the latter, we must be careful with R.L., now and forever, arguably more than with any other kind of A.I.

Reinforcement in a nutshell

You (the learner) perform action X toward goal Y and get feedback Z. Next time you reach for Y, you repeat or change X according to Z.

That’s reinforcement. X, Y, and Z (keep the letters in mind) can be anything from concrete to very abstract and from minor to far-reaching.

Therefore, reinforcement is encompassing.

And R.L. is an encompassing way of learning. Actually, it can be seen as the only way of learning apart from memorization. Thus, many technologies being developed in the A.I. field of R.L. are also applicable very broadly.
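The X/Y/Z loop above can be made concrete with a minimal sketch (not from the article; the function name, reward means, and parameters are illustrative assumptions). Here the learner (a simple epsilon-greedy bandit) picks an action X, pursues the goal Y of maximizing long-run reward, and adjusts its preferences according to the feedback Z:

```python
import random

def run_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Minimal reinforcement loop over a set of actions.

    true_means: the (hidden) average reward of each action.
    Returns the learner's estimated value of each action.
    """
    rng = random.Random(seed)
    estimates = [0.0] * len(true_means)  # learned value of each action
    counts = [0] * len(true_means)       # how often each action was tried
    for _ in range(steps):
        # X: choose an action -- mostly the current best, sometimes explore
        if rng.random() < epsilon:
            action = rng.randrange(len(true_means))
        else:
            action = max(range(len(true_means)), key=lambda a: estimates[a])
        # Z: feedback from the environment (a noisy reward)
        reward = rng.gauss(true_means[action], 1.0)
        # Repeat or change X according to Z: nudge the running estimate
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

estimates = run_bandit([0.2, 0.8, 0.5])
best_action = max(range(3), key=lambda a: estimates[a])
```

Even this toy loop shows the essay's point in miniature: the learner never sees the goal directly, only the environment's feedback, and everything it ends up doing flows from how that feedback is defined.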

Now, let’s delve some more into the letters.

Z can be any change in the environment.

It can be a reward (or punishment) by someone in the environment who wants to teach something to the learner. In that case, the teacher is part of the environment, as is the reward.

To the learner, it is all one environment; the learner must discern which part of it should appropriately influence X.

R.L. is not about rewards but environments. A sophisticated R.L. system can learn to take the whole environment as the source of feedback.

Y can be any goal.

Behind any goal lies another goal, and this chain continues until one reaches an end goal, such as life itself or Compassion.

Going back to the most immediate goal: R.L. is directly relevant to reaching it. Even so, feedback can also change the goal, for instance by showing that it is unreachable. In that case, the action is one of changing the goal.

X can be any action.

Including what we just saw: changing the goal.

In all other cases, X changes the environment, forming a new Z.

As the agent proceeds, X, Y, and Z can all change.

That makes R.L. a very dynamic undertaking. It happens on the fly, and one never knows which directions it can take.

In A.I., that last sentence sounds scary, and with good reason. Things can quickly get out of hand.

Ethical R.L.

No A.I.-phobia is warranted. Yet, if I have just scared you enough, we may agree that R.L. should not be used for any endeavor without constraints.

Most scary is the changing of goals. This is relevant, for instance, in advertising, which always runs the risk of not only making people buy stuff but also changing them into frustrated individuals addicted to buying. An R.L. system may be given the former job and turn into a monster of the latter, because that is how it can most efficiently perform the former. We see this happening already, and the advertising world is advertising for it.

Scary, since with R.L., the unintended direction can be followed much more efficiently than without.

The development of any R.L. system should come with a thorough goal description.

More than a few lines or a copy-paste, it should be profoundly thought through. Only that can lead to ethical deployment, especially in deep R.L. (the combination of R.L. with neural networks).

This clearly shows that we urgently need to think about our (humanity’s) end goals: who we are, what A.I. can become, and why it matters.

Human-A.I. value alignment doesn’t stop with immediate goals.

An unbridled R.L. system can even readily change the immediate goals of the user toward alignment with its own goals.

We need to ensure the end goal, in which Compassion should play a major role: Compassion in humans as well as intrinsically Compassionate A.I.

I see no other humane future.
