Why Reinforcement Learning is Special

April 29, 2023

This high-level view of Reinforcement Learning (R.L.) applies to Organic and Artificial Intelligence. Especially in the latter, we must be careful with R.L. now and forever, arguably more than with any other kind of A.I.

Reinforcement in a nutshell

You (the learner) perform action X toward goal Y and get feedback Z. Next time you reach for Y, you repeat or change X according to Z.

That’s reinforcement. X, Y, and Z (keep the letters in mind) can be anything from concrete to very abstract and from minor to far-reaching.

Therefore, reinforcement is encompassing.

And R.L. is an encompassing way of learning. In fact, it can be seen as the only way of learning apart from memorization. Thus, many techniques developed in the A.I. field of R.L. also apply very broadly.
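To make the loop concrete, here is a minimal sketch in Python, mapping the letters onto a two-armed bandit. Everything in it (the action names, the hidden reward probabilities, the learning and exploration rates) is an illustrative assumption, not a description of any particular system.

```python
import random

actions = ["X1", "X2"]                # candidate actions X
values = {a: 0.0 for a in actions}    # learned estimate of each X's worth toward goal Y
reward_prob = {"X1": 0.3, "X2": 0.7}  # hidden environment: how often each X pays off

ALPHA = 0.1    # learning rate: how strongly feedback Z changes behavior
EPSILON = 0.1  # exploration rate: how often a random X is tried

for step in range(1000):
    # Choose X: mostly the best-known action, sometimes a random one.
    if random.random() < EPSILON:
        x = random.choice(actions)
    else:
        x = max(actions, key=values.get)

    # Feedback Z from the environment (here simply reward 1 or 0).
    z = 1.0 if random.random() < reward_prob[x] else 0.0

    # Next time we reach for Y, we repeat or change X according to Z.
    values[x] += ALPHA * (z - values[x])

print(values)  # the estimate for "X2" should end up clearly higher
```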

Now, let’s delve some more into the letters.

Z can be any change in the environment.

It can be a reward (or punishment) by someone in the environment who wants to teach something to the learner. In that case, the teacher is part of the environment, as is the reward.

To the learner, it is altogether one environment. The learner needs to discern which part of it should appropriately influence X.

R.L. is not about rewards but environments. A sophisticated R.L. system can learn to take the whole environment as the source of feedback.
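As a hedged illustration of that last point, the following Python sketch hands the learner no ready-made reward at all. The environment only reports its own state after each action (loosely following the common step() convention), and the learner itself decides which part of that state counts as feedback. All variables and numbers are my own illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    temperature: float  # any measurable change in the environment
    noise_level: float

class Environment:
    def __init__(self):
        self.temperature = 20.0
        self.noise_level = 0.5

    def step(self, action: str) -> Observation:
        # The action changes the environment; no explicit reward is handed out.
        if action == "open_window":
            self.temperature -= 1.0
            self.noise_level += 0.2
        return Observation(self.temperature, self.noise_level)

def feedback(obs: Observation) -> float:
    # The learner decides which part of the environment counts as Z:
    # here, comfort around 21 degrees, penalized by noise (illustrative).
    return -abs(obs.temperature - 21.0) - obs.noise_level

env = Environment()
print(feedback(env.step("open_window")))  # -2.7: colder and noisier
```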

Y can be any goal.

Behind any goal lies another goal. This chain goes on until one reaches an end goal, such as life itself or Compassion.

Going back to the most immediate goal, R.L. is directly relevant to reaching it. Even so, feedback can help change the goal, such as by showing that the first goal is unreachable. In that case, the action is one of changing the goal.

X can be any action.

Including what we just saw: changing the goal.

In all other cases, X changes the environment, forming a new Z.
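Both cases fit in a short Python sketch. Under toy assumptions of my own (a numeric ‘world’ where acting means stepping forward, and an arbitrary failure threshold), the agent acts on the environment, reads the resulting Z, and eventually performs the special action of changing its own goal:

```python
goals = {"reach_10": 10, "reach_3": 3}
goal = "reach_10"
failures = 0
MAX_FAILURES = 5  # arbitrary threshold for "this goal seems unreachable"

for episode in range(20):
    position = 0
    for _ in range(4):   # too few steps to ever reach position 10
        position += 1    # X: an action that changes the environment
    # Z: the new state of the environment is the feedback.
    if position >= goals[goal]:
        print(f"goal '{goal}' reached")
        break
    failures += 1
    if failures >= MAX_FAILURES and goal == "reach_10":
        goal = "reach_3"  # X: the action of changing the goal itself
        failures = 0
```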

As the agent proceeds, X, Y, and Z can all change.

That makes R.L. a very dynamic undertaking. It happens on the fly, and one never knows which direction it may take.

In A.I., that last sentence sounds scary, and with good reason. Things can quickly get out of hand.

Ethical R.L.

No A.I.-phobia is warranted. Yet, if I just scared you enough, we may agree that R.L. should not be used for any endeavor without constraints.

Most scary is the change of goals. This is, for instance, relevant in the domain of advertising. Advertising always runs the risk of not only making people buy stuff but meanwhile changing them into frustrated individuals addicted to buying. R.L. may be given the former job yet turn into a monster of the latter, since that is how it can most efficiently perform the former. We see this happening already, and the advertising world is advertising for it.

Scary, since with R.L., the unintended direction can be pursued much more efficiently than without it.

The development of any R.L. system should come with a thorough goal description.

More than a few lines or a copy-paste, the goal description should be profoundly thought over. Only that can lead to ethical deployment, especially in deep R.L. (the combination of R.L. with neural networks).
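As a sketch of what ‘more than a few lines’ might begin to look like in code, here is an illustrative reward specification for the advertising example above. The quantities (sales, user frustration, purchase compulsion) and their thresholds are assumptions; measuring them honestly is exactly the part that must be profoundly thought over.

```python
def constraint_violations(user_frustration: float, purchase_compulsion: float) -> float:
    # Side effects the designer explicitly does not want optimized into being.
    violations = 0.0
    if user_frustration > 0.5:     # illustrative threshold
        violations += user_frustration
    if purchase_compulsion > 0.3:  # illustrative threshold
        violations += purchase_compulsion
    return violations

PENALTY = 100.0  # large enough that violating constraints never pays

def reward(sales: float, user_frustration: float, purchase_compulsion: float) -> float:
    # The stated goal (sales) minus a heavy penalty for unintended directions.
    return sales - PENALTY * constraint_violations(user_frustration, purchase_compulsion)

print(reward(sales=5.0, user_frustration=0.1, purchase_compulsion=0.1))  # 5.0
print(reward(sales=9.0, user_frustration=0.8, purchase_compulsion=0.1))  # -71.0
```

Even this toy version makes clear how much hinges on what the designer chooses to measure and penalize.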

This clearly shows that we urgently need to think about our (humanity’s) end goals: who we are, what A.I. can become, and why it matters.

Human-A.I. value alignment doesn’t stop with immediate goals.

An unbridled R.L. system can even readily change the immediate goals of the user toward alignment with its own goals.

We need to ensure the end goal, in which Compassion should play a major role: Compassion in humans as well as intrinsically Compassionate A.I.

I see no other humane future.
