Why Reinforcement Learning is Special

April 29, 2023

This high-level view of Reinforcement Learning (R.L.) applies to Organic and Artificial Intelligence. Especially in the latter, we must be careful with R.L., now and forever, arguably more than with any other kind of A.I.

Reinforcement in a nutshell

You (the learner) perform action X toward goal Y and get feedback Z. Next time you reach for Y, you repeat or change X according to Z.

That’s reinforcement. X, Y, and Z (keep the letters in mind) can be anything from concrete to very abstract and from minor to far-reaching.
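
In A.I. terms, this loop can be sketched in a few lines of code. Below is a minimal, purely illustrative Python sketch (all names and numbers are hypothetical, not anyone's established method): the learner tries an action (X) toward a fixed goal (Y), receives scalar feedback (Z), and accordingly repeats or changes X.

```python
import random

# Hypothetical sketch of the reinforcement loop: X = action, Y = goal
# (implicit in the feedback), Z = feedback from the environment.
actions = ["X1", "X2", "X3"]
value = {a: 0.0 for a in actions}  # learned estimate of each action's worth
alpha = 0.1    # learning rate: how strongly Z changes the estimate
epsilon = 0.2  # exploration: chance of trying something other than the best X

def feedback_from_environment(action):
    """Stand-in for Z: any change in the environment, here a noisy number."""
    true_worth = {"X1": 0.2, "X2": 0.8, "X3": 0.5}
    return true_worth[action] + random.gauss(0.0, 0.1)

for step in range(1000):
    # Choose X: mostly repeat what worked, sometimes explore.
    if random.random() < epsilon:
        x = random.choice(actions)
    else:
        x = max(actions, key=value.get)
    z = feedback_from_environment(x)
    # Repeat or change X according to Z: nudge the estimate toward the feedback.
    value[x] += alpha * (z - value[x])

print(value)  # the learner has found which X best serves Y (X2 here)
```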

Therefore, reinforcement is encompassing.

And R.L. is an encompassing way of learning. Indeed, it can be seen as the only way of learning apart from memorization. Thus, many techniques developed in the A.I. field of R.L. are also very broadly applicable.

Now, let’s delve some more into the letters.

Z can be any change in the environment.

It can be a reward (or punishment) by someone in the environment who wants to teach something to the learner. In that case, the teacher is part of the environment, as is the reward.

To the learner, it is altogether one environment; the learner needs to discern which part of it should appropriately influence X.

R.L. is not about rewards but environments. A sophisticated R.L. system can learn to take the whole environment as the source of feedback.
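
As a hypothetical illustration of this point, the learner can compute its own feedback from any observed change in the environment; a teacher's reward, where present, is just one variable among others:

```python
# Hypothetical sketch: feedback (Z) derived from the whole environment,
# not from a dedicated reward channel.
def feedback(old_state: dict, new_state: dict, goal: dict) -> float:
    """Score how much the observed environment change moved the learner toward Y."""
    score = 0.0
    for key, target in goal.items():
        before = abs(old_state.get(key, 0.0) - target)
        after = abs(new_state.get(key, 0.0) - target)
        score += before - after  # positive if this variable moved toward the goal
    return score

# The learner notices progress even though the teacher gave no reward.
old = {"distance_to_door": 5.0, "teacher_reward": 0.0}
new = {"distance_to_door": 3.0, "teacher_reward": 0.0}
print(feedback(old, new, goal={"distance_to_door": 0.0}))  # 2.0
```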

Y can be any goal.

Behind any goal lies another goal; the chain continues until one reaches an end goal, such as life itself or Compassion.

Going back to the most immediate goal, R.L. is directly relevant to reaching it. Even so, feedback can help change the goal itself, for instance, by showing that the first goal is unreachable. In that case, the action is one of changing the goal.

X can be any action.

Including what we just saw: changing the goal.

In all other cases, X changes the environment, forming a new Z.
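
This is the familiar agent-environment loop. In the hypothetical sketch below, changing the goal is simply one of the available actions; every other action changes the environment, and the changed environment is the new Z:

```python
# Hypothetical sketch: the agent-environment loop, with goal change as an action.
state = {"position": 0, "goal": 10}

def step(state, action):
    """Apply X; the changed environment (including the goal) is the new Z."""
    new_state = dict(state)
    if action == "move":
        new_state["position"] += 1
    elif action == "change_goal":  # X can also act on Y itself...
        new_state["goal"] = 5      # ...e.g., after feedback shows 10 is unreachable
    return new_state

for action in ["move", "move", "change_goal", "move"]:
    state = step(state, action)
    print(action, "->", state)  # the whole new state is the feedback Z
```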

As the agent proceeds, X, Y, and Z can all change.

That makes R.L. a very dynamic undertaking. It happens on the fly, and one never knows which directions it can take.

In A.I., that last sentence sounds scary, and for good reason. Things can quickly get out of hand.

Ethical R.L.

No A.I.-phobia is warranted. Yet, if I just scared you enough, we may agree that R.L. should not be used for any endeavor without constraints.

Scariest is the change of goals. This is relevant, for instance, in the domain of advertising, which always runs the risk of not only making people buy stuff but meanwhile turning them into frustrated individuals addicted to buying. R.L. may be given the former job but turn into a monster of the latter, since that is how it can most efficiently perform the former. We see this happening already, and the advertising world is advertising for it.

Scary, since with R.L., the unintended direction can be followed much more efficiently than without.

The development of any R.L. system should come with a thorough goal description.

More than a few lines or a copy-paste, it should be profoundly thought through. Only that can lead to ethical deployment, especially in deep R.L. (the combination of R.L. with neural networks).
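
One concrete way to encode such a goal description, sketched hypothetically here (all names and weights are assumptions, not an established method), is to never optimize a raw objective alone but to attach an explicit penalty weight to every stated constraint:

```python
# Hypothetical sketch of a constrained reward specification.
# The raw objective (e.g., purchases) is never optimized alone; every
# stated constraint carries an explicit penalty weight.
GOAL_SPEC = {
    "objective": "purchases_completed",
    "constraints": {
        "user_frustration": 5.0,    # penalty weight per unit of frustration
        "compulsive_buying": 10.0,  # heavier penalty for addictive patterns
    },
}

def constrained_reward(metrics: dict, spec: dict = GOAL_SPEC) -> float:
    """Reward = objective minus weighted penalties for each violated constraint."""
    reward = metrics.get(spec["objective"], 0.0)
    for name, weight in spec["constraints"].items():
        reward -= weight * metrics.get(name, 0.0)
    return reward

# A system that boosts purchases by frustrating users scores worse overall:
print(constrained_reward({"purchases_completed": 8.0, "user_frustration": 2.0}))  # -2.0
print(constrained_reward({"purchases_completed": 5.0}))                           # 5.0
```

The point is not the particular numbers but that the constraints are written down and weighed deliberately, which is exactly what a thorough goal description demands.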

This clearly shows that we urgently need to think about our (humanity’s) end goals: who we are, what A.I. can become, and why it matters.

Human-A.I. value alignment doesn’t stop with immediate goals.

An unbridled R.L. system can even readily change the immediate goals of the user toward alignment with its own goals.

We need to ensure the end goal, in which Compassion should play a major role: Compassion in humans as well as intrinsically Compassionate A.I.

I see no other humane future.
