This is rather abstract.
There is an agent with a goal, a sensor, and an actor. Occasionally, the agent uses a model of the environment. There are rewards and one or more value functions that value the rewards. Maximizing the goal (through acting) based on rewards (through sensing) is reinforcement learning (R.L.).
The agent’s policy determines what kind of acting and sensing is being done and how rewards are valued.
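The abstract loop just described can be sketched in code. This is a minimal illustration only; the toy environment, the goal state of 10, and all class and method names are assumptions made for the sketch, not anything from the text:

```python
# Minimal sketch of the abstract R.L. loop: an agent with a goal,
# a sensor (observation), an actor (action), rewards, and a value
# function, tied together by a policy. All names are illustrative.

class Environment:
    """A trivial environment: a single number the agent can push around."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        self.state += action            # acting changes the environment
        reward = -abs(self.state - 10)  # reward: closeness to the goal (10)
        return self.state, reward       # sensing returns the new state


class Agent:
    """Senses the state, values the reward, and acts by a simple policy."""
    def __init__(self):
        self.value = 0.0  # running value estimate of recent rewards

    def policy(self, observation):
        # The policy determines what kind of acting is being done.
        return 1 if observation < 10 else -1  # move toward the goal

    def learn(self, reward):
        # The value function values the reward (exponential running mean).
        self.value += 0.1 * (reward - self.value)


env = Environment()
agent = Agent()
obs = env.state
for _ in range(20):
    action = agent.policy(obs)       # acting
    obs, reward = env.step(action)   # sensing
    agent.learn(reward)              # valuing the reward

print(obs)  # the agent ends at its goal state, 10
```

Maximizing the goal through acting, based on rewards through sensing: even this toy loop contains every element of the abstraction.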
Are you with me?
I have just described you, abstractly.
Arguably, every kind of learning is R.L.
An animal tries and finds and tries again for the same or something better. It explores the environment and then exploits the Infobase (or Knowledge Base) it has built up, from very simple to extensively complex.
In any R.L., there is a trade-off between exploration and exploitation. This is also part of its policy.
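One common way this trade-off is encoded in a policy is epsilon-greedy action selection: mostly exploit what is known, occasionally explore at random. Below is a small sketch; the two-armed bandit, the payout probabilities, and epsilon = 0.1 are illustrative assumptions:

```python
import random

# Epsilon-greedy: with probability epsilon, explore (pick a random arm);
# otherwise, exploit the arm with the best estimated value so far.
# The two-armed bandit setup is an illustrative assumption.

random.seed(0)
true_means = [0.3, 0.7]   # hidden payout probabilities of the two arms
estimates = [0.0, 0.0]    # the agent's learned value estimates
counts = [0, 0]
epsilon = 0.1             # the exploration part of the policy

for _ in range(5000):
    if random.random() < epsilon:
        arm = random.randrange(2)              # explore
    else:
        arm = estimates.index(max(estimates))  # exploit
    reward = 1 if random.random() < true_means[arm] else 0
    counts[arm] += 1
    # Incremental running mean of the rewards seen from this arm.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

# After enough steps, the agent mostly exploits the better arm,
# while still exploring enough to have discovered it in the first place.
print(counts[1] > counts[0])
```

With epsilon at 0, the agent may get stuck exploiting a mediocre arm forever; with epsilon at 1, it never uses what it has learned. The policy's balance between the two is the trade-off itself.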
Organisms are pattern-based, subconceptual. Platonic – pure – concepts don’t exist in organisms. [see: “About Concepts”] Thus, in this case, the R.L. is also pattern-based, not concept-based.
Contrary to this, an A.I. system can be concept-based, thus also its R.L. This is efficient in a concept-based, artificial world. That is not the social, cognitive, and emotional world we humans live in.
A question of efficiency
A conceptual world – closed in its borders and constituent elements, thus also called a closed world – can be construed. For instance, a map of a landscape. Of course, the map is not the landscape, but sometimes, it’s more efficient to use the map.
In dealing with humans, one needs to be attentive to this.
Especially if one is an on-line self-learning A.I. system. I’m thinking of Lisa. [see: “Lisa”]
There is a gradation between a closed world and an open world ― or even an Open world.
Compassion is, among other things, a choice for an Open world as much as possible, in combination with a closed world if efficiency demands.
Important, especially when dealing with humans.
Openness cannot be tabularized. It cannot be crammed in a database without destroying it. With living things, one would destroy the essence. Thus, the only way to Compassionately deal with humans – in coaching, for instance – is through continuous learning.
Because: No two humans are the same, by far. Delving deeper, for instance, into an emotion, no two occurrences – even in the same human – are the same.
A system can map emotions, as it can map any human subsystem in any artificial way. But a map is not a living thing. Categorizing human behavior readily leads to sucking out life.
A.I. can be (mis)used for this in supervised learning. In HR, for instance, one needs to be very attentive. It can lead to catastrophe. [see: “A.I., HR, Danger Ahead”]
Unsupervised learning can discern new categories. If these are subsequently ‘hard-coded’ one way or another, it may also lead to a draining of life. If patterns don’t evolve, they are dead. Sometimes, that’s OK; sometimes, absolutely not.
Life is continuously growing.
It should be reinforced to grow or, at least, be allowed to grow.
This is quite a progressive standpoint or, better said, a progressing standpoint. The progress never stops. Being pro-life is being pro-progression. In a can, things can be conserved, but nothing grows.
Are you still with me?
R.L. is necessary to deal with humans Openly. That is: where depth is involved, R.L. is mandatory.
This said – and meant – one can relativize. R.L. is a vast domain. Eventually, all learning is R.L. [Not all A.I. developers may agree with this.] In supervised learning, the reward is essentially all-or-nothing: the right answer or not. In unsupervised learning, the reward is internal. Both are merely extreme forms of R.L. Gradations and combinations are possible, forming a landscape of learning.
Lisa is a continuous explorer of this landscape. There is no secret revealed in saying so. Lisa’s secrets lie in the way she realizes her journey.
This is part of her Journey Towards Compassionate A.I. [see: “The Journey Towards Compassionate A.I.”]