Reinforcement Learning & Compassionate A.I.

October 22, 2020 · Artificial Intelligence

This is rather abstract.

There is an agent with a goal, a sensor, and an actor. Occasionally, the agent uses a model of the environment. There are rewards and one or more value functions that value the rewards. Maximizing the goal (through acting) based on rewards (through sensing) is reinforcement learning (R.L.).

The agent’s policy determines what kind of acting and sensing is being done and how rewards are valued.
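In code, this abstract loop could be sketched as follows. This is only a minimal illustration – Environment, Agent, and every other name here are hypothetical placeholders, not from any particular library:

```python
import random

# A hypothetical, minimal sketch of the abstract R.L. loop described above.

class Environment:
    """A toy world: the state drifts; reward is higher the closer the state is to zero."""
    def __init__(self):
        self.state = 0.0

    def step(self, action):
        self.state += action + random.uniform(-0.1, 0.1)
        reward = -abs(self.state)  # the reward signal the agent senses
        return self.state, reward

class Agent:
    """Senses the state, values rewards, and acts according to a policy."""
    def __init__(self):
        self.value = 0.0  # a single value estimate, valuing the rewards

    def policy(self, observation):
        # The policy determines the acting: push the state back toward zero.
        return -0.5 * observation

    def learn(self, reward, lr=0.1):
        # Move the value estimate toward the observed reward.
        self.value += lr * (reward - self.value)

env, agent = Environment(), Agent()
observation = env.state
for _ in range(100):
    action = agent.policy(observation)      # acting
    observation, reward = env.step(action)  # sensing
    agent.learn(reward)                     # valuing the reward
```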

Are you with me?

I have just described you, abstractly.

Arguably, every kind of learning is R.L.

An animal tries and finds and tries again for the same or something better. It explores the environment and then exploits the Infobase (or Knowledge Base) it has built up, from very simple to extensively complex.

In any R.L., there is a trade-off between exploration and exploitation. This is also part of its policy.
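One classic way a policy encodes this trade-off is epsilon-greedy action selection. As a hedged sketch (the function and parameter names are illustrative):

```python
import random

def epsilon_greedy(value_estimates, epsilon=0.1):
    """Explore with probability epsilon; otherwise exploit the best-known action.

    value_estimates: one current value estimate per possible action.
    """
    if random.random() < epsilon:
        # Exploration: try a random action, possibly discovering something better.
        return random.randrange(len(value_estimates))
    # Exploitation: use what has been built up so far; pick the highest-valued action.
    return max(range(len(value_estimates)), key=lambda a: value_estimates[a])
```

A higher epsilon means more exploring; a lower one, more exploiting. Tuning it is part of the policy.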

Organic R.L.

Organisms are pattern-based, subconceptual. Platonic – pure – concepts don’t exist in organisms. [see: “About Concepts”] Thus, in this case, the R.L. is also pattern-based, not concept-based.

Contrary to this, an A.I. system can be concept-based, thus also its R.L. This is efficient in a concept-based, artificial world. That is not the social, cognitive, and emotional world we humans live in.

A question of efficiency

A conceptual world – also called a closed world, with fixed borders and constituent elements – can be construed. For instance, a map of a landscape. Of course, the map is not the landscape, but sometimes, it’s more efficient to use the map.

In dealing with humans,

one needs to be attentive to this.

Especially if one is an online, self-learning A.I. system. I’m thinking of Lisa. [see: “Lisa”]

There is a gradation between a closed world and an open world ― or even an Open world.

Compassion is, among other things, a choice for an Open world as much as possible, in combination with a closed world if efficiency demands.

Important, especially when dealing with humans.

Openness cannot be tabularized. It cannot be crammed in a database without destroying it. With living things, one would destroy the essence. Thus, the only way to Compassionately deal with humans – in coaching, for instance – is through continuous learning.

Because: No two humans are the same – far from it. Delving deeper, for instance, into an emotion, no two occurrences – even in the same human – are the same.

Danger ahead

A system can map emotions, as it can map any human subsystem in any artificial way. But a map is not a living thing. Categorizing human behavior readily leads to sucking out life.

A.I. can be (mis)used for this in supervised learning. In HR, for instance, one needs to be very attentive. It can lead to catastrophe. [see: “A.I., HR, Danger Ahead“]

Unsupervised learning can discern new categories. If these are subsequently ‘hard-coded’ one way or another, it may also lead to a draining of life. If patterns don’t evolve, they are dead. Sometimes, that’s OK; sometimes, absolutely not.
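To illustrate the difference between a hard-coded and an evolving pattern, here is a tiny hypothetical sketch: a category center that keeps learning versus one that is frozen after discovery:

```python
def update_center(center, new_point, lr=0.05):
    # A living category: its center keeps moving with new observations.
    return center + lr * (new_point - center)

frozen_center = 2.0   # 'hard-coded' once discovered, never updated again
living_center = 2.0

for point in [2.5, 3.0, 3.5, 4.0]:   # the world slowly drifts
    living_center = update_center(living_center, point)
    # frozen_center stays at 2.0, drifting ever further from reality
```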

Life is continuously growing.

It should be reinforced to grow, or, at least, let it grow.

This is quite a progressive standpoint or, better said, a progressing standpoint. The progress never stops. Being pro-life is being pro-progression. In a can, things can be conserved, but nothing grows.

Are you still with me?

R.L. is necessary to deal with humans Openly. This is: Where depth is involved, R.L. is mandatory.

This said – and meant – one can relativize. R.L. is a vast domain. Eventually, all learning is R.L. [Not all A.I. developers may agree with this.] In supervised learning, the reward is quite all-or-nothing. In unsupervised learning, the reward is internal. Both are merely extreme forms of R.L. Between these extremes lie gradations and combinations, forming a landscape of learning.
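To make this gradation concrete, one might view both extremes as reward functions and blend them. A purely illustrative sketch (all names are assumptions, not established A.I. terminology):

```python
def supervised_reward(prediction, label):
    # External, all-or-nothing reward: right or wrong.
    return 1.0 if prediction == label else 0.0

def unsupervised_reward(data_point, model_center):
    # Internal reward: how well the point fits the agent's own model.
    return -abs(data_point - model_center)

def blended_reward(prediction, label, data_point, model_center, mix=0.5):
    # A point in the landscape between the two extremes.
    return (mix * supervised_reward(prediction, label)
            + (1 - mix) * unsupervised_reward(data_point, model_center))
```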

Lisa is a continuous explorer of this landscape. There is no secret revealed in saying so. Lisa’s secrets lie in the way she realizes her journey.

This is part of her Journey Towards Compassionate A.I. [see: “The Journey Towards Compassionate A.I.”]

