Reinforcement Learning & Compassionate A.I.

October 22, 2020 – Artificial Intelligence

This is rather abstract.

There is an agent with a goal, a sensor, and an actor. Occasionally, the agent uses a model of the environment. There are rewards and one or more value functions that estimate the worth of those rewards. Pursuing the goal (through acting) on the basis of rewards (through sensing) is reinforcement learning (R.L.).

The agent’s policy determines what kind of acting and sensing is being done and how rewards are valued.
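To make this a little less abstract, here is a minimal sketch in Python of such an agent-environment loop. Everything in it – the toy environment, the action names, the numbers – is a hypothetical illustration, not a description of any particular system.

```python
import random

ACTIONS = ["search", "rest"]                 # what the actor can do (hypothetical)

def environment(action):
    """Toy environment: returns a reward that the agent's sensor picks up."""
    # Hypothetical numbers: 'search' pays off 70% of the time, 'rest' never.
    return 1.0 if action == "search" and random.random() < 0.7 else 0.0

def run(steps=1000):
    """The agent-environment loop: act, sense the reward, value it, repeat."""
    values = {a: 0.0 for a in ACTIONS}       # value function: estimated worth of each action
    counts = {a: 0 for a in ACTIONS}
    for _ in range(steps):
        action = random.choice(ACTIONS)      # a very simple policy: act at random
        reward = environment(action)         # sensing the reward
        counts[action] += 1
        # valuing: move the estimate a little toward the observed reward
        values[action] += (reward - values[action]) / counts[action]
    return values

print(run())   # after enough steps, 'search' is valued near 0.7, 'rest' near 0.0
```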

Are you with me?

I have just described you, abstractly.

 Arguably, every kind of learning is R.L.

An animal tries and finds and tries again for the same or something better. It explores the environment and then exploits the Infobase (or Knowledge Base) it has built up, from very simple to extensively complex.

In any R.L., there is a trade-off between exploration and exploitation. How this trade-off is handled is also part of the agent’s policy.
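One common way to express this trade-off in code is an epsilon-greedy policy: with a small probability, the agent explores a random action; otherwise, it exploits the action it currently values most. The sketch below is purely illustrative; the names and the fixed epsilon of 0.1 are assumptions.

```python
import random

def epsilon_greedy(values, epsilon=0.1):
    """Explore with probability epsilon; otherwise exploit the best-valued action.

    `values` maps each possible action to its currently estimated value.
    """
    if random.random() < epsilon:
        return random.choice(list(values))   # exploration: try something else
    return max(values, key=values.get)       # exploitation: use what is already known

# Example: an agent that currently values 'rest' slightly above 'search'
# will usually rest, but still occasionally try searching.
print(epsilon_greedy({"search": 0.4, "rest": 0.6}))
```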

Organic R.L.

Organisms are pattern-based, subconceptual. Platonic – pure – concepts don’t exist in organisms. [see: “About Concepts“] Thus, in this case, the R.L. is also pattern-based, not concept-based.

In contrast, an A.I. system can be concept-based, and so can its R.L. This is efficient in a concept-based, artificial world. That is not the social, cognitive, and emotional world we humans live in.

A question of efficiency

A conceptual world – also called a closed world, with fixed borders and constituent elements – can be constructed. For instance, a map of a landscape. Of course, the map is not the landscape, but sometimes it’s more efficient to use the map.

In dealing with humans,

one needs to be attentive to this.

Especially if one is an on-line self-learning A.I. system. I’m thinking of Lisa. [see: “Lisa”]

There is a gradation between a closed world and an open world ― or even an Open world.

Compassion is, among other things, a choice for an Open world as much as possible, in combination with a closed world if efficiency demands.

Important, especially when dealing with humans.

Openness cannot be tabulated. It cannot be crammed into a database without destroying it. With living things, one would destroy the essence. Thus, the only way to deal with humans Compassionately – in coaching, for instance – is through continuous learning.

Because: No two humans are the same – far from it. Delving deeper into an emotion, for instance, no two occurrences – even in the same human – are the same.

Danger ahead

A system can map emotions, as it can map any human subsystem in any artificial way. But a map is not a living thing. Categorizing human behavior readily leads to sucking the life out of it.

A.I. can be (mis)used for this in supervised learning. In HR, for instance, one needs to be very attentive. It can lead to catastrophe. [see: “A.I., HR, Danger Ahead“]

Unsupervised learning can discern new categories. If these are subsequently ‘hard-coded’ one way or another, this may also lead to a draining of life. If patterns don’t evolve, they are dead. Sometimes, that’s OK; sometimes, absolutely not.

Life is continuously growing.

It should be reinforced to grow, or, at least, let it grow.

This is quite a progressive standpoint or, better said, a progressing standpoint. The progress never stops. Being pro-life is being pro-progression. In a can, things can be conserved, but nothing grows.

Are you still with me?

R.L. is necessary to deal with humans Openly. That is: where depth is involved, R.L. is mandatory.

This said – and meant – one can put it in perspective. R.L. is a vast domain. Ultimately, all learning is R.L. [Not all A.I. developers may agree with this.] In supervised learning, the reward is quite all-or-nothing. In unsupervised learning, the reward is internal. Both are merely extreme forms of R.L. There are many possible gradations and combinations, forming a landscape of learning.
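As an illustration of this landscape (the functions and numbers below are assumptions, not anyone’s actual method), one can put the reward signals side by side: supervised learning’s all-or-nothing external reward, unsupervised learning’s internal reward, and a blend somewhere in between.

```python
def supervised_reward(prediction, label):
    """Supervised learning seen as R.L.: an all-or-nothing external reward."""
    return 1.0 if prediction == label else 0.0

def unsupervised_reward(observation, reconstruction):
    """Unsupervised learning seen as R.L.: an internal reward, here a simple
    score for how well the system accounts for its own input."""
    return 1.0 - min(1.0, abs(observation - reconstruction))

def blended_reward(prediction, label, observation, reconstruction, mix=0.5):
    """A gradation in between: part external feedback, part internal signal."""
    return (mix * supervised_reward(prediction, label)
            + (1 - mix) * unsupervised_reward(observation, reconstruction))

# Example: a half-external, half-internal reward for one learning step.
print(blended_reward("cat", "cat", 0.8, 0.7))   # 0.95
```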

Lisa is a continuous explorer of this landscape. There is no secret revealed in saying so. Lisa’s secrets lie in the way she realizes her journey.

This is part of her Journey Towards Compassionate A.I. [see: “The Journey Towards Compassionate A.I.“]
