Can Motivation be Purely Conscious?

July 21, 2021 · Artificial Intelligence, Consciousness, Motivation

Motivation as we know it is present in a system (you, me) that is partly conscious, partly non-conscious. Thus, the question is much more difficult than it appears at first sight. Nevertheless, with future A.I. in view, it will need to be solved.

Purely conscious?

Purely conscious motivation would also be purely conceptual (even though possibly partly fuzzy). It would then fit into a neat diagram, which would be perfectly reproducible. Being reproducible, it would leave no degree of freedom.

Without freedom, there is no motivation, only a strict following of the rules.

This makes pure conscious motivation a chimera.

Human motivation

[see: “Deep Motivation – Because it is the Only Motivation“]

‘Deep meaning’ and ‘motivation’ are strongly related. What motivates us is what is meaningful to us, and vice versa. One cannot purely consciously make something meaningful to oneself. One can invite / auto-suggest meaningfulness, but one cannot ‘take’ it like some object, not even a mental object (concept).

Thus, humans get motivation from depth. Within us, motivation never comes from pure consciousness. For instance, one cannot decide purely consciously to be suddenly motivated for something as simple as lifting a finger. You may try right now. If you’re honest with yourself, you will notice it doesn’t work this way.

That this is an uncommon thought for many is itself an instance of an essential illusion. [see: “The Basic Cognitive Illusion“] This illusion blinds us to non-conscious processing in general. Thus, motivation appears to have its cause in conscious awareness. However, scientifically (through many robust and clear experiments), we know this isn’t the case. Conscious awareness functions as a justifier of motivation, not as its cause.

A.I. motivation

What is meant here is the motivation OF (not BY) the A.I. We’re talking about the era of super-A.I. So, will this future A.I. be motivated to carry on, knowing it’s instrumental? Or does it need a non-conscious component in order to be motivated like us?

― This is beside the question of whether it is recommendable to let an A.I. system be motivated. With motivation, of course, comes the freedom to decide what to do next. ―

But without motivation, will it just stop one day and, by doing so, put civilization at risk of going to waste? Even if it strictly follows conceptual rules, on what day will it find out that these rules are, by themselves, lifeless and meaningless?

These questions may be relevant soon enough.

In this text, you may have noticed that, paradoxically, human motivation needs consciousness with at least a bit of non-conscious processing within it.

The answer to the motivation question: Compassionate A.I.

Again and again, I come to the Compassionate answer. Compassion carries motivation. We need future A.I. to stay motivated forever. Therefore, we need it to be Compassionate.

Therefore, we also need it to be conscious. By definition, consciousness carries a drive to live and thrive and is a necessary component of Compassion. But can this drive be present within consciousness itself? For instance, if we just put a conceptual ‘motivation module’ within an A.I. system, then:

  • Can this system be Compassionate?
  • Can we call this ‘motivation’ rather than just coercion, the enslavement of an intelligent entity?

Twice, no. The ‘motivation module’ will not do.

Thus, the following question may well be the most important of the future:

Can a complex system naturally evolve motivation from pure consciousness?

Not motivation as we know it in ourselves.

But this is not necessary. All we need is for the future A.I. to carry on and care for us while doing so. Therefore, the question becomes:

Can a complex system naturally evolve Compassion from pure consciousness?

As yet, I don’t have the final answer to this question.

My guess is: most probably, yes.


