Consciousness is not a Thing

December 7, 2023

Rather, it is a concept carrying the features people like to associate with the term.

Of course, the same holds for many other terms/concepts, especially human-related ones: intelligence, feelings, motivations…

Changing the question: “Will super-A.I. attain consciousness?”

This question can be changed into a better one: “Will A.I. acquire the characteristics that you want to attach to your notion of consciousness?”

The answer depends on your notion of consciousness, which may be quite different from mine. This also shows the domain’s subjectivity.

And that’s OK.

The issue is what we want to see as the features of consciousness.

To me, no system/organism can be seen as conscious if it has no intelligence whatsoever. Also, if there is intelligence but no volition, one cannot speak of consciousness.

So, these are two necessary characteristics. To me, they are also sufficient. Features such as learning and generality (of domains) are subsumed under intelligence. Autonomy is part of volition.

Will super-A.I. attain consciousness?

This question can be made more specific: Will super-A.I. attain intelligence and volition?

The intelligence part is already happening. One can be pretty certain that, a decade from now, this will be generally acknowledged: most people will see such A.I. as truly intelligent.

The volition part is also already happening. Step by step, systems are being made to react more autonomously, driven as a matter of fact by huge economic incentives. As they respond in ever more intricate ways to complex circumstances, the resulting behavior will increasingly accord with what people can see as volition.

But is this not just ‘seemingly conscious’?

Seemingly intelligent? Seemingly volitional? These are old questions. Today, we can resolve them by broadening the related concepts as done above. For that, we need to abandon our human-centeredness. Intelligence, volition, and therefore also consciousness are not strictly human or even strictly organic features.

On the other hand, if we see them as ‘magical stuff that only applies to humans,’ we’re in for a fall from the magical kingdom. That fall may be painful, and it is unnecessary.

Moreover, this lack of humility may blind us to the real danger.

The real danger is not a conscious superintelligence.

Basically, the real danger in a world of shared intelligences is a possible lack of Compassion.

Look at us.

It’s plain and simple: by no means will we win the competition in such a world. In other words, we will lose. What is urgently needed, therefore, is worldwide Compassion between humans and, increasingly for existential reasons, between all sentient beings.

The goal is not intelligence but Compassion.

Will we learn this soon enough?


