Consciousness is not a Thing
It is a concept bundling the features people like to associate with that term.
Of course, the same holds for many other terms and concepts, especially human-related ones: intelligence, feelings, motivations…
Changing the question: “Will super-AI attain consciousness?”
This question can be changed into a better one: “Will AI acquire the characteristics that you want to attach to your notion of consciousness?”
The answer depends on your notion of consciousness, which may be quite different from mine. This also shows how subjective the domain is.
And that’s OK.
The issue is what we want to count as the features of consciousness.
To me, no system or organism can be called conscious if it has no intelligence whatsoever. Likewise, if there is intelligence but no volition, one cannot speak of consciousness.
So, these are two necessary characteristics. To me, they are also sufficient. Features such as learning and generality (of domains) are subsumed under intelligence. Autonomy is part of volition.
Will super-AI attain consciousness?
This question can be made more specific: Will super-AI attain intelligence and volition?
The intelligence part is already happening. One can be fairly certain that, a decade from now, this will be generally acknowledged: most people will see AI as truly intelligent.
The volition part is also already happening. Step by step, systems are being built to act more autonomously, driven by huge economic incentives. As their ways of responding to complex circumstances grow more intricate, the resulting behavior will increasingly match what people recognize as volition.
But is this not just ‘seemingly conscious’?
Seemingly intelligent? Seemingly volitional? These are old questions. Today, we can dissolve them by broadening the related concepts, as done above. For that, we need to abandon our human-centeredness. Intelligence and volition, and therefore also consciousness, are not strictly human or even strictly organic features.
On the other hand, if we see them as ‘magical stuff that applies only to humans,’ we are in for a fall from the magical kingdom. That fall may be painful, and it is unnecessary.
Moreover, the lack of humility may blind us to the real danger.
The real danger is not a conscious superintelligence.
The real danger in a world of shared intelligences is, basically, a lack of Compassion.
Look at us.
It’s plain and simple: we will not win a purely competitive game in such a world. In other words, we will lose. What is urgently needed, therefore, is worldwide Compassion between humans and, increasingly for existential reasons, between all sentient beings.
The goal is not intelligence, but Compassion.
Will we learn this soon enough?