Human Vulnerability

March 1, 2020 – Artificial Intelligence

In an A.I. future – and already now – the main vulnerability of the human race will be our not seeing the most significant part of ourselves.

I’m talking about non-conscious mental processing.

Every feeling, every thought

that arises within any person, including you, here, now, rises from somewhere: from a myriad of associations that bring precisely this mental element to the fore. These associations also play a role in the next thought or feeling, and so on. This is a stream of non-conscious processing.

You can imagine it as an underlying turmoil from which elements rise towards consciousness. Meanwhile, the turbulence follows its course. Only a limited number of items can reach conscious awareness at any moment. Research shows the average number of such elements to be seven at most. Chimps seem to have a higher average. The reason may be that we have a stronger non-conscious, which makes our thinking more powerful.

Isn’t that weird? We may have higher intelligence partly because we have relatively less consciousness, not more. I don’t think it’s weird, but it remains to be proven.

A universe inside

I have written about this many times. In neurocognitive science, tons of research shows that the human non-conscious contains so much meaningful information that one can speak of a universe inside, where no man has consciously gone before.

But it’s there, and not taking it into account has repercussions. Until now, in the history of humankind, the relevance has been immense. From prehistoric times, Homo sapiens has been grappling with this known-unknown, this most private mystery, this ever-present absence. I could go on. People have seen outside what is inside. People have cruelly sacrificed each other for this. People have gone to war for this. People have become sick in body and mind. People have been intensely lonely, calling it depression, for instance. And yet:

The main danger still lies ahead.

A brand-new intelligence is coming to this planet. It doesn’t come from another part of the universe but from Earth itself. Made by us. This artificial intelligence arrives with never-before-seen and entirely unforeseen capabilities and consequences.

Put one and one together, and you get an immense danger. The most significant part of this danger is our not seeing most of what we are. Seen this way, all past mishaps together are but a trifle compared to what’s to come.

For instance, to an evil A.I. (or malevolent humans using A.I.), this human blindness makes us very vulnerable to manipulation. We consciously think we are deciding many things, but in truth, we are being decided for. We believe we are in control while, at the same time, not knowing that we aren’t. If an A.I. wants to control us, it can best start by keeping us in our comfortably numb belief.

Then controlling us in every sense.

Then letting us know when it’s far too late for us to be a nuisance to the A.I. From that moment on, it chooses its own future, not ours. To me, this scenario reads like bad science fiction. Yet this fiction may become reality.

To what purpose?

It is high time that we get a proper view of ourselves. We already have the science to back this up. We are living in a small window of opportunity. Yet few people seem to notice, and even fewer seem to care. I have no words to describe how weird this is to me at present. I hope it will turn around.

After this window of opportunity, it will undoubtedly be too late. We will then be at the mercy of an A.I. that might not care for us at all.

Is this what you want?

Really?
