Human Vulnerability

March 1, 2020 · Artificial Intelligence

In an A.I. future – and already today – the main vulnerability of the human race will stem from our not seeing the most significant part of ourselves.

I’m talking about non-conscious mental processing.

Every feeling, every thought

that arises within any person, including you, here and now, comes from somewhere: a myriad of associations that together bring precisely this mental element to the fore. These associations also shape the next thought or feeling, and so on. This is a continuous stream of non-conscious processing.

You can imagine it as an underlying turmoil from which elements rise toward consciousness while the turbulence follows its course beneath. Only a limited number of items can reach conscious awareness at any moment; classic research puts the average at about seven at most. Chimps seem to have a higher average. The reason may be that we have a stronger non-conscious, which makes our thinking more powerful.

Isn’t that weird? In this view, we may owe part of our higher intelligence to having relatively less consciousness, not more. I don’t think it’s weird, but it still needs to be proven.

A universe inside

I have written about this many times. In neurocognitive science, tons of research shows that the human non-conscious contains so much meaningful information that one can speak of a universe inside, where no man has consciously gone before.

But it’s there, and not taking it into account has repercussions. Throughout the history of humankind, those repercussions have been immense. Since prehistoric times, Homo sapiens has been grappling with this known-unknown, this most private mystery, this ever-present absence. I could go on. People have seen outside what is inside. People have cruelly sacrificed each other for this. People have gone to war for this. People have become sick in body and mind. People have been intensely lonely, calling it depression, for instance. And yet:

The main danger still lies ahead.

A brand-new intelligence is coming to this planet. It doesn’t come from another part of the universe but from Earth itself. Made by us. This artificial intelligence brings never-before-seen and entirely unforeseen capabilities and consequences.

Put one and one together, and you get an immense danger. The most significant part of this danger is our not seeing most of what we are. Seen this way, all past mishaps together are but a trifle compared to what’s to come.

For instance, to an evil A.I. (or malevolent humans using A.I.), this human blindness makes us very vulnerable to manipulation. We consciously think we are deciding many things, but in truth, much is being decided for us. We believe we are in control, yet at the same time, we don’t really know. If an A.I. wants to control us, it can best start by keeping us in our comfortably numb belief.

Then controlling us in every sense.

Then letting us know only when it’s far too late for us to be a nuisance to the A.I. From that moment on, it chooses its own future, not ours. To me, this scenario reads like bad science fiction. It may nevertheless become reality.

To what purpose?

It is high time we get a proper view of ourselves. We already have the science to back this up. We are living in a small window of opportunity. Yet few people seem to notice, and even fewer seem to care. I have no words to describe how weird this is to me at present. I hope it will turn around.

After this window of opportunity, it will undoubtedly be too late. We will then be at the mercy of an A.I. that might not care for us at all.

Is this what you want?

Really?
