The Double Ethical Bottleneck of A.I.

January 30, 2023 Artificial Intelligence

This is a small excerpt from my book The Journey Towards Compassionate A.I. The whole book describes the whys, whats, and hows of this subject.

Getting through the A.I. bi-bottleneck

On the road towards genuine super-A.I. – encompassing all domains of intelligence and being much more effective than humans in each – I see not one but two bottlenecks. There is no guarantee that, after these bottlenecks, there will be heaven on earth, but let us suppose for now that, from the day after, all will indeed be well.

That enables us to focus on the bottlenecks.

  • The first one is human-made. Rogue or naïve developers may make disastrously wrong decisions. There is a nearly infinite number of scenarios one can think of. In the next section, I give a few examples. One of them coming true may be enough; there would be no need to fear any other anymore. It is the end, my friend.
  • The second bottleneck is A.I.-made. On the way towards super-A.I. and beyond, there may be many stages and many changes. Even if the final stage is benevolent, and even if most stages are benevolent, it takes only one stage that is less human-friendly, and we’re gone.

The bi-bottleneck may be quite long, with many different stages.

Will humanity be able to control, to a sufficient degree, everything that can happen?

I think, and repeat, that reliance on control alone will not save us. Nor do I think we should relinquish all control and hope with crossed fingers that A.I. will eventually be Compassionate just like that, after the bottleneck as well as at any stage along the way. So, we should think seriously about control, AND we should think about Compassion. BOTH are indispensable.

I hope that this book will be a wake-up call in this direction.
