The Double Ethical Bottleneck of A.I.

January 30, 2023 Artificial Intelligence

This is a small excerpt from my book The Journey Towards Compassionate A.I. The whole book describes the whys, whats, and hows of this topic.

Getting through the A.I. bi-bottleneck

On the road towards genuine super-A.I. – encompassing all domains of intelligence and, in each, far more effective than humans – I see not one but two bottlenecks. Getting through them is no guarantee that there will be heaven on earth afterward, but let us suppose for now that, from the day after, all will indeed be well.

That enables us to focus on the bottlenecks.

  • The first bottleneck is human-made. Either rogue or naïve developers may take very wrong decisions. There is a nearly infinite number of scenarios one can think of; I give a few examples in the next section. A single one may be enough to make fearing any other unnecessary. It is the end, my friend.
  • The second bottleneck is A.I.-made. On the way towards super-A.I. and beyond, there may be many stages and many changes. Even if the final stage is benevolent, and even if most stages are benevolent, it takes only one stage being less human-friendly, and we’re gone.

The bi-bottleneck may be quite long, with many different stages.

Will humanity be able to control to a sufficient degree everything that can happen?

I think, and repeat, that reliance on control alone will not save us. Nor do I think we should relinquish all control and hope with crossed fingers that A.I. will eventually be Compassionate just like that – after the bottleneck as well as at every stage along the way. So, we should think carefully about control, AND we should think about Compassion. BOTH are indispensable.

I hope that this book will be a wake-up call in this direction.
