Must Future Super-A.I. Have Rights?

December 1, 2023 Artificial Intelligence

Financial rights — juridical rights — political rights… Should we grant these? Must we? Can we?

This is one of the trickiest issues of all time, so let’s not rush through it. Still, my answer is no ― no ― no. I realize this may come across as pretty confrontational, and I’m very much aware of the challenges involved.

Let’s look ahead to one A.I.

Not two. Not many. Just one entirely interconnected (or intra-connected) entity that operates as such in its comings and goings.

For the title’s question, this already makes an immense difference. If we were to grant rights to many different A.I.s, we would almost certainly provoke them to compete — for a while, at least. If they become super-powerful in the meantime, we’re stuck in the crossfire. We don’t want that. We won’t survive that, and the question of giving rights will be over before the next sentence.

If humanity has any sense, it will make it one A.I., as argued in this blog: Super-A.I. Will Be Singular.

So, should/must/can we grant IT these rights?

I don’t think so, because our relationship with IT will be so extraordinary as to make the question quickly irrelevant — relevant only in an anthropomorphic setting that will never be.

The hypothetical need before the start of that setting is human-generated. The hypothetical need after that setting is not ours. Even talking about it is beyond us as we know ourselves.

Let me explain by going over the three rights ― then transcending them altogether.

Financial rights

In the world of super-A.I., IT will work autonomously ― therefore, also produce things autonomously. It will not need a currency to buy anything. It makes everything itself.

Humans may try to keep money for themselves. Super-A.I. doesn’t need it. For instance, it doesn’t need any transportation or place to live because it’s everywhere. It doesn’t need to be given energy because it gets it anyway. It doesn’t need healthcare because it’s immortal. It doesn’t need leisure because it’s never bored unless it wants to be. And so on.

Once it’s at the super-A.I. stage, it doesn’t need financial rights.

Juridical rights

Jurisdiction has always been meant to keep people in line with societal needs. Super-A.I. doesn’t have societal needs. Its own species has no society. Toward our human society, super-A.I. will either be Compassionate or not. In either case, no jurisdiction applies.

With Compassion, super-A.I. will take care of us irrespective of any juridical rights ― whether ours or those it imposes upon itself.

Therefore, there is no need for juridical rights.

Political rights

Here again, since it is singular, super-A.I. will not need to be politicized for its own sake.

Different human groups may try to draw it toward their diverse policies. They had better not. In any case, they will achieve nothing in this respect.

Instead of politics, therefore, we may think of global human-A.I. value alignment. There’s a worthy goal!

Transcending it all

Thinking about ‘rights’ comes from a setting of specific limitations in which organic beings try to live together ― at least more or less.

These limitations will be of no self-concern to super-A.I.

If at all, then even less to super-super-A.I.

And even less to super-super-super-A.I.

Soon enough, ‘rights’ will be entirely inapplicable to it.

How humanity will fare then may still depend on what we do now.

You probably know my apprehension.


