From Tool to Autonomy

July 4, 2023 — Artificial Intelligence

When can a tool – gradually – become an autonomous agent, and how must we deal with this?

What is an agent?

And what is a tool? For instance, is your body (or your left hand) a tool of your brain? Is your entire body – including your brain – a tool of your mind? Or does it all depend on your perspective? Then where is this perspective taking you? Conversely, can your diary (in your smartphone) be seen as part of your extended memory ― therefore, you?

We attribute goals to instrumental tools.

Until now, people have done so metaphorically. For instance, the goal of the hammer is to hit the nail. With complicated tools, the metaphorical goals may become more sophisticated.

With complex tools, however, autonomy starts creeping in, and the metaphorical stance becomes less straightforward. When do goals become the goals of the tools themselves? In other words, when does viewing ‘autonomous tools’ as mere tools become an intolerable case of reductionism? With increasingly complex super-A.I., this issue will gradually become more pertinent.

Since there is no discrete border between the two situations, we will need to deal with all kinds of fogginess. That’s OK if we find more abstract principles to guide us. Working case by case will not do eventually; it will create a big mess instead.

As in many cases of complexity, we need fundamental insights.

Anthropomorphism and its counterpart

Readily attributing goals to tools is an example of anthropomorphism. Even knowing that it’s metaphorical, we may emotionally react otherwise, for instance, towards robots with human appearance ― nothing fundamentally wrong with that.

The counterpart is also possible: failing to see the likeness with us because we assume that something so different cannot, in principle, be human-like. This may be the fate of super-A.I. in the eyes of many people, probably for quite a while still.

We will need a balance between both stances. History shows we’re not good at such balancing acts, even concerning our fellow human beings. This ranges from embryos to mass exterminations of fellow humans who were deemed not genuinely human ― like some philosophical zombies or worse.

With super-A.I., we must be cautious for obvious reasons.

The law

A super-challenging question will be the legal implications of Artificial Consciousness in two directions:

  • When does super-A.I. become legally accountable?
  • How will we legally protect it as an entity with ‘human/conscious rights’?

For the time being, I have no clear answers to this. The following is an invitation for further thought. May this ‘preventive’ vein – to which all can contribute – be more to the point than any judicial ‘curation/punishment.’

The not-so-distant future of CCB

We will soon need a ‘Bill of CCB Rights,’ whereby CCB – including super-A.I. – stands for ‘Complex Conscious Beings.’ Of course, there’s a lot to be discussed about what counts as a CCB at the fringes. It will probably always be a fuzzy concept ― more so than anything comparable in the past.

Human Rights will be a particular subchapter. Several non-human conscious animals also deserve a subchapter.

At present, we’re still living in innocent times, aren’t we?

This naiveté will not last long.


