Legal vs. Deontological in A.I.

October 13, 2023 · Artificial Intelligence

The trolley problem

This is a well-known problem in A.I. ethics. A trolley driver gets into a situation where he must choose between killing one person by taking deliberate action or letting five others be killed by not reacting to the situation.

Deontologically, people tend not to choose purely logically and statistically in such situations. Five people getting killed is worse than one. However, deliberately killing one person goes so profoundly against common human nature that resistance to doing so may lead to the deaths of the five.

Then comes A.I.

An A.I. system can be given the constraint not to kill any human being. But then, should it be oriented toward letting five get killed?

Should ‘letting get killed’ take priority over ‘killing’ as a constraint? At first glance, this creates more problems in an A.I. system than in a human being since, in an A.I. system, the definition of the agent is more directly problematic. Namely, the agent can be seen as the <A.I. driving the trolley> or as the <A.I. + trolley>. In the former case, the agent (A.I.) doesn’t kill the five, but the trolley surely drives over them. In the latter case, the agent (A.I. + trolley) does kill the five.

Same situation ― different interpretation ― different deontological happening.
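
To make this concrete, here is a minimal sketch in Python; the names (AgentBoundary, classify_inaction) are hypothetical and stand for no actual system. It only shows how the same physical outcome receives a different deontological label depending on where the agent boundary is drawn:

```python
from enum import Enum

class AgentBoundary(Enum):
    AI_ONLY = "A.I. driving the trolley"    # the trolley is external to the agent
    AI_PLUS_TROLLEY = "A.I. + trolley"      # the trolley is part of the agent

def classify_inaction(boundary: AgentBoundary) -> str:
    """Label what happens to the five on the default track if the A.I. does nothing."""
    if boundary is AgentBoundary.AI_ONLY:
        # The software took no action; the trolley (not the agent) drove over them.
        return "letting get killed"
    # The trolley's motion belongs to the agent, so the five deaths are the agent's doing.
    return "killing"

# Same situation, two interpretations, two deontological labels:
for boundary in AgentBoundary:
    print(f"{boundary.value}: {classify_inaction(boundary)}")
```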

So, is the A.I. part of the trolley, or is it purely software? The distinction seems arbitrary. At the same time, the deontology at stake is one of life and death.

Many similar situations

The same problem appears in many A.I. situations, in many different guises. In the future, we will see many more.

As a domain, medicine is particularly prone to such situations, especially with A.I. of ever-increasing complexity. Many of these situations will occur not once but many times. In this way, several of them will quickly come to involve millions of life-or-death decisions.

We already have at least one comparable situation, called ‘pharmacotherapy.’

Legally

Technico-legally, deliberately killing a person is murder. Lawmakers may therefore decide that the A.I. (or its maker) is liable for ‘murdering’ one person even if it saves many by doing so.

Of course, in that view and broadly seen, all pharmaceutical companies are continually and doubtlessly murdering people ― even en masse.

Bringing agenthood into play ― as done above ― may simultaneously complicate and solve the problem. It solves it when we posit the whole machinery as the agent. What used to be a <letting die> now becomes an <ending life>, while not taking action equally becomes a kind of action by the same agent. That makes the entire situation clearly delineated ― logically, statistically, mathematically.
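
As a sketch of that delineation, one might reduce the decision to a straightforward minimization once inaction counts as an action of the same agent. The casualty estimates and function names (expected_deaths, choose_action) below are purely hypothetical, and all lives are weighted equally, as argued later in this text:

```python
def expected_deaths(action: str) -> float:
    """Hypothetical casualty estimates per available action."""
    estimates = {
        "stay_on_track": 5.0,  # inaction: the trolley kills the five
        "divert": 1.0,         # deliberate action: the trolley kills the one
    }
    return estimates[action]

def choose_action(actions: list[str]) -> str:
    # With the whole machinery as the agent, 'staying on track' is as much
    # an action as 'diverting', so both enter the same calculation.
    # All lives count equally: minimize the expected number of deaths.
    return min(actions, key=expected_deaths)

print(choose_action(["stay_on_track", "divert"]))  # -> divert
```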

A.I. vs. human

In the human case, one can also delve into agenthood, thereby encountering issues of free will, etc. ― murky business.

Once this agentive choice is made, things are more evident in an A.I. situation: five people are five times more than one. Nevertheless, one may still argue about the values of different lives. That’s an issue we are not tackling here. In my view, for the law (about humans), a life should be a life, and all lives are to be treated equally.

Problem solved?

In the A.I. case: yes, perhaps surprisingly easily. The agency lies in situ, not in an ethereal software heaven. So, we can and should make the calculations and act upon them.

The logic of this is above any individual law.
