Compassionate A.I. in the Military

April 29, 2022 — Artificial Intelligence, Sociocultural Issues, War and Peace

There are obvious and less obvious reasons to rethink the military from the ground up. Compassionate A.I. may lead to this rethinking. It should be realized as soon as possible.

Soldiers are human beings like everyone else, even if they have deliberately chosen military service. Compassion applies to them just as much.

The obvious

Many soldiers return from the battlefield with PTS(D) symptoms that may linger for a long time. For instance, after the Vietnam War, 60,000 returning soldiers were diagnosed with Post-Traumatic Stress.

It was later shown that the standard treatment these men received resulted, on average, in a worsening of symptoms. This should come as no surprise.

Nevertheless, or even more than anything else, this indicates the need for proper management before or after deployment. Before leaving to serve, many soldiers could preventively use Compassionate coaching to attain what AURELIS is all about in its last two letters: Inner Strength. This helps them retain their balance in difficult situations and thus not break down mentally at some point. Compassionate A.I.-based coaching (see Lisa) can roll this out to the many.

Compassion is, of course, not just about being friendly.

It is way more far-reaching, eventually delving into the depth of the human being. One consequence is daring to be vulnerable — so important (and heroic) to every soldier.

In due time, killing without Compassion will be seen as utterly primitive, but that’s a different story.

Still obvious

Still obvious, at least to me: this accords with being Compassionate toward the enemy. Of course, one can immediately feel how challenging this is. Can one kill Compassionately?

Challenging, but yes, indeed. It may be the best way to kill, whether an animal to eat or an enemy who is otherwise about to kill one's friends. Compassion lies in 'knowing' that this specific killing is the best thing to do in view of the circumstances. A crucial element in this is the genuine respect one feels for the killed.

This mental work needs to be done beforehand. It will then help the soldier in action view the situation from a more substantial vantage point. It also relieves the moral doubt that may otherwise lead to PTS or moral injury.

Hating the enemy is not needed to do a soldier's job. Besides, being able to kill without hatred is more efficient: it relieves the inner hesitation at the decisive moment. If this is counterintuitive to you, dear army general, then think again. Research shows that inner hesitation plays a significant role in battle. If you want efficient soldiers, let them be Compassionate. Compassionate A.I.-based coaching can help with this. At the same time, it also supports the following points.

The less obvious

The goal of the military is to defend one's own people.

The goals of some people inside and around the military can range from slightly to quite corrupt, unfortunately. Still, we start from defense and proceed from there.

A Compassionate stance frees us to also look at peace wholeheartedly. I'm convinced that almost all people on the planet prefer peace — including you. Not peace at any cost, but peace nevertheless.

Least obvious, perhaps

Non-Compassionate A.I. poses an existential threat to all, including one's own people. The only way to let one's people survive lies in going for Compassionate A.I. One may turn the matter inside out; it remains entirely clear.

If ‘defense of one's own people’ is truly the aim, there is only one way to go.

That’s it, you see?

That’s it.

No war anymore

Eventually, looking at it comprehensively, no war is worth it.

With certainty, either humanity leaves the ground for another organic species to survive intelligently (and maybe read this text someday?), or we move into an era in which no war will ever be waged. That is the Compassionate culmination point. Compassionate A.I. may help bring it closer.

Meanwhile, every Compassionate soldier can push this evolution forward through his mere frame of mind. Additionally, back home, his war experience may embolden him to keep fighting for the good cause throughout his life.

This does not lie in pacifism but in a future in which war is no longer needed. Isn’t this congruent with the military’s final aim?

Besides, Compassion also holds technological keys, but that’s a different story.
