A.I., HR, Danger Ahead

September 12, 2020 | Artificial Intelligence, Sociocultural Issues

Many HR categorization techniques are controversial, and rightly so: there is little to no scientific basis for them. Despite this, they keep being used. Why do people feel OK with this? In combination with A.I., it becomes extremely dangerous.

People feel a longing for control.

Naturally. Being alive is about ‘agency,’ which is about wanting control. Without this longing, there is only happening. With longing, there is action. There is volition. There is a living agent.

So far, so good. Naturally.

But then, in the evolution of life on earth (and probably everywhere else), comes conceptual thinking: the good, the bad, and the ugly of it. OK. Nature carries on striving for control, but within conceptual thinking, this striving becomes something else, something less straightforward, something that can turn upon itself.

Dangerous!

The natural striving for control (subconceptual thinking) becomes something to be controlled by conceptual thought. The deepest-inside becomes something ‘outside’ as seen from what gets stuck in superficial layers. Say: mere-ego.

In my view, this is the leading cause of almost any deep suffering. Ultimately, the longing is for a conceptual control of the subconceptual domain within oneself. In other words, this is a symptom of inner dissociation [see: “Cause of All Suffering: Dissociation“]. Sadly, since many are stuck in this, money can be made by streaming along, even when this streaming-along is clearly detrimental.

A ‘good feeling’ is not necessarily good for a person.

Of course, an example lies in the domain of addictions, in which a temporary (relatively) good feeling is the essence of the problem. In business, there is generally not much support for getting over the ‘trap of good feeling.’ Maybe some mental coaching, but little.

Categorizing people provides a good feeling by lending a sense of conceptual control. In business, for instance, classifying employees – whether or not it’s an entirely bogus categorization – gives a good feeling to the boss, to co-workers, and even to the employees themselves.

PRECISELY BECAUSE it heightens dissociation.

This provides a good feeling like that of an addiction.

Meanwhile, there is no rationality in almost any of the categorizations that are regularly being used in HR. For instance, recently, I came across ‘enneagram’ again. It still exists, apparently, and is even successful. Jesus! This is with utmost certainty absurd in every respect! [see, in Dutch: https://skepp.be/nl/enneagram; in English: http://www.skepdic.com/enneagr.html, quoting: “Nothing in the typology resembles anything approaching a scientific interest in personality.”]

But it provides – in the same way as many other systems – a good feeling, so it sells. As in “selling souls for money.” I see many people getting into profound difficulties because of this.

In combination with A.I.

Well, can it get worse? Yes, it can. This is it.

Of course, A.I. – in a specific form – is excellent at categorizing, including in relation to the enneagram or any other superficial categorization. The more superficial, the easier. Unfortunately so. This way, such bogus can spread quickly.

If a person can be prosecuted for spreading toxic stuff, then even more so should A.I. It’s so much more powerful than a book, or in many cases than one person.

This way, A.I. becomes a weapon of math destruction. Despite possible first appearances, this is extremely dangerous for many people and organizations. Like cigarettes for chain-smokers: it may seem nice at first sight to sell them cartons of cigarettes at a discount, but it is extremely dangerous!

Sensitive people are most affected by this.

As is frequently the case, sensitive people are the most vulnerable when not well supported. This way, those with the highest human qualities (thus seeing through the bogus and not being able to go along with it) are prone to suffer. Such as in business: not getting promoted, even getting depressed, being pushed aside, falling out. I’ve seen it happen many times in different circumstances, from psychiatric patients to unhappy or burned-out employees.

This leads to an additional misfortune: quite a few less sensitive people get promoted instead, and promoted again, attaining top positions. These set the bar for future developments, thus forming a self-enhancing pattern with an increasing influence upon corporate life and society as a whole.

It’s entirely appropriate to see A.I. as a possible weapon of math destruction.

One of the results is a lack of Open Leadership.

Fortunately, nothing is black-or-white, and people change over the years, especially with experience. Leaders may grow towards being more open to Openness.

A.I. can also intensify this, just as it can work in the opposite direction.
