A.I., HR, Danger Ahead

September 12, 2020 Artificial Intelligence, Sociocultural Issues

Many categorizing HR techniques are controversial, and rightly so: they have little to no scientific backing. Despite this, they keep being used. Why do people feel OK with this? In combination with A.I., they become extremely dangerous.

People feel a longing for control.

Naturally. Being alive is about ‘agency,’ which is about wanting control. Without this longing, there is only happening. With longing, there is action. There is volition. There is a living agent.

So far, so good. Naturally.

But then, in the evolution of life on earth – and probably everywhere else – comes conceptual thinking: the good, the bad, and the ugly of it. OK. Nature carries on striving for control, but this striving becomes something else, something less straightforward, something that can turn upon itself.


The natural striving for control (subconceptual thinking) becomes something to be controlled by conceptual thought. The deepest-inside becomes something ‘outside’ as seen from what gets stuck in superficial layers. Say: mere-ego.

In my view, this is the leading cause of almost any deep suffering. Ultimately, the longing is for conceptual control of the subconceptual domain within oneself. In other words, it is a symptom of inner dissociation [see: “Cause of All Suffering: Dissociation”]. Sadly, since many are stuck in this, money can be made by streaming along, even when this streaming-along is clearly detrimental.

A ‘good feeling’ is not necessarily good for a person.

Of course, an example lies in the domain of addictions, in which a temporary (relatively) good feeling is the essence of the problem. In business, there is generally not much support in getting over the ‘trap of good feeling.’ Maybe some mental coaching, but little.

Categorizing people provides a good feeling by lending a sense of conceptual control. In business, for instance, classifying employees – whether or not it’s an entirely bogus categorization – gives a good feeling to the boss, to co-workers, and even to the employees themselves

PRECISELY BECAUSE it heightens dissociation.

This provides a good feeling like that of an addiction.

Meanwhile, there is almost no rationality in any of the categorizations regularly used in HR. For instance, I recently came across the ‘enneagram’ again. It still exists, apparently, and is even successful. Jesus! This is with utmost certainty absurd in every respect! [see, in Dutch: https://skepp.be/nl/enneagram; in English: http://www.skepdic.com/enneagr.html, quoting: “Nothing in the typology resembles anything approaching a scientific interest in personality.”]

But it provides – the same way as many other systems – a good feeling, so it sells. As in “selling souls for money.” I see many people getting into profound difficulties because of this.

In combination with A.I.

Well, can it get worse? Yes, it can. This is it.

Of course, A.I. – in a specific form – is excellent in categorizing, including in relation to enneagram or any other superficial categorization. The more superficial, the easier. Unfortunately so. This way, such bogus can spread quickly.
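How easy this is can be shown in a minimal sketch (pure Python; the scores and the ‘type’ rule below are entirely made up for illustration): because a superficial categorization is defined by a simple rule on superficial features, even the most trivial learner reproduces it flawlessly – which then looks like validation.

```python
# Hypothetical sketch: a trivial learner 'validating' a bogus typology.
# All data and the typing rule are invented for illustration.

def bogus_type(score):
    """Assign one of three made-up 'types' from a single questionnaire
    score -- an arbitrary rule with no psychological meaning."""
    return score % 3

def nearest_neighbor(labeled, query):
    """Predict the label of the training score closest to the query."""
    return min(labeled, key=lambda pair: abs(pair[0] - query))[1]

scores = [12, 47, 33, 8, 91, 54, 27, 66, 75, 19]   # ten 'employees'
labeled = [(s, bogus_type(s)) for s in scores]

hits = sum(nearest_neighbor(labeled, s) == t for s, t in labeled)
accuracy = hits / len(labeled)
print(accuracy)  # 1.0 -- a perfect score that means nothing
```

The point: a perfect fit on superficial labels says nothing about the labels being meaningful. It only shows they were easy to learn.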

If a person can be prosecuted for spreading toxic stuff, then even more so should A.I. be. It’s so much more powerful than a book or, in many cases, than one person.

This way, A.I. becomes a weapon of math destruction. Despite possible first appearances, this is extremely dangerous for many people and organizations. Like cigarettes for chain-smokers: perhaps nice at first sight to sell them cartons of cigarettes at a discount, but extremely dangerous!

Sensitive people are most affected by this.

As so often, sensitive people are the most vulnerable when not well supported. This way, those with the highest human qualities – who see through the bogus and cannot go along with it – are prone to suffer. In business, for instance: not getting promoted, getting depressed even, being pushed aside, falling out. I’ve seen it happen many times in different circumstances, from psychiatric patients to unhappy or burned-out employees.

This leads to an additional misfortune: quite a few less sensitive people get promoted instead, and promoted again, attaining top positions. They set the bar for future developments, thus forming a self-enhancing pattern with an increasing influence upon corporate life and society as a whole.
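This self-enhancing pattern can be made concrete in a toy simulation (all numbers invented, and ‘sensitivity’ reduced to one made-up scale): if each promotion round picks the candidate closest to the current leadership’s average, the founding bias simply perpetuates itself, whatever the candidate pool looks like.

```python
import random

random.seed(7)  # reproducible toy run

def promote(leadership, candidates):
    """The current leadership picks the candidate closest to its own average
    'sensitivity' (0 = insensitive, 1 = highly sensitive)."""
    bar = sum(leadership) / len(leadership)
    return min(candidates, key=lambda c: abs(c - bar))

leadership = [0.3]  # a single, rather insensitive founding leader
for _ in range(10):
    pool = [random.random() for _ in range(20)]  # a diverse candidate pool
    leadership.append(promote(leadership, pool))

average = sum(leadership) / len(leadership)
print(average)  # remains near the founding value: the bias reproduces itself
```

Note the design of the toy model: nobody here acts in bad faith. Each round merely prefers ‘fit’ with the existing leadership, and that alone is enough to freeze the pattern in place.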

It’s entirely appropriate to see A.I. as a possible weapon of math destruction.

One of the results is a lack of Open Leadership.

Fortunately, nothing is black-or-white, and people change over the years, especially with experience. Leaders may grow towards being more open to Openness.

A.I. can also intensify this, just as it can work in the opposite direction.
