Is A.I. Dangerous to Human Cognition?

February 1, 2022 · Artificial Intelligence, Cognitive Insights

I have roamed around this topic on several occasions within ‘The Journey towards Compassionate A.I.’ (of which this is an excerpt). The prime reason why I think it’s dangerous is, in one term: hyper-essentialism.

But let me first present two viewpoints on your thinking:

  • Essentialism: presupposes that the categories in your mind – such as an emotion or a concept – each have a ‘true reality’ in that their instances share an underlying ‘essence’ that causes them to be similar, even if they superficially differ. Each such category is supposed to correspond to a specific tiny circuit in the brain, like some prefabricated construct that is triggered the moment you feel it or think it. Note in this a purely conceptual processing. To an essentialist, ‘meaning’ – as well as ‘existential essence’ – lies in the concepts and the links between concepts.
  • Constructionist viewpoint: holds that there are no universal constructs, only ad hoc constructions. The categories in your mind vary from culture to culture and from individual to individual. They are not triggered but constructed anew each time, influenced by past experiences, cultural upbringing… Not each type, but each instance of an emotion or a concept comes from a unique brain state or even a total bodily state. So, for instance, you “never think the same thought twice,” let alone you and your neighbor. This does not make your thoughts or emotions less real. Note in this a mainly subconceptual processing. To a constructionist, ‘meaning’ – as well as ‘existential essence’ – lies in the act of construction.

For the sake of argument, I’ve just torn the two viewpoints apart, like a good essentialist. Like a proper constructionist, I must add that there are many continua and much possible overlap, even within one individual. Sometimes, one thinks in a more conceptual-essentialist way; sometimes, in a more subconceptual-constructionist way. In my view, the ideal is to master both ways well enough and to be able to switch between them while knowing what one is doing. If you get this, you are ready for the following.

Essentialism is hardly new.

It has been around in the West for at least 2500 years. Plato partitioned the human psyche into three ‘essences’:

  • Rationality
  • The passions (nowadays: ‘emotions’)
  • The appetites (hunger, thirst, sex)

Rationality was deemed the worthiest level of the human mind. In the best case, it was seen as controlling the passions and the appetites. In Plato’s metaphor, rationality was a kind of charioteer commanding the other two, which were like two winged horses. This idea trickled down over the ages, continuously carving culture and, for instance, reappearing in Platonic A.I. (my term for ‘symbolic A.I.,’ also my preferred connotation).

Plato (approximately 428 – 348 BC) was a man of clear and distinct concepts. A ‘Platonic concept’ (still an existing term) is deemed to be pure. No murkiness. It’s quite peculiar that Plato was the most prominent disciple of Socrates, who was, in many ways, a different bloke. Socrates continually strived for, and urged others towards, making the correct distinctions in the striving for self-knowledge, but his method was never-ending doubt. No eternally pure concepts but, again and again, human contemplation in a critical, ‘Socratic’ dialogue. In a way, Plato took his teacher’s urge for optimal clarity and pushed it through the roof, with enormous consequences for us.

Categories and more categories

In algorithmic machine learning (for instance, decision trees or Support Vector Machines) as well as in Artificial Neural Networks, the aim is, in many cases, to ‘categorize.’ As a crude distinction, the objective is either to find new categories (clusters, in unsupervised learning) or to put concrete real-world instances into predefined categories (classes, in supervised learning).

‘To categorize’ eventually runs parallel to ‘to find the essence,’ broadly put, again, in two ways (a minimal code sketch follows after this list):

1) by finding the necessary and sufficient characteristics to delineate a category (or concept), or

2) by finding the essence more directly, as if by definition of the ‘core’ of a category.
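Concretely, here is a minimal sketch, assuming Python with scikit-learn (the library, models, and toy data are my illustrative assumptions, not from the text). Unsupervised clustering invents new categories; supervised classification places instances into predefined ones:

```python
# Minimal sketch (assuming scikit-learn) of the two flavors of 'categorization':
# finding new categories (clustering) and assigning instances to given ones
# (classification). Data and model choices are purely illustrative.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Toy data: 2-D points drawn from a few blobs, plus labels.
X, y = make_blobs(n_samples=200, centers=3, random_state=0)

# Unsupervised: the algorithm invents categories (clusters) from the data alone.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Supervised: the algorithm learns to place instances into predefined classes.
classifier = SVC().fit(X, y)
predicted = classifier.predict(X[:5])

print(clusters[:5], predicted)
```

In both cases, membership in a category is treated as if it reflected an underlying essence, which is precisely the parallel drawn above.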

This is dangerous in the cognitive domain of thoughts, feelings, and motivations.

Do you recognize, in these, Plato’s three essences to some degree? Plato’s ideas were intuitive and fully acknowledged in the West until about a century ago. Since then, scientists have gradually come to see, more precisely than ever by now, how deadly wrong this was and how devoid of living experience, with or without A.I. Only, A.I. can hugely aggrandize this issue and take the zest out of it all.

Meanwhile, the classical ‘essentialist’ view has made way for a modern perspective called ‘constructionism.’ As said, constructionism posits that concepts – in the human mind, if not universally – are constructed anew each time you mentally process one. Lisa Feldman Barrett has shown this convincingly for feelings and emotions [Feldman Barrett, 2017]; Daniel L. Schacter, for memory [Schacter, 2002]. Others have done so for perception, brain development, etc.

Healthy?

In medicine, one can see this, for instance, in the significant amount of research pointing in the direction of a unifying disease mechanism for functional pain syndromes (more or less coinciding with psychosomatic pain syndromes). This unifying mechanism – seeing one ‘essence’ behind all functional pain syndromes, from chronic tension headaches to chronic nonbacterial prostatitis – is related to how the mind constructs its own experiences in an open-complex manner. This stands in contrast to the view of specific diseases (in this case, ‘pain syndromes’) that would each have a distinct ‘essence’ distinguishing it from the others [Mayer et al., 2009].

A diagnostic A.I. system risks pinning patients down through untoward over-categorization: through the categorization, one gets categorized patients. Consequently, the A.I. may interpret this evolving situation towards even more categorization. As a result, people get robotified; human depth vanishes. That’s what I call dangerous.
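As a toy illustration of this self-reinforcing loop (the numbers and mechanism are my own assumptions, not a model of any real diagnostic system), consider a category that is retrained on its own output and thereby keeps widening:

```python
# Toy simulation of the feedback loop described above: each round, borderline
# cases get pulled into the category, which widens it, so ever more people end
# up categorized. Purely illustrative; no real system is implied.
threshold = 0.80          # initial score needed to receive the diagnostic label
population = [i / 100 for i in range(100)]  # 100 people with scores 0.00-0.99

for round_ in range(5):
    labeled = [p for p in population if p >= threshold]
    # Retraining on its own output: the category's boundary drifts outward.
    threshold -= 0.05 * len(labeled) / len(population)
    print(f"round {round_}: {len(labeled)} people categorized, "
          f"next threshold {threshold:.2f}")
```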

The truth most probably lies somewhere between a purely essentialist and a purely constructionist view.

Leaning too much towards the purely constructionist view, one risks hyper-relativism. Leaning too much towards the purely essentialist view, one risks hyper-conceptualism. You may recognize the latter in second-wave. I am writing a lot to show this and to avert even more disaster than we are already experiencing. In two words, both in their broadest sense: aggression and depression. In one term: ‘deep suffering.’ On the other hand, hyper-relativism leads to a loss of Inner Strength. I confess that I sometimes grapple with this.

But all in all, it is hyper-conceptualism (hyper-essentialism) that I see as most rampant in present-day Western societies. It is probably also the case in Eastern cultures, but less apparent to me. The urge to over-categorize cognition is an immense problem, including in the ways that I see machine learning technologies being promoted and used. This way, they become ‘weapons of math destruction,’ especially in the cognitive domain.

(Lack of) explainability

An additional problem with neural networks, particularly important in the cognitive domain, is their lack of explainability. Concrete categorization happens without insight into how it is done.
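A minimal sketch of this explainability gap, again assuming scikit-learn (the models and data are illustrative assumptions only): a decision tree can print its decision rules, while a trained neural network exposes only opaque weight matrices.

```python
# Contrast in explainability: a decision tree yields human-readable rules,
# whereas a neural network's 'knowledge' sits in raw weight matrices.
# (Assumes scikit-learn; models and data are illustrative only.)
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree))              # human-readable if/else rules

net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)
print([w.shape for w in net.coefs_])  # only shapes of opaque weight matrices
```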

To non-experts, this may give it even more of an air of seriousness. People may feel helpless and complacent towards it, particularly when already in mental need. Being categorized – even unwarrantedly – at least provides some relief from the ‘danger of chaos,’ especially in a culture that oozes fear of such danger. Being A.I.-categorized, and with no explanation, people may feel helped and cling to this.

But is it sustainable? Not at all. Respectful? Nope.

Human values?

The categories may become elements in self-perpetuating patterns for the simple sake of superficial relief. Where, in this, are the ‘true human values’ with which A.I. is to be aligned so that we may be certain of A.I.’s continual beneficence?

I fear over-categorization when Deep Neural Networks come to be used for medical psycho-diagnostic purposes, especially where over-categorization is already present. On the other hand, well-chosen A.I. technology can be used positively: to open up the same categories (diagnostic constructs, for instance) wherever appropriate. This may lead to a critical view of what really ails people and of how they can be helped in the most Compassionate way: relief of deep suffering and the fostering of inner growth.

We should not be complacent in this, only hoping for the better in A.I.’s evolution; then, we may rightly fear the worst: human values getting entirely lost.

Past and present

With this, we are again at a crossroads of what has historically been an almost continuous issue. For example, in the Western Middle Ages, a vast and at times aggressive battle raged between what were then more or less the equivalents of the two viewpoints. After centuries, essentialism won (in the end, and after losing at first) and has led to many worldly successes, at a considerable human cost.

As said, however, recently, and with tons of science on its side, the constructionist view is back, at least in theory (but much more seldom in practice). This is the beauty of science: it doesn’t stop at the borders of ‘intuitive truths.’ It asks us to deal with the fall of the House of Usher.

Humanity for Compassion!

If we do so, huge rewards are waiting. In this case: a Compassionate human society, human-friendly A.I., and a future in which people discover more and more their Inner Strength and health. But will we go this way indeed, and can we deal with things to come? There is no certainty on this journey.

The way we use A.I. – right now already – plays a huge role in this.

Bibliography

[Feldman Barrett, 2017] Lisa Feldman Barrett. How Emotions Are Made: The Secret Life of the Brain – Mariner Books, 2017

[Mayer et al., 2009] E.A. Mayer, M.C. Bushnell (editors). Functional Pain Syndromes – IASP Press, 2009

[Schacter, 2002] Daniel L. Schacter. The Seven Sins of Memory: How the Mind Forgets and Remembers – Mariner Books, 2002
