Competence vs. Comprehension
We don’t usually distinguish between competence and comprehension in anything we regard as under conscious control. That may be a mistake in many cases and, at present, an increasingly dangerous one.
Competence, comprehension
Competence is about what an agent can do. Comprehension is about that agent’s understanding of its own doing.
To what degree is conscious comprehension needed to gain competence? This is a challenging question in the human case, but it reaches further. Much may become clearer when looking at the brain as a giant pattern recognizer. [see: “Your Mind-Brain, a Giant Pattern Recognizer”] Neuronal patterns do not need consciousness to unfold. In fact, only a tiny fraction of them ever reaches consciousness, and they do so from the inside out. What they do in the meantime, in any case, makes the brain very Post-Postmodernist. [see: “The Post-Postmodernist Brain”]
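To make “pattern recognition without comprehension” concrete, here is a minimal sketch in Python. It is entirely my own toy construction with made-up data; nothing in it comes from the referenced articles. A tiny perceptron becomes competent at separating two classes of input patterns, while nothing inside it understands what it is doing.

```python
# A toy perceptron (hypothetical example): it acquires competence at
# separating two pattern classes, yet contains no comprehension at all.
import random

def train_perceptron(samples, epochs=50, lr=0.1):
    """samples: list of (features, label) pairs with label in {-1, +1}."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        random.shuffle(samples)
        for x, y in samples:
            # Predict, then nudge the weights only on mistakes.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Two made-up "neuronal" pattern classes: bright vs. dark 2-pixel inputs.
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], -1), ([0.2, 0.1], -1)]
weights, bias = train_perceptron(data)
print(weights, bias)  # competent separation; no understanding anywhere inside
```

The competence here is nothing but adjusted numbers. Asking where the comprehension sits inside them is asking the wrong question.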
Competence without comprehension
There are many situations in which we think we need comprehension when, actually, we don’t.
For instance, arriving home in one’s car for the nth time without recalling how one got there is a common experience. Apparently, we don’t need conscious comprehension to perform such a complicated and dangerous act as driving a car for half an hour.
Is there non-conscious comprehension? That is merely a semantic question if one defines comprehension as always conscious. The relevant question then becomes: for what do we need conscious comprehension?
Neurocognitive science shows that we don’t need conscious comprehension for almost anything we competently do. Daniel C. Dennett calls the inadvertent projection of comprehension onto competence ‘the intentional stance’ and points out that we adopt this stance toward ourselves as well. Moreover, Darwin and Turing have shown that competence without comprehension is possible to a huge degree in the origination of the most complex organisms (such as ourselves) and mechanisms. Comprehension can be the result of competence more than vice versa.
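As an illustration of the Darwin/Turing point, here is a minimal sketch of blind variation plus selection, loosely in the spirit of Richard Dawkins’ well-known “weasel” program (the parameters and details are my own assumptions). The loop reliably produces a competent result, yet no step in it comprehends English.

```python
# Blind variation + selection (toy illustration): competence emerges
# from a loop in which nothing comprehends what is being produced.
import random, string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate):
    # "Competence" is a mere match count; nothing here comprehends English.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Blind variation: each character may flip to a random one.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

current = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while fitness(current) < len(TARGET):
    generation += 1
    offspring = [mutate(current) for _ in range(100)]
    best = max(offspring, key=fitness)
    if fitness(best) >= fitness(current):  # selection keeps the fitter variant
        current = best
print(generation, current)  # competence achieved; comprehension: nowhere
```

Of course, this toy has a fixed target, which real evolution does not. The point is only that selection for competence requires no comprehension anywhere in the loop.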
So, where is the driving force?
According to Jonathan Haidt, consciousness is like an elephant’s driver who thinks he is in charge. In reality, it’s the elephant (non-conscious mental processing) that does what it likes to do. The driver’s job is pretty much reduced to finding justifications for whatever the elephant may choose.
This is congruent with much basic psychological research, for instance, on the predictive brain. [see: “The Brain as a Predictor”]
Implications for psychotherapy
Crucially for psychotherapy: if the elephant is in charge, it is the elephant that therapy should aim at. It’s the elephant that can change direction, not the self-proclaimed driver.
Say a person has some issue and seeks the help of a healthcare provider. In the medical model, the provider seeks to conceptualize what is wrong. That is: he forms a diagnosis. If possible, the diagnosis is followed by therapy; in the case of a mental diagnosis, by mental therapy. In the best case, the therapy is provided, and the patient gets cured.
But what if real competence does not lie at the level of comprehension, which, according to what we have just seen, is probably the case? That changes the whole situation.
It also explains a few things:
- why there are so many psychotherapies, each floating on a draft of competence that doesn’t need precise comprehension except as the driver’s justification [see: “Psychotherapy vs. Psychotherapies”]
- why specific psychotherapeutic modalities prove ineffective when scientifically investigated, even while they appear to be effective [see: “WHY Psychotherapies Don’t Work”]
- why so much moral investment goes into the belief that they do work. This is the driver trying to keep his status. Meanwhile, the elephant silently goes its way. Does the elephant actually care about what the driver thinks and does? Even more: as long as the driver is self-oriented, the elephant is sure to get its own way, unfortunately even if it’s the wrong way in an ever more complex environment.
More than ever, we need proper communication between elephant and driver.
One answer to all this may be the search for more in-depth coaching. [see: “Coaching Happens In-Depth”]
Implications for A.I.
If an A.I. system behaves as if it possesses comprehension, does it? How can one know?
In the case of a human, we would ask him: can he explain his behavior? If so, and depending on the degree of detail and flexibility, we accord him the characteristic of comprehension (the intentional stance). Thus, it depends on explainability. Is it the same in the case of an A.I. system? According to Dennett, we are bound to treat it the same way.
Then we had better be cautious about making A.I. systems self-explainable. We want artificial competence. Do we also want artificial comprehension before understanding for ourselves what this implies?
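To show what is at stake, here is a minimal sketch, entirely my own toy construction; the model, weights, and feature names are hypothetical. Competence and explanation live in two separate modules, and the explainer produces its fluent story only after the decision has already been made, much like the driver justifying the elephant.

```python
# A toy "self-explaining" system (hypothetical): the decision module is
# competent; the explanation module rationalizes post hoc.

WEIGHTS = {"income": 0.6, "debt": -0.8, "age": 0.1}  # made-up model weights

def decide(applicant):
    # The "elephant": a plain weighted score decides, with no story attached.
    score = sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return "approve" if score > 0 else "reject"

def explain(applicant, decision):
    # The "driver": after the fact, pick the feature that pushed hardest
    # toward the decision that was already made.
    sign = 1 if decision == "approve" else -1
    contributions = {k: sign * WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    top = max(contributions, key=contributions.get)
    return f"Decision '{decision}' mainly because of '{top}'."

applicant = {"income": 1.0, "debt": 0.9, "age": 0.5}
decision = decide(applicant)
print(explain(applicant, decision))  # a fluent story; is it comprehension?
```

An A.I. built this way can sound self-comprehending while its ‘explanation’ is pure post-hoc rationalization. That is exactly why caution about self-explainability is warranted.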
At the very least, the question is pertinent. You will find my answer in many blog posts. [see cat.: “Artificial Intelligence”]