Medical A.I. for Humans

June 4, 2022 · Artificial Intelligence, Health & Healing, Philanthropically

Medical A.I. is flourishing and will flourish even more in the future. Therefore, we must ensure that it serves the total human being.

The main challenge

Through medical A.I. – substantially more than ever – humans can become either more humane or robotized from the inside out. This is not about putting probes in the brain. It is about how people are generally treated. Basically, A.I. can enhance categorization (of personalities, emotions, disease types, and therapies), whether appropriately or not.

This calls for some basic insights into intelligence and the human being.

About ‘total human being’

This denotes a fundamentally undivided entity. On top of the ‘body = mind’ reality, there is the challenge of complexity, especially in the mind/brain. As a consequence, we can only partly understand the mind/brain from the purely conceptual level. In short, the mind/brain is unlike a modern computer, however complicated such a machine may be.

Together with the conceptual – interesting for accountability, for instance – we need the subconceptual level for a correct understanding of ourselves. This is the level of mental-neuronal patterns. Without heeding this, one may treat humans as robots of the relatively inexpensive kind – hardly respectful and even detrimental, as shown in the field of psychosomatics. A lack of respect for human inner complexity (‘depth’) may lead to unhappiness, diminished mental growth, much burnout, etc. Due to socio-cultural challenges – which such disrespect additionally amplifies – this becomes increasingly important.

Individualizing care

This is a hot topic in medicine generally. Not all people who carry a specific diagnosis should be treated the same. That is, we need to avoid exaggerated bias. As argued above, respecting human complexity at the subconceptual level (together with the conceptual) is important here.

A.I. is an excellent instrument to accomplish this Compassionately – not only to produce efficient members of society but total human beings in their unique diversity. By doing so, medical A.I. can deliver the healthcare that people need. If this seems idealistic, well, it is – and necessary, too.

In research

‘Medical A.I. for humans’ is also vital for research, on top of present-day research, which is, in many cases, exclusively conceptual. For instance, in Randomized Controlled Trials (RCTs, comparing groups in lab-like circumstances), conceptualizing the object of study is crucial by default. Most of the effort in a good RCT goes into this endeavor. After that, results are extrapolated, which frequently breaks down when the conceptualization is flawed. The real-world results then turn out to be less positive than those in the RCT.
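As a minimal illustration (a Python sketch with entirely made-up numbers), consider a model fitted on a narrow, RCT-like sample. It can look excellent within that sample yet degrade when extrapolated to the broader real-world range that the conceptualization left out:

```python
# Hypothetical sketch: the "true" dose-response is curved, but within a
# narrow RCT age band it looks almost linear, so a linear model fits well
# there and extrapolates poorly to the wider clinical population.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)

def true_outcome(age):
    # Made-up curved relation, peaking around age 60.
    return 100.0 - 0.04 * (age - 60.0) ** 2

age_rct = rng.uniform(40, 50, 500)      # narrow inclusion criteria
age_world = rng.uniform(20, 80, 500)    # everyday clinical population

y_rct = true_outcome(age_rct) + rng.normal(0, 1, 500)
y_world = true_outcome(age_world) + rng.normal(0, 1, 500)

model = LinearRegression().fit(age_rct.reshape(-1, 1), y_rct)

rmse_rct = mean_squared_error(y_rct, model.predict(age_rct.reshape(-1, 1))) ** 0.5
rmse_world = mean_squared_error(y_world, model.predict(age_world.reshape(-1, 1))) ** 0.5
print("RMSE inside the RCT range:", round(rmse_rct, 2))
print("RMSE in the real world   :", round(rmse_world, 2))
```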

A.I. can be used to aid this conceptualization, which is a good thing but – as you may know by now – not everything. Take stress.

Stress should not be treated as a blob with a single score. Within stress, one may find a kaleidoscope of meaningfulness. Treating it as a blob is like pouring all colors of paint into one bucket. The result is gray. Research on gray will not yield what research on the individual colors can. Moreover, where distinct colors or hues have an effect, other colors may cancel it out. ‘Good stress’ and ‘bad stress’ may neutralize each other. As a result, one may find no influence where much lies hidden in the colors. Appropriately distinguishing the colors of stress leads to better science and better applications. For this, we need medical A.I. centered on humans.
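A minimal sketch of this cancellation effect, using synthetic data (‘good’ and ‘bad’ stress are hypothetical labels): two components with opposite effects on well-being vanish in a single aggregate score, while each component on its own is strongly informative.

```python
# Synthetic illustration: opposing effects disappear in a "blob" score.
import numpy as np

rng = np.random.default_rng(7)
n = 1000

good_stress = rng.normal(0, 1, n)   # e.g., energizing challenge (hypothetical)
bad_stress = rng.normal(0, 1, n)    # e.g., chronic threat (hypothetical)

# Made-up outcome: helped by one component, harmed by the other.
wellbeing = good_stress - bad_stress + rng.normal(0, 0.5, n)

blob_score = good_stress + bad_stress   # "all colors in one bucket"

print("corr(blob, wellbeing):", round(np.corrcoef(blob_score, wellbeing)[0, 1], 2))
print("corr(good, wellbeing):", round(np.corrcoef(good_stress, wellbeing)[0, 1], 2))
print("corr(bad,  wellbeing):", round(np.corrcoef(bad_stress, wellbeing)[0, 1], 2))
```

The aggregate correlates with the outcome near zero, while each ‘color’ on its own correlates strongly: much influence lies hidden in the gray.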

Something similar applies to all kinds of emotions. It is still in vogue to categorize human emotions into the basic seven (or another number). Meanwhile, the science of feelings/emotions shows that, in many cases, they cannot properly be categorized. They are too individually unique. Thus, categorizing/conceptualizing them for scientific research yields results that may be hardly applicable to real life. Again, medical A.I. may aid in over-categorization, or it may help transcend it.
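As a hedged illustration (synthetic data; the number seven and all variable names are purely for show): forcing a fixed number of categories onto data that has no such structure can at least be detected with a clustering quality measure.

```python
# Illustrative only: "emotion" features drawn from a smooth continuum,
# then forced into seven clusters. A low silhouette score suggests the
# imposed categories do not reflect real structure in the data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)

# No built-in 7-cluster structure: a continuum, not discrete basic emotions.
emotions = rng.uniform(-1, 1, size=(500, 5))

labels = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(emotions)

print("silhouette score:", round(silhouette_score(emotions, labels), 2))
```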

(Artificial) Intelligence as prediction

This – to some – original take on intelligence sees it as the ability to predict some part of the future, even a part that is highly uncertain and/or complex and can thus merely be approximated. This can only be achieved by a self-enhancing system, one able to predict increasingly better. The application of a simple, fixed set of rules is not to be called intelligence.

Some examples:

  • Intelligence is needed by animals to predict where food or predators may be.
  • Humans use intelligence to predict which kind of human interactions lead to which outcomes.
  • Obviously, we continually use prediction in healthcare. For instance, a diagnosis is – among other things – a prediction of which therapy, if any, may lead to a better prognosis. Any A.I. system that aids in diagnosis is thus an instrument of prediction, one that also influences the kind of prediction made. Does the psyche play a role in this, or is it filtered out? Even a simple chest X-ray can be of a lung or of a ‘lung that forms part of a total human being.’ The resulting description and subsequent management may be very different.

The view of intelligence as prediction applies to humans as well as to A.I. It can, therefore, help enhance the development of human-oriented A.I.
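A minimal sketch of such a self-enhancing predictive system (illustrative numbers only, not a model of the brain): an online learner whose prediction error shrinks with each new experience, in contrast to a fixed set of rules.

```python
# Illustrative online learner: it starts ignorant and improves its own
# predictions from the error on each new experience.
import numpy as np

rng = np.random.default_rng(1)

true_weights = np.array([0.5, -1.2, 2.0])   # hidden regularity to be learned
weights = np.zeros(3)                        # the system starts ignorant
learning_rate = 0.05

for step in range(1, 2001):
    x = rng.normal(0, 1, 3)                  # a new experience
    target = true_weights @ x                # what actually happens
    prediction = weights @ x                 # what the system expects
    error = prediction - target
    weights -= learning_rate * error * x     # self-enhancement from error
    if step in (1, 10, 100, 2000):
        print(f"step {step:5d}  |error| = {abs(error):.4f}")
```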

One gains higher intelligence (toward better predictions) through experience, through models (theories, e.g., theory of mind), and through inference (plain thinking, in short). In all three, predictions are weakened by too many details, leading to an inefficient system and diminished learning. Predictions can also be weakened by too few details, leading to bias. ‘A.I. for humans’ is therefore about the best quantity and quality of details. In medical A.I., this matters in almost any application.
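A small sketch of this trade-off on synthetic data: with too few details (degree 1), the model is biased; with too many (degree 15), it fits noise and predicts new cases poorly. An intermediate richness of detail predicts best.

```python
# Illustrative under/overfitting: polynomial models of increasing degree
# on a small synthetic dataset, evaluated on fresh data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)

def true_f(x):
    return np.sin(2 * np.pi * x)   # the hidden regularity

x_train = np.sort(rng.uniform(0, 1, 20))
x_test = np.sort(rng.uniform(0, 1, 200))
y_train = true_f(x_train) + rng.normal(0, 0.2, x_train.size)
y_test = true_f(x_test) + rng.normal(0, 0.2, x_test.size)

for degree in (1, 4, 15):   # too few, reasonable, too many details
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train.reshape(-1, 1), y_train)
    err = mean_squared_error(y_test, model.predict(x_test.reshape(-1, 1)))
    print(f"degree {degree:2d}: test MSE = {err:.3f}")
```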

At best, this shows a direction in which additional medical research can be performed within real life itself. One might say: learning while doing. Here lies a whole domain of scientific development to be discovered.

Human-A.I. value alignment

For the necessary alignment, we must understand human values. This is far from evident. Are they mainly culture-relative: American, European, Chinese? Which ones are universal?

There is no easy solution to this conundrum; nevertheless, it is of crucial importance. Healthcare is about ‘becoming better.’ But what is a better human being? Therefore, what should be the goal of medicine? Simply the absence of disease? For several decades already, the World Health Organization has broadened the definition of health – and thus the goal of medicine – toward much more than the absence of disease.

With medical A.I., this will become even more important as it increasingly transcends cultural borders. Or, if it doesn’t, will it separate even further the China bloc from, say, the Latin America bloc or the Russia bloc? Each culture is valuable, but this should not lead to enemy blocs that see other cultures as threatening theirs. A.I. can enhance either direction, also – and to a huge degree – where health is concerned. Now is the time to consider this, since some degree of autonomous medical A.I. will most probably become increasingly real (even though we should be very careful), if only for the sake of individualized care combined with maximum accuracy.

In cultural perspective

Other cultures may have a different view of what counts as the phenomenon ‘disease,’ as well as of concrete diseases. In-depth similarities will certainly be found underlying differences in surface-level appearances. Medical A.I. for humans may help us find these profound similarities, leading to better healthcare for all: less symptomatic (cosmetic) medicine, more causal treatment where it counts, and therefore also more durable results.

In conclusion, one may say that medical A.I. will become increasingly important for humans and humanity. From an ethical viewpoint, we should try to develop it to be as humanely oriented as possible.
