Medical A.I. for Humans

June 4, 2022 | Artificial Intelligence, Health & Healing

Medical A.I. is flourishing and will flourish even more in the future. Therefore, we must ensure that it serves the total human being ― which is far from evident.

The main challenge

Through medical A.I., even more than ever before, humans can become either more humane or, on the contrary, more robotized from the inside out. This is not so much about putting probes in the brain as it is about how people are treated in general. Basically, A.I. can reinforce categorization (of humans, emotions, disease types, and therapies), whether appropriately or not.

This requires some basic and original insights into intelligence and the human being. Ideally and increasingly, we should also know what consciousness is about.

What is the ‘total human being’?

This denotes a fundamentally undivided entity. On top of non-dualistic (body = mind) reality, there is the challenge of complexity, especially in the mind/brain. As a direct consequence, we can only partly understand the mind/brain at the conceptual level. In short, the mind/brain is unlike clockwork or even a modern computer, however complicated such a machine may be.

Thus, together with the conceptual level – interesting for accountability, for instance – we need the subconceptual level for an excellent understanding of ourselves. This is the level of mental-neuronal patterns. Without heeding it, we treat ourselves as some modern robot of the relatively inexpensive kind ― hardly respectful and even detrimental, as frequently shows in the field of psychosomatics. A lack of respect for our human inner complexity (‘depth’) may lead to unhappy people, diminished mental growth, much burnout, etc. Due to contemporary socio-cultural challenges – which it also reinforces – this has become increasingly important.

(Artificial) Intelligence as prediction utility

This may, for some, be an original take on the phenomenon of intelligence. It sees intelligence as predicting some part of the future, even when that future is highly uncertain and/or complex and can only be approximated. In view of complexity, this can only be achieved by a self-scalable system, able to predict increasingly well. The application of a simple, fixed set of rules does not deserve to be called intelligence.

Some examples:

  • Intelligence is needed by animals to predict where food or predators may be.
  • Humans use intelligence to predict which kind of human interactions lead to which outcomes.
  • Of course, we also use our intelligence in healthcare. For instance, a diagnosis is – among other things – a prediction of which therapy, if any, may lead to a better prognosis. Any A.I. system that aids in diagnosis is a prediction asset, which also influences the kind of prediction being made. Does the psyche play a role in this, or is it filtered away? Even a simple chest X-ray can be of a lung, or of a lung that forms part of a total human being. The description of that X-ray may be very different, as is the subsequent management.
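As a toy illustration of prediction that improves with experience, here is a minimal Python sketch (all values hypothetical): an agent repeatedly predicts a noisy quantity and updates its estimate after each observation, so its prediction errors shrink over time ― a bare-bones version of "predicting increasingly well."

```python
import random

random.seed(1)

# A minimal 'learning to predict' sketch: the agent observes a noisy
# signal of an unknown quantity (e.g., a hypothetical lab value) and
# refines its estimate from experience.
true_value = 37.2
estimate = 0.0
errors = []

for n in range(1, 201):
    observation = true_value + random.gauss(0, 5.0)
    errors.append(abs(estimate - observation))   # error before updating
    estimate += (observation - estimate) / n     # incremental mean update

early = sum(errors[:20]) / 20    # average error of the first 20 predictions
late = sum(errors[-20:]) / 20    # average error of the last 20 predictions
print(early, late)               # later predictions are much closer
```

The point is not the specific update rule but the shape of the process: experience accumulates, and the predictions sharpen.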

The view on intelligence as prediction utility is applicable to animals, humans, and A.I. and is, therefore, an excellent candidate to enable the development of human-oriented A.I.

One gains higher intelligence (toward better predictions) through experiences, through models (theories, e.g., theory of mind), and through inferencing (plain thinking, for short). In all three, predictions are weakened by too many details, leading to an inefficient system and diminished learning. They can also be weakened by too few details, leading to bias. A.I. for humans is about the best quantity and quality of details. In medical A.I., this is important in almost any application.

Individualizing care

This is a hot topic in medicine generally. Not all people who carry a specific diagnosis should be treated the same. That is, we need to avoid exaggerated bias. As noted above, respecting human complexity at the subconceptual level is, together with the conceptual, also important here.

A.I. is an excellent instrument to accomplish this in a Compassionate way, not only to produce efficient members-of-society but total human beings in their unique diversity. By doing so, medical A.I. can deliver the healthcare that people need. If this seems idealistic, well, it is, as well as necessary.

In research

Medical A.I. for humans is vital for research as an add-on to present-day research, which is almost exclusively conceptual. For instance, in Randomized Controlled Trials (comparing groups in lab circumstances), conceptualizing the object of study is crucial by default. Most of the effort in a good RCT goes into this endeavor. Results are then extrapolated, which breaks down when the conceptualization is flawed.

A.I. can be used to aid this conceptualization, which is a good thing but – as you may know by now – not everything. Take stress.

Stress should not be treated as a blob with a single score. Within stress, one may find a kaleidoscope of meaningfulness. Treating it as a blob is like putting all colors of paint together in one bucket. The result is gray. Research on gray will not yield what research on the individual colors can. Moreover, where distinct colors or hues have an effect, other colors may cancel it out. ‘Good stress’ and ‘bad stress’ may neutralize each other. As a result, one may find no influence where there is much. Appropriately distinguishing the colors of stress leads to better science and better applications. For this, we need medical A.I. centered on humans.
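The cancellation argument can be made concrete with a toy calculation. In this Python sketch (purely illustrative numbers, not real data), a hypothetical intervention helps one stress subgroup and harms another; pooled into one blob, the measured effect vanishes:

```python
# Hypothetical effect sizes of an intervention in two stress subgroups.
good_stress_effect = [+0.8, +1.1, +0.9, +1.2]   # 'good stress' group improves
bad_stress_effect = [-0.9, -1.2, -0.8, -1.1]    # 'bad stress' group worsens

pooled = good_stress_effect + bad_stress_effect  # the undifferentiated 'blob'

def mean(xs):
    return sum(xs) / len(xs)

print(mean(pooled))              # ≈ 0: 'no effect' at the blob level
print(mean(good_stress_effect))  # ≈ +1: strong effect in one subgroup
print(mean(bad_stress_effect))   # ≈ -1: opposite effect in the other
```

A study on the pooled blob would conclude "no influence," while each color of stress shows a substantial one.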

Something similar applies to all kinds of emotions. It is still in popular vogue to categorize human emotions into the basic seven (or some other number). Meanwhile, the scientific insight about feelings/emotions is that, in many cases, they cannot properly be categorized. They are too individually unique. Thus, categorizing/conceptualizing them for scientific research yields results that may be hardly applicable to real life. Again, medical A.I. may reinforce over-categorization (which it already frequently does), or it may help transcend it.

Human-A.I. value alignment

For this necessary alignment, we also need to understand human values. This is far from evident. Are they mainly culture-relative: American, European, Chinese? Which ones are universal?

There is no easy solution to this conundrum, yet it is of crucial importance. Healthcare is about ‘becoming better.’ But what is a better human being? Consequently, what should be the goal of medicine? Simply the absence of disease? For several decades, the World Health Organization has defined health – and thus the goal of medicine – as much more than the absence of disease.

With medical A.I., this will become even more important since A.I. will increasingly transcend borders. Or, if it doesn’t, will it separate even further, say, a Chinese bloc from a Latin American or Russian bloc? Each culture is valuable, but this should not lead to hostile blocs that see other cultures as threats. A.I. can support either direction. Now is the time to consider this, since autonomous medical A.I. will undoubtedly become increasingly real, if only for the sake of individualized care.

In cultural perspective

Other cultures may have a different view of what counts as ‘disease’ in general, as well as of concrete diseases. In depth, similarities will certainly be found alongside surface-level differences. Medical A.I. for humans may help us find these deeper similarities, leading to better healthcare for all: less symptomatic (cosmetic) medicine, more causal treatment where it counts, and therefore also more durable results.

In conclusion, one may say that medical A.I. will become increasingly important for humans and humanity. We should try to develop it as humanely oriented as possible.
