How can Medical A.I. Enhance the Human Touch?

September 8, 2022 · Artificial Intelligence, Philanthropically

This is about ‘plain’ medical A.I. of the kind any physician can use in the consultation room. The aim is a win for patients, physicians, and society as a whole.

Please also read Medical A.I. for Humans.

The danger of the reverse

The use of computers in medicine has notoriously not enhanced the human touch. Arguably, it has provoked the opposite by making medical practice generally a more technological undertaking, to the detriment of the human side. That is not to be blamed on the technology but on how it has frequently been deployed. This, in turn, depends on how developers and decision-makers have tried to make the technology valuable.

Even with good intentions, one can make informatics more valuable bit by bit while making it less humanly valuable in the long run. For instance, making a workflow more efficient lets each ‘case’ be handled more quickly, reducing the time for human contact. This also diminishes the time available for the patient to think about health-related issues, such as the implications of a proposed medical investigation or therapy. Repeat this many times, and you get a very ‘efficient’ yet, humanly speaking, rather inefficient kind of medicine.

Doubtless, this danger is even greater with the use of A.I.

So, how do we steer A.I. in a more humane direction?

Logical subgoals are:

  • Reducing liabilities such as human bias, privacy and security breaches, physicians’ misdiagnoses (test misinterpretations, etc.), and unnecessary imaging studies or operations
  • Personalizing healthcare, such as through ‘smart nutrition’: predicting which foods are better or worse for a specific person, for instance in how much they spike blood sugar (a minimal sketch follows this list)
  • Joint decision-making, such as by bringing the right data in the right format at the right time to enable the consideration of patients’ preferences in medical investigations and treatments

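As a purely illustrative aside on the ‘smart nutrition’ subgoal, the following is a minimal sketch of what such personalization could look like computationally. It is an assumption-laden toy model, not an existing system: the features, parameters, and numbers are all hypothetical.

```python
# Purely hypothetical toy model of 'smart nutrition' personalization.
# Idea: the same food can spike blood sugar differently per person, so foods
# are ranked by a person-specific predicted response, not by a generic score.

from dataclasses import dataclass

@dataclass
class Person:
    carb_sensitivity: float   # predicted spike per gram of carbohydrate
    fiber_damping: float      # how strongly fiber blunts this person's spike

@dataclass
class Food:
    name: str
    carbs_g: float
    fiber_g: float

def predicted_spike(person: Person, food: Food) -> float:
    """Toy model: carbohydrates drive the spike; fiber dampens it, per person."""
    raw = person.carb_sensitivity * food.carbs_g
    damping = 1.0 - min(person.fiber_damping * food.fiber_g / 10.0, 0.9)
    return raw * damping

def rank_foods(person: Person, foods: list[Food]) -> list[tuple[str, float]]:
    """Return foods ordered from smallest to largest predicted spike for this person."""
    scored = [(food.name, predicted_spike(person, food)) for food in foods]
    return sorted(scored, key=lambda pair: pair[1])

if __name__ == "__main__":
    someone = Person(carb_sensitivity=0.08, fiber_damping=0.5)
    menu = [
        Food("white bread", carbs_g=30, fiber_g=1),
        Food("lentils", carbs_g=30, fiber_g=8),
        Food("apple", carbs_g=20, fiber_g=4),
    ]
    for name, spike in rank_foods(someone, menu):
        print(f"{name}: predicted spike ~ {spike:.2f}")
```

In a real setting, the person-specific parameters would be learned from data such as continuous glucose monitoring rather than hard-coded.
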
All very well, but to attain these goals, it’s not enough to bring the needed A.I. technology to the caregiver. Fortunately, we may also use A.I. to incentivize correct deployment, mainly in light of the third subgoal.

The following is a specific background for thinking about this:

Seeing the A.I. as entering the decision process

Medical decisions can and should increasingly be seen as the result of joint decision-making. The patient’s centrality in this can go as far as his knowledge and openness allow. Other possible partners in each decision process are the caregiver(s) and the A.I. system, as can be envisioned.

At each moment in this decision process, the initiative for the next step may change. However, the ultimate control over who takes the initiative should be with the patient. The patient is the ‘user’ of the process concerning his health. Thus, we can coin the term ‘user-initiated initiative.’

This way, A.I. can enter the decision process most flexibly.
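To make ‘user-initiated initiative’ slightly more concrete, here is a minimal, purely hypothetical sketch of such a decision loop in code. The three partners can each contribute the next step, but the patient decides, turn by turn, who takes the initiative. All names and messages are illustrative assumptions, not a description of any existing system.

```python
# Hypothetical sketch of 'user-initiated initiative' in joint decision-making.
# Three partners can contribute the next step; the patient always controls
# who takes the initiative and may end the process at any time.

from typing import Callable

Step = Callable[[list[str]], str]   # takes the history so far, returns a contribution

def patient_step(history: list[str]) -> str:
    return "Patient: I would like to understand the risks of the proposed stent."

def caregiver_step(history: list[str]) -> str:
    return "Caregiver: For stable angina, a stent may not extend life; let us weigh the options."

def ai_step(history: list[str]) -> str:
    return "A.I.: Here are the relevant trial data and this patient's profile, in readable form."

def joint_decision_process(choose_initiative: Callable[[list[str]], str]) -> list[str]:
    """Run the loop; the patient (the 'user') decides, turn by turn, who acts next."""
    partners: dict[str, Step] = {
        "patient": patient_step,
        "caregiver": caregiver_step,
        "ai": ai_step,
    }
    history: list[str] = []
    while True:
        who = choose_initiative(history)   # always the patient's call
        if who == "stop":                  # the patient may stop whenever desired
            break
        history.append(partners[who](history))
    return history

if __name__ == "__main__":
    # A scripted example of the patient's choices; in reality, this would be interactive.
    script = iter(["patient", "ai", "caregiver", "stop"])
    transcript = joint_decision_process(lambda history: next(script))
    print("\n".join(transcript))
```

The point of the sketch is structural: the A.I. is one partner among others, and control over the initiative sits with the patient rather than with the system.
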

How can this enhance the human touch?

For example, confirmation bias can lead to continuing one’s actions even when the desired outcome fails to appear. Human beings are notoriously prone to this bias, physicians being no exception. Thus, David Epstein (*) notes, “Stents for stable [angina pectoris] patients prevent zero heart attacks and extend the lives of patients a grand total of none at all.” Yet stent operations are performed on many patients who are unlikely to benefit. Such patients can only undergo them.

However, with A.I. entering the decision process, the flow changes. In a good scenario, the change puts the patient at the heart of the decision while valuing the physician’s input even more than before. The entire process is fluid. More than ever, the physician can take the patient into account as a total human being. In the stenting example, the patient may come to see and understand why an operation may not be the best option. Instead, a regular check-up may be preferred, mitigating most of the risk.

The same is relevant for many other forms of surgery.

As a significant additional bonus, the patient who feels in control of the decision will probably have less incentive to litigate if anything unexpectedly turns out sub-optimally.

Another realistic bonus relates to physician burnout, which officially affects nearly half of all physicians in the US. Lack of deep human contact with patients is seen as a substantial cause. Obviously, these burnouts also affect patients. Enhancing the human touch is therefore welcome, fostering more empathy and less burnout.

Conclusion

The general conclusion from this example is that A.I. can enhance the human touch by bringing people closer together from the inside out. For A.I. developers/vendors, this may be a good element in the value proposition. In medical A.I., we can think of caregivers and patients as in the example.

As said, we should ensure the reverse doesn’t happen. Then, we can definitely proceed to use A.I. in many ways to make this world, including medicine, a more humane one.

**

(*) See David Epstein’s essay “When Evidence Says No, But Doctors Say Yes” (ProPublica, 2017). The essay is also recommended by the world-renowned cardiologist Eric Topol in his excellent book Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again (Basic Books, 2019).
