How can Medical A.I. Enhance the Human Touch?

September 8, 2022 | Artificial Intelligence, Philanthropically

This is about ‘plain’ medical A.I., such as any physician can use in the consultation room. The aim is a win for patients, physicians, and society as a whole.

Please also read Medical A.I. for Humans.

The danger of the reverse

The use of computers in medicine has notoriously not enhanced the human touch. Arguably, it has provoked the opposite by making medical practice a more technological undertaking to the detriment of the human side. That is not to be blamed on the technology but on how it has frequently been deployed. This, in turn, depends on how developers and decision-makers have tried to make the technology valuable.

Even with good intentions, one can make informatics more valuable bit by bit while making it less humanly valuable in the long run. For instance, making a workflow more efficient means each ‘case’ can be handled more quickly, reducing the time for human contact. This leaves the patient less time to think through health-related issues, such as the implications of a proposed medical investigation or therapy. Repeat this many times, and you get a very ‘efficient,’ yet humanly rather inefficient, kind of medicine.

Doubtless, this danger is even greater with the use of A.I.

So, how do we steer A.I. in a more humane direction?

Logical subgoals are:

  • Reducing liabilities such as human bias, privacy and security breaches, physicians’ misdiagnoses (test misinterpretations, etc.), and unnecessary imaging studies or operations
  • Personalizing healthcare, such as through ‘smart nutrition’: which foods are better or worse for which person in, for instance, spiking blood sugar
  • Joint decision-making, such as by bringing the right data in the right format at the right time to enable the consideration of patients’ preferences in medical investigations and treatments

All very well, but to attain these goals, it’s not enough to bring the needed A.I. technology to the caregiver. Fortunately, we can also use A.I. to incentivize correct deployment, mainly in light of the third subgoal.

The following is a specific background for thinking about this:

Seeing the A.I. as entering the decision process

Medical decisions can and should increasingly be seen as the result of joint decision-making. The patient’s centrality in this can go as far as his knowledge and openness allow. The other possible partners in each decision process are the caregiver(s) and, as can be envisioned, the A.I. system.

At each moment in this decision process, the initiative for the next step may shift. However, ultimate control over who takes the initiative should rest with the patient. The patient is the ‘user’ of the process concerning his health. Thus, we can coin the term ‘user-initiated initiative.’

This way, A.I. can enter the decision process most flexibly.
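
For developers, here is a minimal, hypothetical sketch (in Python) of what ‘user-initiated initiative’ could look like. All names ― Party, DecisionProcess, request_initiative, propose_step ― are illustrative assumptions, not references to any existing system. The only point it demonstrates is that the initiative may shift between patient, caregiver, and A.I., while every transfer requires the patient’s approval.

# A minimal, hypothetical sketch of 'user-initiated initiative'.
# All names are illustrative assumptions, not an existing system's API.

from enum import Enum, auto


class Party(Enum):
    PATIENT = auto()
    CAREGIVER = auto()
    AI_SYSTEM = auto()


class DecisionProcess:
    """Joint decision process: the initiative may shift at each step,
    but ultimate control over who takes it stays with the patient."""

    def __init__(self):
        self.initiative = Party.PATIENT  # the patient starts as initiator
        self.log = []

    def request_initiative(self, party: Party, patient_approves: bool) -> bool:
        """A caregiver or A.I. system may ask to take the next step,
        but only the patient's approval actually transfers the initiative."""
        if party is Party.PATIENT or patient_approves:
            self.initiative = party
            self.log.append(f"initiative -> {party.name}")
            return True
        self.log.append(f"initiative request by {party.name} declined")
        return False

    def propose_step(self, party: Party, proposal: str) -> None:
        """Only the current initiator proposes the next step; the patient
        can reclaim the initiative at any moment via request_initiative."""
        if party is self.initiative:
            self.log.append(f"{party.name} proposes: {proposal}")
        else:
            self.log.append(f"{party.name} has no initiative; proposal ignored")


if __name__ == "__main__":
    process = DecisionProcess()
    process.propose_step(Party.PATIENT, "discuss chest pain and options")
    # The A.I. system asks to take the next step; the patient agrees.
    process.request_initiative(Party.AI_SYSTEM, patient_approves=True)
    process.propose_step(Party.AI_SYSTEM, "show evidence on stenting vs. check-ups")
    # The patient reclaims the initiative at will.
    process.request_initiative(Party.PATIENT, patient_approves=True)
    for entry in process.log:
        print(entry)

In such a setup, the flexibility lies in how freely the initiative can move around, while the patient’s approval remains the single fixed point of control.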

How can this enhance the human touch?

For example, confirmation bias can lead to the continuation of one’s actions even in the absence of the desired outcome. Human beings are notoriously prone to this bias, physicians being no exception. Thus, David Epstein (*) notes, “Stents for stable [angina pectoris] patients prevent zero heart attacks and extend the lives of patients a grand total of none at all.” Yet stent operations are performed on many patients who are unlikely to benefit. Patients can only undergo them.

However, with A.I. entering the decision process, the flow changes. In a good scenario, the change puts the patient at the heart of the decision while valuing the physician’s input even more than before. The entire process is fluid. More than ever, the physician can take the patient into account as a total human being. In the stenting example, the patient may see and understand why an operation may not be the best option. Instead, a regular check-up may be preferable, mitigating most of the risk.

The same is relevant for many other forms of surgery.

As a significant additional bonus, a patient who feels in control of the decision will probably have less incentive to litigate if anything unexpectedly turns out suboptimally.

Another realistic bonus relates to physician burnout, which officially affects nearly half of all physicians in the US. Lack of deep human contact with patients is seen as a substantial cause. Obviously, these burnouts also affect patients. Enhancing the human touch is a welcome step toward more empathy and less burnout.

Conclusion

The general conclusion from this example is that A.I. can enhance the human touch by bringing people closer together from the inside out. For A.I. developers and vendors, this may be a strong element in the value proposition. In medical A.I., we can think of caregivers and patients, as in the example above.

As said, we should ensure the reverse doesn’t happen. Then, we can confidently proceed to use A.I. in many ways to make this world, including medicine, a more humane one.

**

(*) See David Epstein’s essay, “When Evidence Says No, But Doctors Say Yes” (ProPublica, 2017). The essay is also recommended by the world-renowned cardiologist Eric Topol in his excellent book “Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again” (Basic Books, 2019).
