Flawed algorithms
Incorrect AI algorithms that are broadly adopted could negatively affect the health of millions.
Perhaps the most unsettling claim comes from causaLens, whose website states: “Causal AI is the only technology that can reason and make choices like humans do.” It is a tantalizing tagline, and it is categorically untrue.
Reasoning is a mysterious and complex neurophysiological function that still eludes our understanding, but one thing is certain: medical reasoning begins with listening, seeing, and touching.
As AI infiltrates mainstream medicine, opportunities for hearing, observing, and palpating will be greatly reduced.
Folkert Asselbergs from University Medical Center Utrecht, the Netherlands, who has cautioned against overhyping AI, was the discussant for an ESC study on the use of causal AI to improve cardiovascular risk estimation.
He flashed a slide of a 2019 Science article on racial bias in an algorithm used by U.S. health care systems. Remedying that bias “would increase the percentage of Black people receiving additional help from 17.7% to 46.5%,” according to the authors.
Successful integration of AI-driven technology will come only if we build human interaction into every patient encounter.
I hope I don’t live to see the rise of the physician cyborg.
Artificial intelligence could be the greatest boon since the invention of the stethoscope, but it will be our downfall if we stop administering a healthy dose of humanity to every patient encounter.
Melissa Walton-Shirley, MD, is a clinical cardiologist in Nashville, Tenn., who has retired from full-time invasive cardiology. She disclosed no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.