AI at the office: Are clinicians prepared?

AT SGIM 2023

Artificial Intelligence has arrived at medical offices, whether or not clinicians feel ready for it.

AI might result in more accurate, efficient, and cost-effective care. But it’s possible it could cause harm. That’s according to Benjamin Collins, MD, at Vanderbilt University Medical Center, Nashville, Tenn., who spoke on the subject at the annual meeting of the Society of General Internal Medicine.

Understanding the nuances of AI is even more important because of the quick development of the algorithms.

“When I submitted this workshop, there was no ChatGPT,” said Dr. Collins, referring to Chat Generative Pre-trained Transformer, a recently released natural language processing model. “A lot has already changed.”

Biased data

Biased data are perhaps the biggest pitfall of AI algorithms, Dr. Collins said. If garbage data go in, garbage predictions come out.

If the dataset that trains the algorithm underrepresents a particular gender or ethnic group, for example, the algorithm may not respond accurately to prompts. When an AI tool compounds existing inequalities related to socioeconomic status, ethnicity, or sexual orientation, the algorithm is biased, according to Harvard researchers.

“People often assume that artificial intelligence is free of bias due to the use of scientific processes in its development,” he said. “But whatever flaws exist in data collection and old data can lead to poor representation or underrepresentation in the data used to train the AI tool.”

Racial minorities are underrepresented in clinical studies; as a result, an AI tool trained on those data may produce skewed results for these patients.

The Framingham Heart Study, for example, which began in 1948, examined heart disease in mainly White participants. The findings from the study resulted in the creation of a sex-specific algorithm that was used to estimate a patient’s 10-year cardiovascular risk. While the risk score was accurate for White patients, it was less accurate for Black patients.

A study published in Science in 2019 revealed bias in an algorithm that used health care costs as a proxy for health needs. Because less money was spent on Black patients who had the same level of need as their White counterparts, the output inaccurately showed that Black patients were healthier and thus did not require extra care.
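The mechanism behind that finding can be sketched in a few lines of code. The sketch below uses hypothetical data (the patient records and the use of chronic-condition counts as a stand-in for true health need are illustrative assumptions, not data from the Science study): when an algorithm ranks patients for extra care by historical spending rather than by actual need, a patient on whom less was spent is deprioritized even when equally sick.

```python
# Illustrative sketch with hypothetical data: ranking patients for extra-care
# programs by a cost proxy versus by true need. Chronic-condition count stands
# in for "true need" purely for illustration.

patients = [
    {"id": "A", "chronic_conditions": 4, "annual_cost": 3200},  # same need as B, less spent
    {"id": "B", "chronic_conditions": 4, "annual_cost": 5100},
    {"id": "C", "chronic_conditions": 1, "annual_cost": 4000},
]

# Proxy-based ranking: highest historical spenders are flagged first.
by_cost = sorted(patients, key=lambda p: p["annual_cost"], reverse=True)

# Need-based ranking: the sickest patients are flagged first.
by_need = sorted(patients, key=lambda p: p["chronic_conditions"], reverse=True)

print([p["id"] for p in by_cost])  # ['B', 'C', 'A'] -- patient A falls to last
print([p["id"] for p in by_need])  # ['A', 'B', 'C'] -- A and B tie ahead of C
```

If spending on one group is systematically lower at the same level of illness, as the study found for Black patients, the cost-based ranking reproduces that gap in who receives additional care.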

Developers can also be a source of bias, inasmuch as AI often reflects preexisting human biases, Dr. Collins said.

“Algorithmic bias presents a clear risk of harm that clinicians must weigh against the benefits of using AI,” Dr. Collins said. “That risk of harm is often disproportionately distributed to marginalized populations.”

As clinicians use AI algorithms to diagnose and detect disease, predict outcomes, and guide treatment, trouble comes when those algorithms perform well for some patients and poorly for others. This gap can exacerbate existing disparities in health care outcomes.

Dr. Collins advised clinicians to push to find out what data were used to train AI algorithms, to determine how bias could have influenced the model and whether the developers risk-adjusted for bias. If the training data are not available, clinicians should press their employers and the AI’s developers for more information about the system.

Clinicians may face the so-called black box phenomenon, which occurs when developers cannot or will not explain what data went into an AI model, Dr. Collins said.

According to Stanford (Calif.) University, AI must be trained on large datasets of images that have been annotated by human experts. Those datasets can cost millions of dollars to create, meaning corporations often fund them and do not always share the data publicly.

Some groups, such as Stanford’s Center for Artificial Intelligence in Medicine and Imaging, are working to acquire annotated datasets so researchers who train AI models can know where the data came from.

Paul Haidet, MD, MPH, an internist at Penn State College of Medicine, Hershey, sees the technology as a tool that requires careful handling.

“It takes a while to learn how to use a stethoscope, and AI is like that,” Dr. Haidet said. “The thing about AI, though, is that it can be just dropped into a system and no one knows how it works.”

Dr. Haidet said he likes knowing how the sausage is made, something AI developers are often reluctant to disclose.

“If you’re just putting blind faith in a tool, that’s scary,” Dr. Haidet said.
