Cardiologists cannot detect AI-read echos
Identifying potential limitations of this study, James D. Thomas, MD, professor of medicine, Northwestern University, Chicago, pointed out that it was a single-center trial, and he raised the possibility of bias if cardiologists could accurately guess which of the reports they were evaluating had been generated by AI.
Dr. Ouyang acknowledged that this study was limited to patients at UCLA, but he pointed out that the machine learning algorithm was trained at Stanford (Calif.) University, so two separate patient populations were involved in developing and testing it. He also noted that the trial was exceptionally large, providing a robust dataset.
As for the potential bias, this was evaluated as a predefined endpoint.
“We asked the cardiologists to tell us [whether] they knew which reports were generated by AI,” Dr. Ouyang said. In 43% of cases, the cardiologists reported they were not sure. When they did express confidence that a report had been generated by AI, they were correct in only 32% of cases and incorrect in 24%. Dr. Ouyang suggested these numbers argue against bias playing a substantial role in the trial results.
Dr. Thomas, who has an interest in the role of AI in cardiology, cautioned that there are “technical, privacy, commercial, maintenance, and regulatory barriers” that must be overcome before AI is widely incorporated into clinical practice, but he praised this blinded trial for advancing the field. Even accounting for its limitations, he clearly shared Dr. Ouyang’s enthusiasm about the future of AI for EF assessment.
Dr. Ouyang reports financial relationships with EchoIQ, Ultromics, and InVision. Dr. Thomas reports financial relationships with Abbott, GE, egnite, EchoIQ, and Caption Health.