In my most recent column (AI & U), I suggested that artificial intelligence (AI), in its latest newsworthy iteration, the chatbot, offers some potentially useful opportunities. For example, in the short term the ability of a machine to search the diagnostic possibilities and treatment options in a matter of seconds sounds very appealing. The skills needed to ask the chatbot the best questions and then interpret the machine’s responses would still require a medical school education. Good news for those of you worried about job security.
However, let’s look further down the road at how AI and other technological advances might change the look and feel of primary care. It is reasonable to expect that a chatbot could engage the patient in a spoken (or written) dialog in the patient’s preferred language, targeted to his or her educational level. You already deal with this kind of interaction in a primitive form when you call the customer service department of even a small company. That is, if you are lucky enough to find the number buried in the company’s website.
The “system” could then perform a targeted exam using a variety of sensors. Electronic stethoscopes and tympanographic sensors already exist. While currently most sonograms are performed by trained technicians, one can envision the technology being dumbed down to the point that the patient could operate most of the sensors himself or herself, provided the patient could reach the body part in question. The camera on a basic cell phone can take an image of a skin lesion that can already be compared with a standard set of normals and abnormals. While currently a questionable lesion triggers the provider to perform a biopsy, it is possible that sensors could become so sensitive and the algorithms so clever that the biopsy would be unnecessary. The pandemic has already shown us that patients can obtain sample swabs and accurately perform simple tests in their homes.
Once the “system” has made the diagnosis, it would then converse with the patient about the various treatment options and arrange follow-up. One would hope that, if the “system’s” diagnosis included a fatal outcome, it would trigger a face-to-face interaction with a counselor and a team of social workers to break the bad news and provide some kind of emotional support.
Those of you who are doubting Dorothys and Thomases may be asking: What about scenarios in which the patient’s chief complaint is difficulty breathing or sudden onset of weakness? Remember, I am talking about the usual 8 a.m.–6 p.m. primary care office. Any patient with a possibly life-threatening complaint would be triaged by the chatbot and would be seen at some point by a real human. However, it is likely that this individual’s training would not require the breadth of the typical medical school education and instead would be targeted at the most common high-risk scenarios. This higher-acuity specialist would, of course, be assisted by a chatbot.
Patients with complaints primarily associated with mental illness would be seen by humans specializing in that area, although I suspect there are folks somewhere brainstorming about how chatbots could potentially be effective counselors.
Clearly, the future I am suggesting leaves the patient with fewer interactions with a human, and certainly very rarely with a human who has navigated what we think of today as a traditional medical school education.
Would patients accept this future without complaint? Would they have a choice? Do you like being interrogated by the prerecorded voice on the phone tree of some company’s customer service line? Do you have a choice? If that interrogation were refined to the point where it saved you time and resulted in the correct answer 99% of the time, would you still complain?
If patients found that most of their primary care complaints could be handled more quickly by an AI system with minimal physician intervention and that system offered a success rate of over 90% when measured by the accuracy of the diagnosis and management plan, would they complain? They may have no other choice than to complain if primary care continues to lose favor among recent medical school graduates.
And what would the patients complain about? They already complain about the current system, in which they feel that face-to-face encounters with their physician are becoming less frequent. I often hear complaints that “the doctor just looked at the computer, and he didn’t really examine me,” by which I think they sometimes mean “touch me.”
I suspect we will discover what most of us already suspect: that there is something special about the eye-to-eye contact and tactile interaction between the physician and the patient. The osteopathic tradition clearly makes this a priority when it utilizes manipulative medicine. It may be that if primary care medicine follows the AI-paved road I have imagined, it won’t be able to match the success rate of the current system. Without that human element, hands-on or otherwise, even if the diagnosis is correct and the management is spot on, it just won’t work as well.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at pdnews@mdedge.com.