General medicine, routine visits and the like, has gradually expanded from M.D.s to include Osteopaths and now Physician Assistants with no decline in quality, and Large Language Models like ChatGPT, colloquially called "Artificial Intelligence," can assist at very low cost.
A recent experiment found that ChatGPT Plus alone had a median diagnostic accuracy of more than 92%. Since most medical personnel won't have expertise in how to prompt the tool, that could easily go higher if organizations purchase predefined prompts to implement in clinical workflow and documentation. The experiment involved 50 physicians in family medicine, internal medicine, and emergency medicine; half were randomly assigned to use ChatGPT Plus to diagnose complex cases and half used legacy medical reference sites such as UpToDate and Google.
Both groups had similar accuracy, yet ChatGPT alone did better than all 50 doctors.
“We were surprised to find that adding a human physician to the mix actually reduced diagnostic accuracy though improved efficiency,” says Andrew S. Parsons, MD, MPH, who oversees the teaching of clinical skills to medical students at the University of Virginia School of Medicine. “These results likely mean that we need formal training in how best to use AI.”
Regardless of one experiment, no one is replacing doctors, even with the onerous increase in medical care costs that followed the government encouraging 50,000,000 people to enroll in the Affordable Care Act program, but the tool can augment human physicians, whether they went to medical school or not.
The experiment
The participating doctors created clinical vignettes based on real-life patient-care cases, including patients’ histories, physical exams and lab test results. The researchers then scored the results and examined how quickly the two groups made their diagnoses.
The median diagnostic accuracy for the doctors using ChatGPT Plus was 76.3%, while physicians using conventional approaches scored 73.7%. The ChatGPT group reached their diagnoses slightly more quickly overall – 519 seconds compared with 565 seconds.
That doesn't mean LLMs are ready to be rolled into practice; a lot of expertise goes into determining the downstream effects of diagnoses and treatment decisions. Still, they are a fine starting point and a roadmap for a future in which health care costs come down rather than another decade of 400% increases.