OpenAI’s newest AI model, GPT-4, has passed the US medical licensing exam with flying colors, outperforming both earlier AI models and some licensed doctors. According to Dr. Isaac Kohane, a Harvard computer scientist and physician, GPT-4 shows better clinical judgment than many doctors and can diagnose rare conditions as accurately as a physician with years of experience. In a clinical thought experiment, Kohane gave the bot several key details about a newborn he had previously treated, and the machine correctly diagnosed a rare condition called congenital adrenal hyperplasia.
However, GPT-4 is not without its flaws. Like its predecessors, it can make mistakes, from simple clerical slips to math errors that could lead to serious problems in prescribing or diagnosis. The bot also has no ethical compass: its reasoning is limited to patterns in its training data and involves no true understanding or intentionality.
Despite these limitations, GPT-4 has tremendous potential in medicine. It has proven to be an excellent translator, able to distill technical jargon into language a sixth-grader can understand and to translate discharge instructions for patients who speak other languages. It can also offer doctors helpful suggestions about bedside manner and summarize lengthy reports or studies in the blink of an eye.
The authors of a forthcoming book, “The AI Revolution in Medicine,” caution that we must envision a world with ever-smarter machines and think hard about how we want that world to work. While GPT-4 can free up precious time and resources in the clinic, we must also consider how to ensure it gives safe and effective advice.
In conclusion, GPT-4 is a game-changing AI model with vast potential in the medical field. However, its limitations and potential risks must be carefully weighed against its benefits. As AI technology continues to advance, we must be intentional about how we use it in the pursuit of better health outcomes.