Should your doctor use AI?
Jan. 30, 2024.
Stanford University authors suggest how LLMs can be used, and warn of potential pitfalls
In an article published today in Annals of Internal Medicine, authors at Stanford University suggest that large language models (LLMs, such as ChatGPT) can be used for administrative tasks, such as summarizing medical notes and aiding documentation; for knowledge-augmentation tasks, such as answering questions about diagnosis and medical management; and for educational tasks.
However, the authors also warn of potential pitfalls, including a lack of HIPAA adherence, inherent biases, lack of personalization, and possible ethical concerns related to text generation.
The authors also suggest checks and balances: for example, always keeping a human in the loop, and using AI tools to augment work tasks rather than replace them. In addition, they highlight active research areas that promise to improve LLMs' usability in health care contexts.
Citation: Jesutofunmi A. Omiye, MD, MS; Haiwen Gui, BS; Shawheen J. Rezaei, MPhil; James Zou, PhD; Roxana Daneshjou, MD, PhD. Large Language Models in Medicine: The Potentials and Pitfalls. Annals of Internal Medicine. https://doi.org/10.7326/M23-2772