Should your doctor use AI?

Jan. 30, 2024.
1 min. read

Stanford University authors suggest how LLMs can be used, and warn of potential pitfalls

About the Writer

Amara Angelica

Amara Angelica is Senior Editor, Mindplex

Hospital scene (credit: A. Angelica/Dall-E 3)

In an article published today in the Journal of Internal Medicine, authors at Stanford University suggest that large language models (LLMs, like ChatGPT) can be used for administrative tasks.

These could include administrative tasks, such as summarizing medical notes and aiding documentation; knowledge-augmentation tasks, such as answering questions about diagnosis and medical management; and educational tasks.


However, the authors also warn of potential pitfalls, including a lack of HIPAA adherence, inherent biases, lack of personalization, and possible ethical concerns related to text generation.

The authors also suggest checks and balances: for example, always having a human being in the loop, and using AI tools to augment work tasks, rather than replace them. In addition, the authors highlight active research areas in the field that promise to improve LLMs’ usability in health care contexts.

Citation: Jesutofunmi A. Omiye, MD, MS, Haiwen Gui, BS, Shawheen J. Rezaei, MPhil, James Zou, PhD, and Roxana Daneshjou, MD, PhD. Large Language Models in Medicine: The Potentials and Pitfalls. Journal of Internal Medicine (2024).

