Teach podcast

#39 Live from AIMW24: Artificial Intelligence in Health Professions Education

May 15, 2024



Transcript available via YouTube

With Dylan Fortman MD, Adam Rodman MD, and Laurah Turner PhD

Learn from the experts about the fundamentals of using AI in health professions education! Drs. Dylan Fortman, Adam Rodman, and Laurah Turner sit down with us to discuss what these models are, concerns to look out for when using these models, and how to integrate them into your teaching. Challenge yourself to explore where AI can take health professions education in the near future!

Claim CME for this episode at curbsiders.vcuhealth.org!



Meet our Guests:

Dylan Fortman, MD is a current Internal Medicine Chief Resident at the University of Pittsburgh Medical Center and is planning to start a Hematology-Oncology fellowship this summer. He is enthusiastic about advancing both clinical research in Hematology-Oncology and his ongoing medical education endeavors throughout his fellowship and future career. 

Adam Rodman MD is a general internist and hospitalist at Beth Israel Deaconess Medical Center in Boston, MA. He is an assistant professor of medicine at Harvard Medical School and the co-director of the iMED Initiative at BIDMC, which is dedicated to the study and best practices of digital education and artificial intelligence. His research focuses on clinical reasoning and human-computer interaction.

Laurah Turner PhD is an Assistant Dean of Assessment and Evaluation in the Office of Medical Education and an Assistant Professor in the Department of Medical Education at the University of Cincinnati College of Medicine. Her research focuses on leveraging artificial intelligence (AI), to advance medical education assessment, personalize learning experiences, and address disparities in training. Dr. Turner’s work aims to develop assessment tools and systems that provide insights into students’ readiness for residency. She is dedicated to advancing and actualizing precision medical education by tailoring learning experiences to individual needs and styles. In her AI system development, Dr. Turner emphasizes interpretability, robustness, and scalable oversight to ensure the trustworthiness and adaptability of these tools.

Show Segments

  • Intro, Disclaimer, Guest Bio
  • Basic definitions of large language models/artificial intelligence
  • Using AI in Morning Report
  • Challenges to using AI in teaching
  • Bias in AI
  • Prompting 
  • Precision Education
  • What’s the future?
  • Take home points/outro

Artificial Intelligence (AI) in Health Professions Education (HPE) Pearls

  1. It’s not if, but when, AI will take a larger role in medical education and delivery of health care. We should familiarize ourselves with the basics, and start playing around with easily accessible options now.
  2. The term AI is nonspecific. When we casually say AI in HPE, we are often talking about large language models, a type of machine learning capable of understanding and generating human-like responses.
  3. Large language models can enhance our work in MedEd in a variety of ways, such as facilitating morning reports, informing journal club discussions, or assessing the relevance of a study for individual patient cases.
  4. Exciting new developments are coming down the line, building on AI’s capabilities to create precision education.

AI in HPE Show Notes 

What is Artificial Intelligence? 

Experts in this field don’t find the term AI that helpful because it is such a broad concept; at its core, AI is technology that supports decision-making. The concept of artificial intelligence actually dates back to the 1950s, when Robert Steven Ledley and Lee B. Lusted wrote what some call a seminal paper for the field of medical informatics (Ledley 1959). Medical professionals have been using AI in their daily workflow for years, such as by incorporating scoring systems like the 4T score, the ACLS algorithm, and other calculators on MDCalc.

Machine learning is a subset of AI that uses data and algorithms to imitate the way humans learn. Generative AI is a type of machine learning that can create new data. Large language models (LLMs) are a specific type of machine learning that use neural networks, loosely modeled on the human brain, to generate realistic text output from human text input. The inner workings of large language models are considered a “black box,” as we do not fully understand what is happening inside them. Other types of machine learning algorithms are being used in medical education that are not large language models and can be used for tasks besides creating human-like text output (Choi 2020).

Since we’re not experts here, we’re going to keep it simple and say “AI” when we generally mean LLMs.

What model should we be using in medical education?  

Drs. Rodman, Turner, and Fortman suggest choosing a model depending on what you want to get out of it. Four widely used large language models as of this recording (ChatGPT, Gemini, Claude 3, and Llama 3) can each offer human-like text output. If you are new to this, just try one out and see if you get what you’re looking for.

OpenEvidence is more of a retrieval-augmented generation (RAG) system, which combines retrieval from a large curated database with generative capacity, allowing a more reliable (but more limited) response than the purely generative output of the LLMs listed above.
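To make the retrieve-then-generate idea concrete, here is a minimal, hypothetical sketch of the retrieval step in a RAG system. Real systems like OpenEvidence use vector embeddings over curated medical literature; here a toy word-overlap score stands in for that, and all function names and the sample corpus are illustrative, not any vendor’s actual implementation.

```python
# Toy sketch of retrieval-augmented generation (RAG). A simple word-overlap
# score stands in for the vector-embedding retrieval a real system would use.

def score(query: str, passage: str) -> int:
    """Count how many query words appear in the passage (toy relevance score)."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    """Return the k passages most relevant to the query."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Ground the model's answer in the retrieved sources instead of its
    open-ended generative output."""
    context = "\n".join(f"- {p}" for p in retrieve(query, passages))
    return f"Answer using ONLY these sources:\n{context}\n\nQuestion: {query}"

# Illustrative stand-in for a curated literature database
corpus = [
    "Heparin-induced thrombocytopenia is scored with the 4T score.",
    "The ACLS algorithm guides management of cardiac arrest.",
    "Morning report is a case-based teaching conference.",
]
print(build_prompt("heparin-induced thrombocytopenia score", corpus))
```

The key design point is that the language model only sees the retrieved passages, which is what makes the response more reliable but also more limited than a purely generative one.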

Using AI in Morning Report: Problem Representation and Differential Diagnoses

Dr. Fortman led a workshop at the Alliance for Academic Internal Medicine’s AIMW24 conference focused on leveraging large language models (LLMs) for clinical reasoning. He described how his institution integrates generative AI during morning reports to improve clinical reasoning and refine problem representation. Learners enter their problem representations into ChatGPT to generate a list of potential diagnoses; they can then adjust their statements using semantic qualifiers and compare their diagnoses with those suggested by the AI. Even when the LLM generates incorrect differentials, there is educational value in dissecting the reasoning behind its outputs and pinpointing areas of divergence or oversight. This is an approachable way to integrate AI into your teaching even if you don’t have a lot of background in the field.

Concerns with Using AI

AI can be wrong: be aware that the models can “hallucinate,” creating false information or using incorrect data to yield incorrect results. Dr. Fortman sees even these false outputs as an educational tool that can help learners explain why an AI differential is correct or incorrect and expand their understanding. It is also important to be aware of HIPAA compliance: we don’t know what these models do with data inputted into the system, so Dr. Fortman recommends avoiding sharing patient identifiers (Cooper 2023).

Bias In AI

Since LLMs are trained on human texts, bias is inherently present within them. Bias can also enter AI through the training data: biased human decisions, historical and social inequities, or flawed data sampling (groups over- or under-represented in the training data). These models undergo reinforcement learning from human feedback (RLHF), where humans train the models to strengthen or weaken connections in an effort to reduce bias, but this too has the potential to introduce new bias. Dr. Turner highlights that there is research into using synthetic data generated by these models to continue training them. Ongoing research is needed to explore how to build guardrails that reduce bias, as well as how to detect, flag, and mitigate bias when it does occur. It is important to consider not just how AI can provide biased answers, but also how we as humans use that information in practice (Park 2023, Rajpurkar 2022, Manyika 2019).

Impact of AI on Human Productivity

The psychology of using AI is an intriguing part of integrating it into practice. Dr. Rodman highlights research his group is working on which shows that humans readily accept confirmation when they are correct, but have more trouble being told by a machine that they are wrong.

Dr. Rodman highlights that we can learn from fields outside of medicine. Ethan Mollick, who contributed to a recent study, “Navigating the Jagged Technological Frontier,” and wrote the book Co-Intelligence, describes a “jagged technological frontier”: AI can enhance the productivity and quality of human work on some tasks, while on others it can make people worse. Further research is needed on where this jagged frontier lies, especially in medicine.

Using AI in Clinical Teaching

There are many ways to integrate AI into clinical teaching; here are a few easy suggestions:

  • In morning report, as above, to help clarify problem representation or solidify differential diagnoses
  • Draft letters of recommendation based on certain criteria using a learner’s CV and reviews
  • Summarize detailed feedback into key themes
  • Adjust written materials to different levels of education quickly and easily
  • Support journal club by putting the full paper and appendices into the context window, then asking AI to explain certain aspects of the trial design or identify if a patient fits with the population studied

How to prompt AI? 

  1. Give the model a role at the start.
    • e.g., “You are a DEI expert.” “You are a medical educator.” “You are a chief resident preparing a morning report.”
  2. Give it context.
    • Explain in detail the background of what you are looking for, e.g., “a clinical pathological case conference is … which has a pathologic diagnosis at the end which is probably rare.”
    • Dr. Turner highlights that the context window (the space where you can input information) has grown dramatically in recent models, which has made it harder to know what the model is really focused on. The transformer architecture behind these models is described in the paper “Attention Is All You Need.”
  3. Give the specific task, describing what you want the output to look like, including its format.
    • e.g., “list out the top five diagnoses” or “summarize the educational topics in order.”
    • You can share an example to show the desired format (Mesko 2023).
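The role–context–task pattern above can be sketched as a small helper that assembles the three pieces into one prompt. This is an illustrative function, not any particular vendor’s API; the field labels and example strings are assumptions for demonstration.

```python
# Hypothetical helper that assembles a prompt following the three steps above:
# a role, then context, then a specific task with the desired output format.

def build_teaching_prompt(role: str, context: str, task: str) -> str:
    """Combine role, context, and task into a single structured prompt."""
    return (
        f"{role}\n\n"        # 1. give the model a role
        f"Context: {context}\n\n"  # 2. give it background
        f"Task: {task}"      # 3. specify the task and output format
    )

prompt = build_teaching_prompt(
    role="You are a chief resident preparing a morning report.",
    context=("A clinical pathological case conference presents an "
             "undifferentiated case ending in a pathologic diagnosis, "
             "which is often rare."),
    task="List out the top five diagnoses, most to least likely, as a numbered list.",
)
print(prompt)
```

The same assembled string can then be pasted into whichever model you are trying out, which keeps the three-part structure consistent across sessions.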

Precision Education:

Dr. Turner describes her group’s work on individualizing the learning experience to the trainee, rather than forcing the trainee to adapt to the learning model. With precision medical education, they are trying to bring in data to inform the learning path, with several different models in progress currently:

  • Clinical reasoning workspace: AI will generate clinical scenarios that the learner can engage and interact with in a conversational manner. 
  • Memories: model that can assess a learner’s management of patients over time (i.e. types of tests ordered, cost effectiveness of care) to identify trends and give feedback to the learner.
  • Question Banks: models can generate unlimited practice questions similar to UWorld or AMBOSS, reducing costs for medical students and residents.
  • Patient Presentation Preparation with Feedback: they are developing a model that can listen to speech and give low stakes feedback.  
  • AI Coach: which can remember details about the learner over time and help tailor feedback for improvement.

The work is still in its early phases, currently piloting at the University of Cincinnati, and Dr. Turner highlights that it needs to be tested in other settings and educational pathways. This has the opportunity to give learners and educators more time for one-on-one education and to improve knowledge (Desai 2024, Turner 2024).

What’s to come with AI:

Dr. Fortman is enthusiastic about the potential of AI to take over task-based work, such as ordering tests and providing patient education, in order to reduce burnout and allow trainees to focus on education.

Dr. Turner is also excited about the potential for AI to allow learners and physicians to focus on what is important and to reduce administrative burden. She looks forward to AI helping create safe and inclusive learning environments, which will reduce the risk learners feel around making mistakes and allow them to close knowledge gaps.

Dr. Rodman is excited to see how AI leads us to reconsider how we think about diseases, and by the unpredictability of what these technologies could open up. These models can make statistical associations that humans previously could not, leading us to think in new ways.

Take home points:

Dr. Fortman: For the listeners who have not used AI a lot, just start experimenting with it and have some fun. 

Dr. Turner: Start to educate yourself about these new technologies, because it’s not a matter of if, but when, they will come to the forefront of medical education and healthcare. She highlighted that the AAMC working group for the Technology Advancement Committee, of which she is a part, is putting out recommendations on how to integrate AI into medical education, the admissions and selection process for medical schools, clinical care, and health equity. As these guidelines become more refined and widespread, she thinks they will help us use these technologies more.

Dr. Rodman: He is hopeful that these technologies can bring humanity back to medicine by reducing the time physicians spend at the computer, allowing more time for focus on the human relationship within medicine (being there for people, guiding them through their illnesses, and giving them agency and explanation in their illnesses). He hopes that we can use these technologies to make a better medical system for everybody, both patients and providers. 


  1. Bedside Rounds podcast by Dr. Rodman


Listeners will appreciate the opportunities and challenges to medical education presented by generative AI in 2024.

Learning objectives

After listening to this episode, listeners will…

  1. Describe what generative AI is and identify a particular platform to try out as a health professions educator.
  2. Develop a plan for specific use of generative AI in their current educator roles.
  3. Name the pitfalls and concerns around use of generative AI in healthcare and how to mitigate these.
  4. Explore what the future of AI may hold for medical education.


Dylan Fortman MD, Adam Rodman MD, and Laurah Turner PhD report no relevant financial disclosures.  The Curbsiders report no relevant financial disclosures. 


Heublein M, Fortman D, Turner L, Rodman A, Kryzhanovskaya E, Connor M. “#39 Artificial Intelligence in Health Professions Education.” The Curbsiders Teach Podcast. https://thecurbsiders.com/teach. May 15, 2024.


Episode Credits

Producer/Script: Molly Heublein MD
Infographic/Cover Art/Show Notes: Megan Connor MD
CME Questions: Era Kryzhanovskaya MD
Hosts/Editors: Era Kryzhanovskaya MD, Molly Heublein MD
Peer Reviewer: Michael Caputo DO
Guests: Dylan Fortman MD, Adam Rodman MD, and Laurah Turner PhD
Technical support: Podpaste
Theme Music: MorsyMusic

CME Partner


The Curbsiders are partnering with VCU Health Continuing Education to offer FREE continuing education credits for physicians and other healthcare professionals. Visit curbsiders.vcuhealth.org and search for this episode to claim credit.

Contact Us

Got feedback? Suggest a Teach topic. Recommend a guest. Tell us what you think.

Contact Us

We love hearing from you.

