June 23, 2022
Let’s say you’ve gone to the doctor’s office hoping to get some answers about the severe headaches you are experiencing. In the exam room, you notice your physician or advanced practice provider diligently typing on their computer as they ask you questions about your health and any issues you may be experiencing. What in the world are they doing?
They are documenting relevant information gathered from your physical and interview into the electronic medical record. This process helps providers narrow down the most likely diagnoses that could be causing your headaches. It is also an important accounting of your health for other clinicians who may access your patient chart.
Like anything else in health care, your physician or advanced practice provider began learning how to perform this crucial skill while training to be a medical professional. At the Jump Trading Simulation & Education Center, medical students, residents and other clinician learners practice documenting patient information during simulated events.
They then must wait for faculty feedback on their medical note-writing, which can take weeks to arrive. Our Research team at Jump is now exploring an automated grading system that can provide feedback on this skill in a timely manner.
What are faculty looking for when they grade patient notes? They check whether learners asked the right questions, conducted the right exams and applied sound clinical reasoning.
Research suggests learners should receive timely and specific feedback to cement learning and improve future documentation. However, faculty don’t always have the time to review patient notes immediately after a simulation. By the time students receive feedback, they may have forgotten many details of the simulated encounter.
To address this challenge, we received a grant to work with the University of Illinois Urbana-Champaign on software that applies machine learning and natural language processing to create an automated grading system. The goal is to ensure students receive feedback immediately or within the next business day.
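The project's actual models aren't described in this post, but the core idea of an automated grader can be sketched in a few lines: compare a student's note against a rubric of items faculty look for and mark each item present or missing. The rubric items, note text and matching rule below are all hypothetical illustrations, not the real system.

```python
import re

# Hypothetical rubric: each item is a checklist phrase faculty look for.
RUBRIC = {
    "onset": "asked when the headaches began",
    "severity": "asked patient to rate headache severity",
    "neuro_exam": "performed a focused neurological exam",
    "differential": "listed migraine and tension headache in the differential",
}

STOPWORDS = {"the", "a", "and", "in", "to"}

def tokens(text):
    """Lowercase word tokens with a crude plural-stripping normalization."""
    words = re.findall(r"[a-z]+", text.lower())
    return {w[:-1] if w.endswith("s") and len(w) > 3 else w for w in words}

def grade_note(note, rubric, threshold=0.5):
    """Mark a rubric item present when enough of its key words appear in the note."""
    note_tokens = tokens(note)
    results = {}
    for item, phrase in rubric.items():
        phrase_tokens = tokens(phrase) - STOPWORDS
        overlap = len(phrase_tokens & note_tokens) / len(phrase_tokens)
        results[item] = overlap >= threshold
    return results

note = """Patient rates headache severity 8/10. Headaches began two weeks ago.
Neurological exam performed, no focal deficits. Differential includes migraine."""
scores = grade_note(note, RUBRIC)
# Every rubric item is detected in this note: all(scores.values()) is True.
```

A production system would replace the word-overlap rule with trained NLP models, but the input (a free-text note) and output (per-item credit) have the same shape.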
This year, our team is testing the tool with medical students at the University of Illinois College of Medicine Peoria who are learning how to make better use of resources to provide real value for patients. The goal is to determine how accurate the software is compared with human graders, what value it adds and how it might shape future chart note writing.
So far, feedback from students and faculty has been positive. The machine is not perfect: it sometimes misses items, awards credit for things learners didn't do or withholds credit for things they did. However, it is continually learning and improving. The more data collection opportunities we get, the more we can enhance it.
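Measuring those imperfections means comparing the machine's per-item grades against a faculty grader's. A minimal sketch, using made-up grades for illustration, might tally agreement and sort disagreements into the two error types described above:

```python
# Hypothetical per-item grades for one note: True = credit given.
human = {"onset": True, "severity": True, "neuro_exam": False, "differential": True}
machine = {"onset": True, "severity": False, "neuro_exam": True, "differential": True}

def compare(human, machine):
    """Categorize each rubric item: agreement, missed credit
    (human yes / machine no), or over-credit (machine yes / human no)."""
    missed = [k for k in human if human[k] and not machine[k]]
    over = [k for k in human if not human[k] and machine[k]]
    agree = sum(human[k] == machine[k] for k in human)
    return {"agreement": agree / len(human), "missed": missed, "over_credited": over}

report = compare(human, machine)
# report["agreement"] is 0.5; "severity" was missed, "neuro_exam" over-credited.
```

Collecting these counts across many simulated encounters is what lets the team quantify accuracy against human graders and target the weakest rubric items for improvement.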
In the future, we want to enter more cases into our software, so that our algorithm can continue learning how to provide feedback. More importantly, we aim to provide real- or near real-time responses to our learners to improve their performance. By combining automated grading with simulation, we hope to tackle bigger challenges, such as improving patient-friendliness of notes.
Rebecca Ebert-Allen is a Research Project Manager at Jump. In this role, she is responsible for managing research projects that study innovation and simulation in healthcare and health professions research. This includes research approvals, study enrollment, data collection, and manuscript development, among other tasks. She’s been a part of the Jump Research team since June 2017.