Improving Doctor-Patient Communication Using Large Language Models - Results from an Experimental Online Study
Mon—HZ_12—Talks2—1502
Presented by: Christian Böffel
This experiment investigates the potential of large language models (LLMs) to improve patient-doctor communication by translating complex medical jargon into comprehensible lay terms, thereby enhancing patients' understanding of and involvement in their healthcare and enabling informed decisions. First, neurologists created a set of four medical notes representing typical cases from a neurology clinic. GPT-4 was then used to translate these notes into layman's terms, and an experimental online study was conducted to examine how laypeople understand and subjectively evaluate the original and translated texts. Each participant was presented with two translated and two original medical notes; the order of presentation and the assignment of notes to conditions were counterbalanced across participants. After each note, participants' comprehension was assessed through content-related questions. Participants were also asked to rate each text in terms of perceived difficulty, uncertainty, empathy, and mental effort. Analyses comparing original and translated notes revealed significant benefits of translation on all measures. The discussion addresses methodological considerations, limitations of translation accuracy, challenges related to prompting, and considerations for integrating this approach into clinical practice.
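The abstract does not report the exact prompt or model settings used for the translation step; purely as an illustration, a GPT-4 translation step of this kind could look like the following minimal sketch (Python, OpenAI SDK). The system prompt, temperature, and function name are assumptions for illustration, not the study's materials.

```python
# Illustrative sketch of a jargon-to-lay-language translation step with GPT-4.
# Prompt wording and settings are assumptions; the study's actual prompt is not reported here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate_to_lay_terms(medical_note: str) -> str:
    """Ask GPT-4 to rephrase a medical note in plain language (hypothetical prompt)."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the following medical note in plain, empathetic language "
                    "that a layperson can understand. Preserve all medically relevant information."
                ),
            },
            {"role": "user", "content": medical_note},
        ],
        temperature=0,  # deterministic output aids reproducibility across participants
    )
    return response.choices[0].message.content
```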
Keywords: Experimental Online Study; Large Language Models; Human-AI Interaction; Human Factors; Engineering Psychology