Can AI Deliver Empathetic Medical Consultations? A Randomized Controlled Trial Comparing Ratings of a Digital Human and a Human Physician in High and Low Empathy Consultations
Abstract
Background: Clinical empathy in medical consultations can improve patient outcomes. However, individual and system-level barriers in healthcare systems limit equitable access to empathic care. Digital humans (DHs) are being used to address these challenges, but no RCT has investigated their ability to deliver empathic consultations compared to human physicians. Aim: The study’s primary aim was to investigate the effects of empathy skills (high or low) and source (DH or human physician) on participants’ perceptions of a brief medical consultation. Method: A factorial RCT with a 2x2 between-groups design was conducted. 124 adults aged 18 years or over were recruited through social media and randomly allocated to one of four conditions: high-empathy human physician, low-empathy human physician, high-empathy DH, or low-empathy DH. Participants completed the experiment online using the Qualtrics platform, where they watched a 4.5-minute video of a consultation for the common cold, with an actor portraying the patient, before completing the study questionnaire. Differences in clinical empathy, trust, competence, warmth, and adherence intention ratings were compared between groups. Results: There was a significant main effect of empathy skills. On average, high-empathy consultations were rated higher than low-empathy consultations on clinical empathy, warmth, trust, competence, and adherence intention (all p < .05). A significant interaction between empathy skills and source was found for clinical empathy ratings (p = .001), with the high-empathy human physician receiving the highest scores. Source significantly affected participants’ perceived adherence intention for the actor (p = .010). No significant differences between the human and DH consultations were found for the other outcomes. Conclusion: Overall, the findings support the integration of DHs into strained healthcare systems. These agents can use high-empathy skills to perform comparably to human physicians in routine consultations.
DHs can utilize clinical empathy to build warm, trusting, and competent relationships that improve the potential for adherence. The study extends empathy-related findings with human physicians to DHs and provides insights into the development of effective agents. Closer inspection of the ethics of using AI in patient care is encouraged. Future research using Large Language Models could investigate whether these results apply to dynamic, real-world patient settings.