Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum

Cited by: 1192
Authors
Ayers, John W. [1 ,2 ]
Poliak, Adam [3 ]
Dredze, Mark [4 ]
Leas, Eric C. [1 ,5 ]
Zhu, Zechariah [1 ]
Kelley, Jessica B. [6 ]
Faix, Dennis J. [7 ]
Goodman, Aaron M. [8 ,9 ]
Longhurst, Christopher A. [10 ]
Hogarth, Michael [10 ,11 ]
Smith, Davey M. [2 ,11 ]
Affiliations
[1] Univ Calif San Diego, Qualcomm Inst, La Jolla, CA 92093 USA
[2] Univ Calif San Diego, Dept Med, Div Infect Dis & Global Publ Hlth, La Jolla, CA USA
[3] Bryn Mawr Coll, Dept Comp Sci, Bryn Mawr, PA USA
[4] Johns Hopkins Univ, Dept Comp Sci, Baltimore, MD USA
[5] Univ Calif San Diego, Herbert Wertheim Sch Publ Hlth & Human Longev Sci, La Jolla, CA USA
[6] Human Longev, La Jolla, CA USA
[7] Naval Hlth Res Ctr, Navy, San Diego, CA USA
[8] Univ Calif San Diego, Dept Med, Div Blood & Marrow Transplantat, La Jolla, CA USA
[9] Univ Calif San Diego, Moores Canc Ctr, La Jolla, CA USA
[10] Univ Calif San Diego, Dept Biomed Informat, La Jolla, CA USA
[11] Univ Calif San Diego, Altman Clin Translat Res Inst, La Jolla, CA USA
Funding
US National Institutes of Health;
Keywords
ERA; IMPACT; CARE;
DOI
10.1001/jamainternmed.2023.1838
Chinese Library Classification
R5 [Internal Medicine];
Discipline Classification Code
100201 [Internal Medicine];
Abstract
IMPORTANCE The rapid expansion of virtual health care has caused a surge in patient messages concomitant with more work and burnout among health care professionals. Artificial intelligence (AI) assistants could potentially aid in answering patient questions by drafting responses that clinicians could then review.

OBJECTIVE To evaluate the ability of an AI chatbot assistant (ChatGPT), released in November 2022, to provide quality and empathetic responses to patient questions.

DESIGN, SETTING, AND PARTICIPANTS In this cross-sectional study, a public and nonidentifiable database of questions from a public social media forum (Reddit's r/AskDocs) was used to randomly draw 195 exchanges from October 2022 in which a verified physician responded to a public question. Chatbot responses were generated by entering the original question into a fresh session (with no prior questions asked in the session) on December 22 and 23, 2022. The original question, along with anonymized and randomly ordered physician and chatbot responses, was evaluated in triplicate by a team of licensed health care professionals. Evaluators chose "which response was better" and judged both "the quality of information provided" (very poor, poor, acceptable, good, or very good) and "the empathy or bedside manner provided" (not empathetic, slightly empathetic, moderately empathetic, empathetic, or very empathetic). Mean outcomes were ordered on a 1 to 5 scale and compared between the chatbot and physicians.

RESULTS Of the 195 questions and responses, evaluators preferred chatbot responses to physician responses in 78.6% (95% CI, 75.0%-81.8%) of the 585 evaluations. Mean (IQR) physician responses were significantly shorter than chatbot responses (52 [17-62] words vs 211 [168-245] words; t = 25.4; P < .001). Chatbot responses were rated of significantly higher quality than physician responses (t = 13.3; P < .001). For instance, the proportion of responses rated as good or very good quality (>= 4) was higher for the chatbot than for physicians (chatbot: 78.5%, 95% CI, 72.3%-84.1%; physicians: 22.1%, 95% CI, 16.4%-28.2%), a 3.6 times higher prevalence of good or very good quality responses for the chatbot. Chatbot responses were also rated significantly more empathetic than physician responses (t = 18.9; P < .001). The proportion of responses rated empathetic or very empathetic (>= 4) was higher for the chatbot than for physicians (chatbot: 45.1%, 95% CI, 38.5%-51.8%; physicians: 4.6%, 95% CI, 2.1%-7.7%), a 9.8 times higher prevalence of empathetic or very empathetic responses for the chatbot.

CONCLUSIONS In this cross-sectional study, a chatbot generated quality and empathetic responses to patient questions posed in an online forum. Further exploration of this technology is warranted in clinical settings, such as using a chatbot to draft responses that physicians could then edit. Randomized trials could assess further whether using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes.
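The prevalence ratios quoted in the results (3.6 and 9.8) follow directly from the reported proportions. A minimal sketch of that arithmetic is shown below; it is not the authors' analysis code, and all figures are simply copied from the abstract.

```python
# Hypothetical sketch (not the authors' code): each reported prevalence ratio
# is the chatbot proportion divided by the physician proportion.

# Proportion of responses rated good or very good quality (>= 4 on the 1-5 scale)
quality_chatbot = 0.785    # 78.5% (95% CI, 72.3%-84.1%)
quality_physician = 0.221  # 22.1% (95% CI, 16.4%-28.2%)

# Proportion of responses rated empathetic or very empathetic (>= 4)
empathy_chatbot = 0.451    # 45.1% (95% CI, 38.5%-51.8%)
empathy_physician = 0.046  # 4.6% (95% CI, 2.1%-7.7%)

print(round(quality_chatbot / quality_physician, 1))  # 3.6
print(round(empathy_chatbot / empathy_physician, 1))  # 9.8
```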
Pages: 589-596
Page count: 8