Monitoring expert system performance using continuous user feedback

Cited by: 15
Authors
Kahn, MG
Steib, SA
Dunagan, WC
Fraser, VJ
Affiliations
[1] WASHINGTON UNIV, SCH MED, DIV GEN MED SCI, QUAL MANAGEMENT SECT, ST LOUIS, MO 63110 USA
[2] WASHINGTON UNIV, SCH MED, DEPT MED, DIV INFECT DIS, ST LOUIS, MO 63110 USA
Keywords
DOI
10.1136/jamia.1996.96310635
Chinese Library Classification
TP [Automation technology; computer technology];
Discipline Code
0812 ;
Abstract
Objective: To evaluate the applicability of metrics collected during routine use to monitor the performance of a deployed expert system. Methods: Two extensive formal evaluations of the GermWatcher (Washington University School of Medicine) expert system were performed approximately six months apart. Deficiencies noted during the first evaluation were corrected via a series of interim changes to the expert system rules, even though the expert system was in routine use. As part of their daily work routine, infection control nurses reviewed expert system output and changed the output results with which they disagreed. The rate of nurse disagreement with expert system output was used as an indirect or surrogate metric of expert system performance between formal evaluations. The results of the second evaluation were used to validate the disagreement rate as an indirect performance measure. Based on continued monitoring of user feedback, expert-system changes incorporated after the second formal evaluation have resulted in additional improvements in performance. Results: The rate of nurse disagreement with GermWatcher output decreased consistently after each change to the program. The second formal evaluation confirmed a marked improvement in the program's performance, justifying the use of the nurses' disagreement rate as an indirect performance metric. Conclusions: Metrics collected during the routine use of the GermWatcher expert system can be used to monitor the performance of the expert system. The impact of improvements to the program can be followed using continuous user feedback without requiring extensive formal evaluations after each modification. When possible, the design of an expert system should incorporate measures of system performance that can be collected and monitored during the routine use of the system.
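The surrogate metric the abstract describes — the fraction of expert-system outputs that reviewing nurses change — is straightforward to compute from routine-use data. The sketch below is a minimal illustration of that idea; the `Review` record, its fields, and the sample data are hypothetical and are not taken from GermWatcher itself.

```python
# Minimal sketch (hypothetical names/data, not the GermWatcher implementation):
# the disagreement rate is the fraction of system outputs that reviewers
# changed during their routine review.
from dataclasses import dataclass

@dataclass
class Review:
    culture_id: str
    system_output: str   # classification produced by the expert system
    nurse_output: str    # classification after the nurse's review

def disagreement_rate(reviews: list[Review]) -> float:
    """Fraction of outputs the reviewers changed (0.0 for an empty period)."""
    if not reviews:
        return 0.0
    changed = sum(1 for r in reviews if r.nurse_output != r.system_output)
    return changed / len(reviews)

# Example review period: two of four outputs were changed by the nurses.
reviews = [
    Review("c1", "nosocomial", "nosocomial"),
    Review("c2", "community", "nosocomial"),   # nurse disagreed
    Review("c3", "nosocomial", "nosocomial"),
    Review("c4", "nosocomial", "community"),   # nurse disagreed
]
print(disagreement_rate(reviews))  # 0.5
```

Tracking this rate per review period (e.g., monthly) gives the continuous performance signal the paper validates against its formal evaluations: a sustained drop after a rule change suggests the change improved agreement without requiring a new formal study.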
Pages: 216-223
Page count: 8