Does automation bias decision-making?

Cited by: 246
Authors
Skitka, LJ
Mosier, KL
Burdick, M
Affiliations
[1] Univ Illinois, Dept Psychol, Chicago, IL 60607 USA
[2] San Francisco State Univ, San Francisco, CA USA
[3] NASA, San Jose State Univ Fdn, Ames Res Ctr, Moffett Field, CA USA
Funding
National Aeronautics and Space Administration (NASA)
DOI
10.1006/ijhc.1999.0252
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology]
Discipline code
0812
Abstract
Computerized system monitors and decision aids are increasingly common additions to critical decision-making contexts such as intensive care units, nuclear power plants and aircraft cockpits. These aids are introduced with the ubiquitous goal of "reducing human error". The present study compared error rates in a simulated flight task with and without a computer that monitored system states and made decision recommendations. Participants in non-automated settings out-performed their counterparts who had a highly but not perfectly reliable automated aid on a monitoring task. Participants with an aid made errors of omission (missed events when not explicitly prompted about them by the aid) and errors of commission (did what an automated aid recommended, even when it contradicted their training and other 100% valid and available indicators). Possible causes and consequences of automation bias are discussed. (C) 1999 Academic Press.
Pages: 991-1006 (16 pages)