Attributions of ethical responsibility by Artificial Intelligence practitioners

Cited by: 80
Authors
Orr, Will [1 ]
Davis, Jenny L. [1 ]
Affiliations
[1] Australian Natl Univ, Sch Sociol, GPO Box 4, Canberra, ACT 0200, Australia
Keywords
Artificial intelligence (AI); accountability; AI ethics; inequality; professions; organizations; TECHNOLOGY; BIAS
DOI
10.1080/1369118X.2020.1713842
Chinese Library Classification
G2 [Information and Knowledge Dissemination]
Discipline Classification
05; 0503
Abstract
Systems based on Artificial Intelligence (AI) are increasingly normalized as part of work, leisure, and governance in contemporary societies. Although ethics in AI has received significant attention, it remains unclear where the burden of responsibility lies. Through twenty-one interviews with AI practitioners in Australia, this research seeks to understand how ethical attributions figure into the professional imagination. As institutionally embedded technical experts, AI practitioners act as a connective tissue linking the range of actors that come in contact with, and have effects upon, AI products and services. Findings highlight that practitioners distribute ethical responsibility across a range of actors and factors, reserving a portion of responsibility for themselves, albeit constrained. Characterized by imbalances of decision-making power and technical expertise, practitioners position themselves as mediators between powerful bodies that set parameters for production; users who engage with products once they leave the proverbial workbench; and AI systems that evolve and develop beyond practitioner control. Distributing responsibility throughout complex sociotechnical networks, practitioners preclude simple attributions of accountability for the social effects of AI. This indicates that AI ethics are not the purview of any singular player but instead, derive from collectivities that require critical guidance and oversight at all stages of conception, production, distribution, and use.
Pages: 719-735
Page count: 17