Transparent to whom? No algorithmic accountability without a critical audience

Cited by: 116
Authors
Kemper, Jakko [1 ]
Kolkman, Daan [2 ]
Affiliations
[1] Univ Amsterdam, Amsterdam Sch Cultural Anal, Spuistr 134, NL-1012 VB Amsterdam, Netherlands
[2] Tech Univ Eindhoven, Jheronimus Acad Data Sci, Eindhoven, Netherlands
Keywords
Data science; algorithms; transparency; algorithmic accountability; algorithmic decision-making; glitch studies; big data; uncertainty; science
DOI
10.1080/1369118X.2018.1477967
Chinese Library Classification (CLC)
G2 [Information and Knowledge Dissemination]
Discipline classification code
05; 0503
Abstract
Big data and data science transform organizational decision-making. We increasingly defer decisions to algorithms because machines have earned a reputation of outperforming us. As algorithms become embedded within organizations, they become more influential and increasingly opaque. Those who create algorithms may make arbitrary decisions in all stages of the 'data value chain', yet these subjectivities are obscured from view. Algorithms come to reflect the biases of their creators, can reinforce established ways of thinking, and may favour some political orientations over others. This is a cause for concern and calls for more transparency in the development, implementation, and use of algorithms in public- and private-sector organizations. We argue that one elementary, yet key, question remains largely undiscussed. If transparency is a primary concern, then to whom should algorithms be transparent? We consider algorithms as socio-technical assemblages and conclude that without a critical audience, algorithms cannot be held accountable.
Pages: 2081-2096
Number of pages: 16