On the ethics of algorithmic decision-making in healthcare

Cited by: 240
Authors
Grote, Thomas [1,2,3]
Berens, Philipp [4]
Affiliations
[1] Univ Tubingen, Eth & Philosophy Lab, Tubingen, Germany
[2] Univ Tubingen, Cluster Excellence Machine Learning New Perspect, Tubingen, Germany
[3] Univ Tubingen, Int Ctr Eth Sci & Human IZEW, Tubingen, Germany
[4] Univ Tubingen, Inst Ophthalm Res, Tubingen, Germany
Keywords
DEEP; MEDICINE; RISK
DOI
10.1136/medethics-2019-105586
Chinese Library Classification (CLC)
B82 [Ethics (Moral Philosophy)]
Abstract
In recent years, a plethora of high-profile scientific publications has reported on machine learning algorithms outperforming clinicians in medical diagnosis and treatment recommendations. This has sparked interest in deploying such algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that, rather than straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at both the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical diagnosis, it comes at the expense of opacity when trying to assess the reliability of a given diagnosis. Drawing on literature in social epistemology and moral responsibility, we argue that the uncertainty in question potentially undermines the epistemic authority of clinicians. Furthermore, we elucidate potential pitfalls of involving machine learning in healthcare with respect to paternalism, moral responsibility and fairness. Finally, we discuss how the deployment of machine learning algorithms might shift the evidentiary norms of medical diagnosis. In this regard, we hope to lay the grounds for further ethical reflection on the opportunities and pitfalls of machine learning for enhancing decision-making in healthcare.
Pages: 205-211
Page count: 7