The Functional Morality of Robots

Citations: 17
Author
Johansson, Linda [1]
Affiliation
[1] Royal Inst Technol, Philosophy, Stockholm, Sweden
Keywords
Functional Morality; Moral Responsibility; Moral Turing Test; Robot Morality; Understanding
DOI
10.4018/jte.2010100105
Chinese Library Classification (CLC)
B82 [Ethics (Moral Philosophy)]
Abstract
It is often argued that a robot cannot be held morally responsible for its actions. The author suggests that one should use the same criteria for robots as for humans regarding the ascription of moral responsibility. When deciding whether humans are moral agents, one should look at their behaviour and listen to the reasons they give for their judgments in order to determine whether they understood the situation properly. The author suggests that this should be done for robots as well. On this view, if a robot passes a moral version of the Turing Test, a Moral Turing Test (MTT), we should hold the robot morally responsible for its actions. This is supported by the impossibility of deciding who actually has (semantic, rather than merely syntactic) understanding of a moral situation, and by two examples: the transfer of a human mind into a computer, and aliens who turn out to be robots.
Pages: 65-73
Number of pages: 9