On the morality of artificial agents

Times Cited: 487
Authors
Floridi, L [1]
Sanders, JW [1]
Affiliation
[1] Univ Oxford, Informat Eth Grp, Oxford OX1 2JD, England
Keywords
artificial agents; computer ethics; levels of abstraction; moral responsibility;
DOI
10.1023/B:MIND.0000035461.63578.9d
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for the concept of moral agent not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not (artificial) agents have mental states, feelings, emotions and so on. By focussing directly on 'mind-less morality' we are able to avoid that question and also many of the concerns of Artificial Intelligence. A vital component in our approach is the 'Method of Abstraction' for analysing the level of abstraction (LoA) at which an agent is considered to act. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. The 'Method of Abstraction' is explained in terms of an 'interface' or set of features or observables at a given LoA. Agenthood, and in particular moral agenthood, depends on a LoA. Our guidelines for agenthood are: interactivity (response to stimulus by change of state), autonomy (ability to change state without stimulus) and adaptability (ability to change the 'transition rules' by which state is changed) at a given LoA. Morality may be thought of as a 'threshold' defined on the observables in the interface determining the LoA under consideration. An agent is morally good if its actions all respect that threshold; and it is morally evil if some action violates it.
That view is particularly informative when the agent constitutes a software or digital system, and the observables are numerical. Finally we review the consequences for Computer Ethics of our approach. In conclusion, this approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without their having to display free will, emotions or mental states, and in social contexts, where systems like organizations can play the role of moral agents. The primary 'cost' of this facility is the extension of the class of agents and moral agents to embrace AAs.
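The abstract's three agenthood criteria (interactivity, autonomy, adaptability) and its threshold view of morality can be sketched in code. The following is an illustrative toy model, not an implementation from the paper: the class names, the single numerical observable, and the specific threshold predicate are all hypothetical choices made here to show how the definitions compose when the observables are numerical.

```python
# Hypothetical sketch of the paper's agenthood criteria and "threshold"
# morality at a fixed level of abstraction (LoA). All names are illustrative.
from dataclasses import dataclass


@dataclass
class Agent:
    """A minimal agent exhibiting the three criteria at one LoA:
    interactivity, autonomy, and adaptability."""
    state: float = 0.0   # the single observable at this LoA
    step: float = 1.0    # the current "transition rule"

    def interact(self, stimulus: float) -> None:
        # Interactivity: response to stimulus by change of state.
        self.state += stimulus

    def act(self) -> float:
        # Autonomy: change of state without external stimulus.
        self.state += self.step
        return self.state

    def adapt(self, new_step: float) -> None:
        # Adaptability: change the transition rule itself.
        self.step = new_step


def morally_good(actions: list[float], threshold: float) -> bool:
    """Morality as a threshold on the observables: the agent is morally
    good iff every action respects the threshold; one violation suffices
    for moral evil."""
    return all(a <= threshold for a in actions)
```

Under this toy reading, judging the same agent at a different LoA simply means choosing different observables and a different threshold predicate, which is why agenthood and moral agenthood are LoA-relative.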
Pages: 349-379
Page count: 31