An Analysis of the Interaction Between Intelligent Software Agents and Human Users

Cited by: 79
Authors
Burr, Christopher [1 ]
Cristianini, Nello [1 ]
Ladyman, James [2 ]
Affiliations
[1] Univ Bristol, Dept Comp Sci, Merchant Venturers Bldg,Woodland Rd, Bristol BS8 1UB, Avon, England
[2] Univ Bristol, Dept Philosophy, Cotham House, Bristol BS6 6JL, Avon, England
Funding
European Research Council;
Keywords
Artificial intelligence; Machine learning; Human-computer interaction; Nudge; Persuasion; Autonomy; SOCIAL NETWORKING; FACEBOOK; PERSUASION; ADDICTION; NEWS;
DOI
10.1007/s11023-018-9479-0
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user's access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality (and deploying concepts from artificial intelligence, behavioural economics, control theory, and game theory), we frame these interactions as instances of an ISA whose reward depends on actions performed by the user. Such agents benefit by steering the user's behaviour towards outcomes that maximise the ISA's utility, which may or may not be aligned with that of the user. Video games, news recommendation aggregation engines, and fitness trackers can all be instances of this general case. Our analysis facilitates distinguishing various subcases of interaction (i.e. deception, coercion, trading, and nudging), as well as second-order effects that might include the possibility for adaptive interfaces to induce behavioural addiction, and/or change in user belief. We present these types of interaction within a conceptual framework, and review current examples of persuasive technologies and the issues that arise from their use. We argue that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, and in a direction that can undermine autonomy and cause further disparity between actions and goals as exemplified by addictive and compulsive behaviour. We discuss some of the ethical, social and legal implications of this technology and argue that it can sometimes exploit and reinforce weaknesses in human beings.
Pages: 735-774
Page count: 40