Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents


Kneer, Markus (2021). Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents. Cognitive Science, 45(10):e13032.

Abstract

The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary people willing to ascribe deceptive intentions to artificial agents? (b) Are they as willing to judge a robot lie as a lie as they would be when human agents engage in verbal deception? (c) Do people blame a lying artificial agent to the same extent as a lying human agent? The response to all three questions is a resounding yes. This, I argue, implies that robot deception and its normative consequences deserve considerably more attention than they presently receive.

Statistics

Citations:
9 citations in Web of Science®
10 citations in Scopus®

Downloads:
81 downloads since deposited on 30 Dec 2021
57 downloads in the last 12 months

Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 01 Faculty of Theology and the Study of Religion > Center for Ethics
06 Faculty of Arts > Institute of Philosophy
Dewey Decimal Classification: 170 Ethics
Scopus Subject Areas: Social Sciences & Humanities > Experimental and Cognitive Psychology
Life Sciences > Cognitive Neuroscience
Physical Sciences > Artificial Intelligence
Uncontrolled Keywords: Artificial Intelligence, Cognitive Neuroscience, Experimental and Cognitive Psychology
Language: English
Date: 1 October 2021
Deposited On: 30 Dec 2021 07:44
Last Modified: 26 Apr 2024 01:38
Publisher: Wiley-Blackwell Publishing, Inc.
ISSN: 0364-0213
OA Status: Hybrid
Free access at: Publisher DOI. An embargo period may apply.
Publisher DOI: https://doi.org/10.1111/cogs.13032
Project Information:
  • Funder: SNSF
  • Grant ID: PZ00P1_179912
  • Project Title: Reading Guilty Minds
  • Content: Published Version
  • Licence: Creative Commons: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)