Playing the Blame Game with Robots


Kneer, Markus; Stuart, Michael T (2021). Playing the Blame Game with Robots. In: HRI '21: ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 8 March 2021 - 11 March 2021. ACM, 407-411.

Abstract

Recent research shows - somewhat astonishingly - that people are willing to ascribe moral blame to AI-driven systems when they cause harm [1]-[4]. In this paper, we explore the moral-psychological underpinnings of these findings. Our hypothesis was that the reason why people ascribe moral blame to AI systems is that they consider them capable of entertaining inculpating mental states (what is called mens rea in the law). To explore this hypothesis, we created a scenario in which an AI system runs a risk of poisoning people by using a novel type of fertilizer. Manipulating the computational (or quasi-cognitive) abilities of the AI system in a between-subjects design, we tested whether people's willingness to ascribe knowledge of a substantial risk of harm (i.e., recklessness) and blame to the AI system varies with those abilities. Furthermore, we investigated whether the ascription of recklessness and blame to the AI system would influence the perceived blameworthiness of the system's user (or owner). In an experiment with 347 participants, we found (i) that people are willing to ascribe blame to AI systems in contexts of recklessness, (ii) that blame ascriptions depend strongly on the willingness to attribute recklessness and (iii) that the latter, in turn, depends on the perceived "cognitive" capacities of the system. Furthermore, our results suggest (iv) that the higher the computational sophistication of the AI system, the more blame is shifted from the human user to the AI system.
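
The analyses behind findings (i)-(iv) are not detailed on this page. As a purely illustrative sketch (variable names, scales, and effect sizes below are assumptions, not the authors'), the reported pattern, blame tracking ascribed recklessness, which in turn tracks the system's perceived "cognitive" capacities, could be probed with simple OLS regressions on synthetic data:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 347  # sample size reported in the abstract

    # Hypothetical variables (names and scales are assumptions for illustration):
    # capacity - manipulated computational sophistication of the AI (0 = low, 1 = high)
    # reckless - ascribed recklessness, i.e. knowledge of a substantial risk of harm (1-7)
    # blame_ai - blame ascribed to the AI system (1-7)
    capacity = rng.integers(0, 2, n)
    reckless = np.clip(3 + 1.5 * capacity + rng.normal(0, 1, n), 1, 7)
    blame_ai = np.clip(2 + 0.8 * reckless + rng.normal(0, 1, n), 1, 7)
    df = pd.DataFrame({"capacity": capacity, "reckless": reckless, "blame_ai": blame_ai})

    # Does perceived capacity predict recklessness ascriptions (cf. finding iii)?
    print(smf.ols("reckless ~ capacity", df).fit().summary())
    # Do recklessness ascriptions predict blame to the AI system, over and above
    # the capacity manipulation (cf. finding ii)?
    print(smf.ols("blame_ai ~ reckless + capacity", df).fit().summary())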

Statistics

Citations

8 citations in Web of Science®
12 citations in Scopus®

Downloads

51 downloads since deposited on 29 Dec 2021
51 downloads in the last 12 months

Additional indexing

Item Type: Conference or Workshop Item (Paper), refereed, original work
Communities & Collections: 01 Faculty of Theology > Center for Ethics
06 Faculty of Arts > Institute of Philosophy
Dewey Decimal Classification: 170 Ethics
Language: English
Event End Date: 11 March 2021
Deposited On: 29 Dec 2021 08:37
Last Modified: 20 Jan 2023 07:31
Publisher: ACM
ISBN: 9781450382908
OA Status: Green
Publisher DOI: https://doi.org/10.1145/3434074.3447202
Project Information:
  • Funder: SNSF
  • Grant ID: PZ00P1_179912
  • Project Title: Reading Guilty Minds
  • Project Website: https://www.guiltymindslab.com/
  • Funder: SNSF
  • Grant ID: PZ00P1_179986
  • Project Title: Imagination in Science: What is it, how do we learn from it, and how can we improve it?
  • Content: Published Version
  • Language: English