
Trust does not need to be human: it is possible to trust medical AI


Ferrario, Andrea; Loi, Michele; Viganò, Eleonora (2020). Trust does not need to be human: it is possible to trust medical AI. Journal of Medical Ethics: Epub ahead of print.

Abstract

In his recent article ‘Limits of trust in medical AI,’ Hatherley argues that, if we believe that the motivations that are usually recognised as relevant for interpersonal trust have to be applied to interactions between humans and medical artificial intelligence, then these systems do not appear to be the appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical artificial intelligence (AI), if one refrains from simply assuming that trust describes human–human interactions. To do so, we consider an account of trust that distinguishes trust from reliance in a way that is compatible with trusting non-human agents. In this account, to trust a medical AI is to rely on it with little monitoring and control of the elements that make it trustworthy. This attitude does not imply specific properties in the AI system that in fact only humans can have. This account of trust is applicable, in particular, to all cases where a physician relies on the medical AI predictions to support his or her decision making.

Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 04 Faculty of Medicine > Institute of Biomedical Ethics and History of Medicine
Dewey Decimal Classification: 610 Medicine & health
Scopus Subject Areas: Health Sciences > Issues, Ethics and Legal Aspects; Social Sciences & Humanities > Health (social science); Social Sciences & Humanities > Arts and Humanities (miscellaneous); Health Sciences > Health Policy
Uncontrolled Keywords: trust, medical AI, human–AI interaction
Language: English
Date: 25 November 2020
Deposited On: 05 Feb 2021 16:49
Last Modified: 06 Feb 2021 21:03
Publisher: BMJ Publishing Group
ISSN: 0306-6800
OA Status: Hybrid
Free access at: Publisher DOI. An embargo period may apply.
Publisher DOI: https://doi.org/10.1136/medethics-2020-106922

Download

Hybrid Open Access

Content: Published Version
Filetype: PDF
Size: 225kB
Licence: Creative Commons: Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)