
In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions


Ferrario, Andrea; Loi, Michele; Vigano, Eleonora (2019). In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions. Philosophy & Technology:1-17.

Abstract

Machine learning (ML) models and algorithms, the real engines of the artificial intelligence (AI) revolution, are nowadays embedded in many services and products around us. We argue that, as a society, it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs, in order to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In this contribution, we focus on selected ethical investigations around AI by proposing an incremental model of trust that can be applied to both human-human and human-AI interactions. Starting with a quick overview of the existing accounts of trust, with special attention to Taddeo’s concept of “e-trust,” we discuss all the components of the proposed model and the reasons to trust in human-AI interactions in an example of relevance for business organizations. We end this contribution with an analysis of the epistemic and pragmatic reasons for trust in human-AI interactions and with a discussion of the kinds of normativity involved in the trustworthiness of AIs.



Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 04 Faculty of Medicine > Institute of Biomedical Ethics and History of Medicine
Dewey Decimal Classification: 610 Medicine & health
Scopus Subject Areas: Social Sciences & Humanities > Philosophy; Social Sciences & Humanities > History and Philosophy of Science
Uncontrolled Keywords: Philosophy, History and Philosophy of Science
Language: English
Date: 23 October 2019
Deposited On: 12 Dec 2019 12:32
Last Modified: 29 Jul 2020 11:54
Publisher: Springer
ISSN: 2210-5433
OA Status: Closed
Publisher DOI: https://doi.org/10.1007/s13347-019-00378-3
Project Information:
  • Funder: H2020
  • Grant ID: 700540
  • Project Title: CANVAS - Constructing an Alliance for Value-driven Cybersecurity
