ZORA (Zurich Open Repository and Archive)

Justifying Our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach

Ferrario, Andrea (2024). Justifying Our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach. Science and Engineering Ethics, 30(6):55.

Abstract

We address an open problem in the philosophy of artificial intelligence (AI): how to justify the epistemic attitudes we hold towards the trustworthiness of AI systems. The problem is important, as providing reasons to believe that AI systems are worthy of trust is key to relying on these systems appropriately in human-AI interactions. In our approach, we consider the trustworthiness of an AI to be a time-relative, composite property of the system with two distinct facets: the actual trustworthiness of the AI, and the perceived trustworthiness of the system as assessed by its users while interacting with it. We show that credences, namely beliefs held with a degree of confidence, are the appropriate attitude for capturing the facets of the trustworthiness of an AI over time. We then introduce a reliabilistic account, derived from Tang's probabilistic theory of justified credence, that provides justification for credences in the trustworthiness of AI. Our account stipulates that a credence in the trustworthiness of an AI system is justified if and only if it is caused by an assessment process that tends to produce a high proportion of credences for which the actual and perceived trustworthiness of the AI are calibrated. This approach informs research on the ethics of AI and human-AI interactions by providing actionable recommendations on how to measure the reliability of the process through which users perceive the trustworthiness of the system, on how to investigate its calibration to the actual levels of trustworthiness of the AI, and on users' appropriate reliance on the system.

Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 04 Faculty of Medicine > Institute of Biomedical Ethics and History of Medicine
Dewey Decimal Classification: 610 Medicine & health
Scopus Subject Areas:
  • Social Sciences & Humanities > Health (social science)
  • Health Sciences > Issues, Ethics and Legal Aspects
  • Health Sciences > Health Policy
  • Social Sciences & Humanities > Management of Technology and Innovation
Language: English
Date: 21 November 2024
Deposited On: 24 Jan 2025 07:47
Last Modified: 30 Jun 2025 02:08
Publisher: Springer
ISSN: 1353-3452
OA Status: Hybrid
Free access at: Publisher DOI. An embargo period may apply.
Publisher DOI: https://doi.org/10.1007/s11948-024-00522-z
PubMed ID: 39570550
Project Information:
  • Funder: University of Zurich
Download PDF: 'Justifying Our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach'
  • Content: Published Version
  • Language: English
  • Licence: Creative Commons: Attribution 4.0 International (CC BY 4.0)
