ZORA (Zurich Open Repository and Archive)

The perils and promises of fact-checking with large language models

Quelle, Dorian; Bovet, Alexandre (2024). The perils and promises of fact-checking with large language models. Frontiers in Artificial Intelligence, 7:01-14.

Abstract

Automated fact-checking, using machine learning to verify claims, has grown vital as misinformation spreads beyond human fact-checking capacity. Large language models (LLMs) like GPT-4 are increasingly trusted to write academic papers, lawsuits, and news articles and to verify information, emphasizing their role in discerning truth from falsehood and the importance of being able to verify their outputs. Understanding the capacities and limitations of LLMs in fact-checking tasks is therefore essential for ensuring the health of our information ecosystem. Here, we evaluate the use of LLM agents in fact-checking by having them phrase queries, retrieve contextual data, and make decisions. Importantly, in our framework, agents explain their reasoning and cite the relevant sources from the retrieved context. Our results show the enhanced prowess of LLMs when equipped with contextual information. GPT-4 outperforms GPT-3, but accuracy varies based on query language and claim veracity. While LLMs show promise in fact-checking, caution is essential due to inconsistent accuracy. Our investigation calls for further research, fostering a deeper comprehension of when agents succeed and when they fail.
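The abstract describes a three-step agent pipeline: phrase a search query from the claim, retrieve contextual documents, then decide while explaining the reasoning and citing the retrieved sources. A minimal sketch of that loop is below; the keyword retriever and rule-based "judgment" are toy stand-ins for the LLM calls, and all names are illustrative, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    label: str        # e.g. "supported" or "unverified"
    reasoning: str    # the agent's explanation of its decision
    sources: list = field(default_factory=list)  # cited document ids

def phrase_query(claim: str) -> str:
    # In the paper's framework an LLM rewrites the claim into a search
    # query; lower-casing stands in for that step here.
    return claim.lower()

def retrieve(query: str, corpus: dict) -> dict:
    # Toy keyword retriever over a {doc_id: text} corpus, standing in
    # for a real search backend.
    return {doc_id: text for doc_id, text in corpus.items()
            if any(tok in text.lower() for tok in query.split())}

def decide(claim: str, context: dict) -> Verdict:
    # Stand-in for the LLM's verdict: the claim counts as supported if
    # any retrieved document shares a term with it, and the cited
    # sources are exactly the retrieved document ids.
    if context:
        return Verdict("supported",
                       f"Retrieved context mentions terms from {claim!r}.",
                       sources=sorted(context))
    return Verdict("unverified", "No relevant context retrieved.")

def fact_check(claim: str, corpus: dict) -> Verdict:
    query = phrase_query(claim)
    context = retrieve(query, corpus)
    return decide(claim, context)
```

Run against a two-document corpus, a claim about the Eiffel Tower comes back "supported" with the matching document cited, while an unmatchable claim comes back "unverified" — mirroring the abstract's point that the agent's accuracy hinges on the context it retrieves.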

Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Mathematics
08 Research Priority Programs > Digital Society Initiative
07 Faculty of Science > Department of Mathematical Modeling and Machine Learning
Dewey Decimal Classification: 510 Mathematics
Scopus Subject Areas: Physical Sciences > Artificial Intelligence
Uncontrolled Keywords: Artificial Intelligence, fact-checking, misinformation, large language models, human computer interaction, natural language processing, low-resource languages
Language: English
Date: 7 February 2024
Deposited On: 24 Apr 2024 10:43
Last Modified: 27 Feb 2025 02:43
Publisher: Frontiers Research Foundation
ISSN: 2624-8212
OA Status: Gold
Free access at: Publisher DOI. An embargo period may apply.
Publisher DOI: https://doi.org/10.3389/frai.2024.1341697
PubMed ID: 38384276
Download PDF: 'The perils and promises of fact-checking with large language models'
  • Content: Published Version
  • Language: English
  • Licence: Creative Commons: Attribution 4.0 International (CC BY 4.0)

Statistics

Citations

6 citations in Web of Science®
4 citations in Scopus®

Downloads

12 downloads since deposited on 24 Apr 2024
12 downloads in the last 12 months
