Publication: The perils and promises of fact-checking with large language models
Citations
Quelle, D., & Bovet, A. (2024). The perils and promises of fact-checking with large language models. Frontiers in Artificial Intelligence, 7, 01–14. https://doi.org/10.3389/frai.2024.1341697
Abstract
Automated fact-checking, using machine learning to verify claims, has grown vital as misinformation spreads beyond human fact-checking capacity. Large language models (LLMs) like GPT-4 are increasingly trusted to write academic papers, lawsuits, and news articles and to verify information, emphasizing their role in discerning truth from falsehood and the importance of being able to verify their outputs. Understanding the capacities and limitations of LLMs in fact-checking tasks is therefore essential for ensuring the health of our information ecosystem.
Additional indexing
Creators (Authors): Quelle, D.; Bovet, A.
Volume: 7
Page range/Item number: 01–14
Item Type: Journal Article
Language: English
Publication date: 2024
Publisher DOI: https://doi.org/10.3389/frai.2024.1341697