Publication: Source framing triggers systematic bias in large language models
Date
Citations
Germani, F., & Spitale, G. (2025). Source framing triggers systematic bias in large language models. Science Advances, 11(45), eadz2924. https://doi.org/10.1126/sciadv.adz2924
Abstract
Large language models (LLMs) are increasingly used to evaluate text, raising urgent questions about whether their judgments are consistent, unbiased, and robust to framing effects. Here, we examine inter- and intramodel agreement across four state-of-the-art LLMs tasked with evaluating 4800 narrative statements on 24 different topics of social, political, and public health relevance, for a total of 192,000 assessments. We manipulate the disclosed source of each statement to assess how attribution to either another LLM or a human author […]
Additional indexing
Creators (Authors)
Volume
Number
Page range/Item number
Item Type
Dewey Decimal Classification
Language
Publication date
Date available
ISSN or e-ISSN
OA Status
Free Access at
Publisher DOI