ZORA (Zurich Open Repository and Archive)

On language models’ cognitive biases in reading time prediction

Haller, Patrick; Bolliger, Lena S; Jäger, Lena A (2024). On language models’ cognitive biases in reading time prediction. In: ICML 2024 Workshop on LLMs and Cognition, Vienna, Austria, 27 July 2024, s.n.

Abstract

To date, most investigations of surprisal and entropy effects in reading have been conducted at the group level, disregarding individual differences. In this work, we revisit the predictive power (PP) of different language models’ (LMs’) surprisal and entropy measures on human reading time data by incorporating information about language users’ cognitive capacities. To do so, we assess the PP of surprisal and entropy estimated from generative LMs on reading data from subjects for whom scores from psychometric tests targeting different cognitive domains are available. Specifically, we investigate whether modulating surprisal and entropy relative to the readers’ cognitive scores increases the prediction accuracy of reading times, and we examine whether LMs exhibit systematic biases in the prediction of reading times for cognitively high- or low-scoring groups, allowing us to investigate what type of psycholinguistic subject a given LM emulates. We find that incorporating cognitive capacities mostly increases the PP of surprisal and entropy on reading times, and that individuals who score high in cognitive tests are less sensitive to predictability effects. Our results further suggest that the analyzed LMs emulate readers with lower verbal intelligence, implying that for a given target group (i.e., individuals with high verbal intelligence), these LMs provide less accurate predictability estimates. Finally, our study underlines the value of incorporating individual-level information to gain insights into how LMs operate internally.
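The abstract describes comparing the predictive power of surprisal-based regressions with and without an interaction between surprisal and a reader's cognitive test score. The sketch below is not the authors' code: the column names (rt, surprisal, cognitive_score, word_length), the simulated data, and the use of ordinary least squares with a delta log-likelihood comparison are illustrative assumptions standing in for whatever modeling choices the paper actually makes.

```python
# Minimal illustrative sketch (not from the paper): does letting the surprisal
# effect vary with a per-subject cognitive score improve reading-time prediction?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Toy word-level data; all variables are placeholders for real eye-tracking
# measures, LM surprisal estimates, and z-scored psychometric test scores.
df = pd.DataFrame({
    "word_length": rng.integers(2, 12, n),
    "surprisal": rng.gamma(2.0, 2.0, n),
    "cognitive_score": rng.normal(0.0, 1.0, n),
})
df["rt"] = (
    180
    + 8 * df["word_length"]
    + 12 * df["surprisal"]
    - 4 * df["surprisal"] * df["cognitive_score"]  # weaker predictability effect for high scorers
    + rng.normal(0, 25, n)
)

# Baseline: reading times explained by a low-level covariate only.
baseline = smf.ols("rt ~ word_length", data=df).fit()

# Group-level model: add surprisal, ignoring individual differences.
group = smf.ols("rt ~ word_length + surprisal", data=df).fit()

# Individual-level model: surprisal effect modulated by the cognitive score.
individual = smf.ols("rt ~ word_length + surprisal * cognitive_score", data=df).fit()

# Predictive power as per-token gain in log-likelihood over the baseline.
for name, model in [("group", group), ("individual", individual)]:
    delta_llh = (model.llf - baseline.llf) / n
    print(f"{name:>10}: delta log-likelihood per token = {delta_llh:.4f}")
```

On the simulated data, the individual-level model recovers a larger per-token log-likelihood gain than the group-level model, which is the pattern the abstract reports for most of its surprisal and entropy measures; the actual study may use held-out evaluation or mixed-effects models rather than this in-sample OLS comparison.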

Additional indexing

Item Type: Conference or Workshop Item (Paper), refereed, original work
Communities & Collections: 06 Faculty of Arts > Institute of Computational Linguistics
06 Faculty of Arts > Zurich Center for Linguistics
Dewey Decimal Classification: 410 Linguistics
000 Computer science, knowledge & systems
Language: English, German
Event End Date: 27 July 2024
Deposited On: 07 Jan 2025 11:00
Last Modified: 30 Jan 2025 09:27
Publisher: s.n.
Series Name: ICML Workshop on Large Language Models and Cognition
OA Status: Green
Free access at: Official URL. An embargo period may apply.
Official URL: https://openreview.net/forum?id=io5QAglkER
Full text (PDF): 'On language models’ cognitive biases in reading time prediction'
  • Content: Published Version
  • Language: English
  • Licence: Creative Commons: Attribution 4.0 International (CC BY 4.0)


Statistics

Downloads

3 downloads since deposited on 07 Jan 2025
3 downloads in the last 12 months

