ZORA (Zurich Open Repository and Archive)

Surprisal from language models can predict ERPs in processing predicate-argument structures only if enriched by an Agent Preference principle

Huber, Eva; Sauppe, Sebastian; Isasi-Isasmendi, Arrate; Bornkessel-Schlesewsky, Ina; Merlo, Paola; Bickel, Balthasar (2024). Surprisal from language models can predict ERPs in processing predicate-argument structures only if enriched by an Agent Preference principle. Neurobiology of language, 5(1):167-200.

Abstract

Language models based on artificial neural networks increasingly capture key aspects of how humans process sentences. Most notably, model-based surprisals predict event-related potentials such as N400 amplitudes during parsing. Assuming that these models represent realistic estimates of human linguistic experience, their success in modelling language processing raises the possibility that the human processing system relies on no other principles than the general architecture of language models and on sufficient linguistic input. Here, we test this hypothesis on N400 effects observed during the processing of verb-final sentences in German, Basque, and Hindi. By stacking Bayesian generalised additive models, we show that, in each language, N400 amplitudes and topographies in the region of the verb are best predicted when model-based surprisals are complemented by an Agent Preference principle that transiently interprets initial role-ambiguous NPs as agents, leading to reanalysis when this interpretation fails. Our findings demonstrate the need for this principle independently of usage frequencies and structural differences between languages. The principle has an unequal force, however. Compared to surprisal, its effect is weakest in German, stronger in Hindi, and still stronger in Basque. This gradient is correlated with the extent to which grammars allow unmarked NPs to be patients, a structural feature that boosts reanalysis effects. We conclude that language models gain more neurobiological plausibility by incorporating an Agent Preference. Conversely, theories of human processing profit from incorporating surprisal estimates in addition to principles like the Agent Preference, which arguably have distinct evolutionary roots.
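Note on the surprisal measure discussed in the abstract: surprisal is the negative log-probability of a word given its preceding context, -log P(word | context), as estimated by a language model. The sketch below is not the authors' pipeline (the study used German, Basque, and Hindi materials and Bayesian generalised additive models on top of the surprisal estimates); it merely illustrates, under the assumption of an English GPT-2 model from the Hugging Face transformers library, how per-token surprisal can be extracted from a causal language model.

```python
# Minimal sketch: per-token surprisal from a causal language model.
# Assumption (illustration only): English GPT-2 via Hugging Face transformers;
# the study itself used language-specific models for German, Basque, and Hindi.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def token_surprisals(sentence: str):
    """Return (token, surprisal) pairs, surprisal = -log2 P(token | left context)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                          # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)   # predictions for positions 1..n-1
    targets = ids[:, 1:]                                    # tokens actually observed
    ll = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # log P(token | context)
    surprisal_bits = -ll / torch.log(torch.tensor(2.0))     # convert nats to bits
    tokens = tokenizer.convert_ids_to_tokens(targets[0].tolist())
    return list(zip(tokens, surprisal_bits[0].tolist()))

print(token_surprisals("The goat was chased by the farmer."))
```

The first token receives no surprisal value because it has no left context; in the paper, such estimates are aggregated over the verb region and entered as predictors of N400 amplitude.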

Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 06 Faculty of Arts > Department of Comparative Language Science;
  Special Collections > NCCR Evolving Language;
  Special Collections > Centers of Competence > Center for the Interdisciplinary Study of Language Evolution;
  06 Faculty of Arts > Zurich Center for Linguistics
Dewey Decimal Classification: 150 Psychology; 490 Other languages; 890 Other literatures; 410 Linguistics
Uncontrolled Keywords: Neurology, Linguistics and Language
Language: English
Date: 1 April 2024
Deposited On: 20 Dec 2023 08:35
Last Modified: 26 Feb 2025 02:41
Publisher: MIT Press
ISSN: 2641-4368
Additional Information: Special Issue: Cognitive Computational Neuroscience of Language
OA Status: Gold
Publisher DOI: https://doi.org/10.1162/nol_a_00121
Download PDF: 'Surprisal from language models can predict ERPs in processing predicate-argument structures only if enriched by an Agent Preference principle'
  • Content: Published Version
  • Language: English
  • Licence: Creative Commons: Attribution 4.0 International (CC BY 4.0)

Statistics

Citations

4 citations in Web of Science®
5 citations in Scopus®

Downloads

39 downloads since deposited on 20 Dec 2023
36 downloads in the last 12 months
