Model-free reinforcement learning operates over information stored in working-memory to drive human choices


Feher da Silva, Carolina; Yao, Yuan-Wei; Hare, Todd A (2017). Model-free reinforcement learning operates over information stored in working-memory to drive human choices. bioRxiv 107698, Cold Spring Harbor Laboratory.

Abstract

Model-free learning creates stimulus-response associations, but are there limits to the types of stimuli it can operate over? Most experiments on reward-learning have used discrete sensory stimuli, but there is no algorithmic reason to restrict model-free learning to external stimuli, and theories suggest that model-free processes may operate over highly abstract concepts and goals. Our study aimed to determine whether model-free learning can operate over environmental states defined by information held in working memory. We compared the data from human participants in two conditions that presented learning cues either simultaneously or as a temporal sequence that required working memory. There was a significant influence of model-free learning in the working memory condition. Moreover, both groups showed greater model-free effects than simulated model-based agents. Thus, we show that model-free learning processes operate not just in parallel, but also in cooperation with canonical executive functions such as working memory to support behavior.
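For readers less familiar with the distinction, the kind of model-free learning tested here can be illustrated by a standard Q-learning update, in which a cached value for each state-action pair is nudged toward the obtained reward. The sketch below is purely illustrative and is not the computational model fitted in the paper; the cue encoding (a tuple of sequentially presented cues standing in for the working-memory condition), the action labels, and all parameter values are assumptions made only for this example.

```python
# Illustrative model-free (Q-learning) value update over states defined by cue
# information. NOTE: a generic sketch, not the model fitted in the paper; the
# cue encoding, action names, and parameters are assumed for illustration only.
from collections import defaultdict
import random

alpha = 0.1              # learning rate (assumed value)
q = defaultdict(float)   # cached stimulus-response (state-action) values

def choose(state, actions, epsilon=0.1):
    """Epsilon-greedy choice over cached values (illustrative policy)."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def update(state, action, reward):
    """Model-free update: move the cached value toward the obtained reward."""
    q[(state, action)] += alpha * (reward - q[(state, action)])

# In a working-memory condition, the "state" can be the remembered cue sequence
# rather than a single on-screen stimulus:
state = ("cue_A", "cue_B")                 # cues shown one after another, held in memory
action = choose(state, ["left", "right"])  # hypothetical response options
update(state, action, reward=1.0)          # hypothetical rewarded outcome
```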

Statistics

Downloads

103 downloads since deposited on 09 Feb 2018
8 downloads in the last 12 months

Additional indexing

Other titles: Can model-free reinforcement learning operate over information stored in working-memory?
Item Type: Working Paper
Communities & Collections: 03 Faculty of Economics > Department of Economics
Dewey Decimal Classification: 330 Economics
Language: English
Date: 2017
Deposited On: 09 Feb 2018 09:42
Last Modified: 22 Sep 2023 13:14
Series Name: bioRxiv
Number of Pages: 24
OA Status: Green
Publisher DOI: https://doi.org/10.1101/107698
  • Content: Published Version
  • Licence: Creative Commons: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)