
Pruning representations in a distributed model of working memory: a mechanism for refreshing and removal?


Shepherdson, Peter; Oberauer, Klaus (2018). Pruning representations in a distributed model of working memory: a mechanism for refreshing and removal? Annals of the New York Academy of Sciences, 1424(1):221-238.

Abstract

Substantial behavioral evidence suggests that attention plays an important role in working memory. Frequently, attention is characterized as enhancing representations by increasing their strength or activation level. Despite the intuitive appeal of this idea, using attention to strengthen representations in computational models can lead to unexpected outcomes. Representational strengthening frequently leads to worse, rather than better, performance, contradicting behavioral results. Here, we propose an alternative to a pure strengthening account, in which attention is used to selectively strengthen useful and weaken less useful components of distributed memory representations, thereby pruning the representations. We use a simple sampling algorithm to implement this pruning mechanism in a computational model of working memory. Our simulations show that pruning representations in this manner leads to improvements in performance compared with a lossless (i.e., decay-free) baseline condition, for both discrete recall (e.g., of a list of words) and continuous reproduction (e.g., of an array of colors). Pruning also offers a potential explanation of why a retro-cue drawing attention to one memory item during the retention interval improves performance. These results indicate that a pruning mechanism could provide a viable alternative to pure strengthening accounts of attention to representations in working memory.
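The abstract describes pruning as a sampling process over the components of a distributed representation: useful (strong) components are selectively strengthened while less useful (weak) ones are weakened. As a purely illustrative sketch of that sampling idea — not the authors' actual model, and with all names and parameters invented here — one could resample components in proportion to their strength, so that weak, noise-dominated components often receive zero samples and drop out:

```python
import numpy as np

rng = np.random.default_rng(0)

def prune(representation, n_samples=100):
    """Illustrative pruning-by-sampling (hypothetical, not the paper's algorithm).

    Each of the n_samples draws selects one component with probability
    proportional to its absolute strength. The pruned representation is
    rebuilt from the draw counts, so weak components frequently receive
    zero draws (removal) while strong components are retained and
    reinforced (strengthening).
    """
    p = np.abs(representation)
    p = p / p.sum()
    counts = rng.multinomial(n_samples, p)    # resample components by strength
    pruned = np.sign(representation) * counts  # rebuild from the draw counts
    # Rescale so the overall strength of the representation is unchanged.
    return pruned * (np.linalg.norm(representation) / np.linalg.norm(pruned))

# Demo: a sparse "signal" pattern corrupted by additive noise,
# standing in for a distributed memory representation.
signal = np.zeros(50)
signal[:5] = 1.0
noisy = signal + rng.normal(0.0, 0.2, size=50)

pruned = prune(noisy)
```

Because weak components tend to draw zero samples, the pruned vector is typically sparser than the noisy input while the strong signal components survive. This is only a schematic of the sampling intuition; the model in the paper operates on distributed representations in a working memory architecture and is considerably more elaborate.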


Statistics

Citations

2 citations in Web of Science®
2 citations in Scopus®


Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 06 Faculty of Arts > Institute of Psychology
Dewey Decimal Classification: 150 Psychology
Scopus Subject Areas: Life Sciences > General Neuroscience; Life Sciences > General Biochemistry, Genetics and Molecular Biology; Social Sciences & Humanities > History and Philosophy of Science
Language: English
Date: 23 April 2018
Deposited On: 12 Sep 2018 11:31
Last Modified: 26 Jan 2022 17:25
Publisher: Wiley-Blackwell Publishing, Inc.
ISSN: 0077-8923
OA Status: Closed
Publisher DOI: https://doi.org/10.1111/nyas.13659
PubMed ID: 29683491
Full text not available from this repository.