Local Structure Helps Learning Optimized Automata in Recurrent Neural Networks


Binas, J; Indiveri, G; Pfeiffer, M (2015). Local Structure Helps Learning Optimized Automata in Recurrent Neural Networks. In: The International Joint Conference on Neural Networks (IJCNN) 2015, Killarney, Ireland, 11 July 2015 - 17 July 2015, 1-7.

Abstract

Deterministic behavior can be modeled conveniently in the framework of finite automata. We present a recurrent neural network model based on biologically plausible circuit motifs that can learn deterministic transition models from given input sequences. Furthermore, we introduce simple structural constraints on the connectivity that are inspired by biology. Simulation results show that this leads to substantial improvements in training time and to more efficient use of resources in the converged system. Previous work has shown how specific instances of finite-state machines (FSMs) can be synthesized in recurrent neural networks by interconnecting multiple soft winner-take-all (SWTA) circuits - small circuits that can faithfully reproduce many computational properties of cortical networks. We extend this framework with a reinforcement learning mechanism that learns correct state transitions as input and reward signals are provided. The network not only learns a model of the observed sequences and encodes it in the recurrent synaptic weights, it also finds solutions that are close to optimal in the number of states required to model the target system, leading to efficient scaling behavior as the size of the target problem increases.
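
The building block referred to in the abstract is the soft winner-take-all (SWTA) circuit: a small population in which units receive excitation and share a common inhibitory signal, so that the most strongly driven unit comes to dominate while the others are suppressed. The following is a minimal, hypothetical sketch of generic rate-based SWTA dynamics; the update rule and parameter values are illustrative assumptions and do not reproduce the network model or learning rule of the paper.

```python
import numpy as np

# Illustrative sketch of rate-based soft winner-take-all (SWTA) dynamics.
# The update rule and parameters are assumptions for illustration only;
# they are not taken from the paper.

def swta_step(x, inputs, w_exc=1.2, w_inh=1.0, dt=0.1):
    """One Euler step: each unit receives its external input plus weak
    self-excitation, minus a global inhibition term proportional to the
    total population activity. Rates are rectified at zero."""
    inhibition = w_inh * x.sum()
    drive = np.maximum(0.0, inputs + w_exc * x - inhibition)
    return x + dt * (-x + drive)

# Toy usage: the most strongly driven unit ends up dominating the
# population, while the other units are suppressed toward zero.
x = np.zeros(4)
inputs = np.array([0.2, 1.0, 0.4, 0.1])
for _ in range(300):
    x = swta_step(x, inputs)
print(np.round(x, 3))  # highest rate at index 1, the "winning" unit
```

Roughly speaking, in the FSM construction the abstract refers to, the identity of the winning unit in such a population can encode the current automaton state, and recurrent connections between SWTA circuits implement input-dependent state transitions; the proposed reinforcement mechanism then adjusts those recurrent weights from input and reward signals.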


Downloads

10 downloads since deposited on 19 Feb 2016
8 downloads in the last 12 months

Additional indexing

Item Type: Conference or Workshop Item (Speech), not refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Language: English
Event End Date: 17 July 2015
Deposited On: 19 Feb 2016 12:37
Last Modified: 17 Aug 2017 14:18
Publisher: IEEE Xplore
Free access at: Publisher DOI. An embargo period may apply.
Publisher DOI: https://doi.org/10.1109/IJCNN.2015.7280714
Related URLs: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=7280714&tag=1 (Publisher)

Download

Content: Accepted Version
Filetype: PDF
Size: 258kB
