
Digital Multiplier-Less Spiking Neural Network Architecture of Reinforcement Learning in a Context-Dependent Task


Asgari, Hajar; Maybodi, Babak Mazloom-Nezhad; Kreiser, Raphaela; Sandamirskaya, Yulia (2020). Digital Multiplier-Less Spiking Neural Network Architecture of Reinforcement Learning in a Context-Dependent Task. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 10(4):498-511.

Abstract

Neuromorphic engineers develop event-based spiking neural networks (SNNs) in hardware. These SNNs resemble the dynamics of biological neurons more closely than conventional artificial neural networks and achieve higher efficiency thanks to the event-based, asynchronous nature of their processing. Learning in hardware SNNs, however, remains challenging: conventional supervised learning methods cannot be applied directly because the event-based activation of spiking neurons is non-differentiable. For this reason, learning in SNNs is currently an active research topic. Reinforcement learning (RL) is a particularly promising learning method for neuromorphic implementation, especially for the control of autonomous agents. An SNN realization of a bio-inspired RL model is the focus of this work. In particular, we propose a new digital multiplier-less hardware implementation of an SNN with RL capability and show how the proposed network can learn stimulus-response associations in a context-dependent task. The task is inspired by biological experiments that study RL in animals. The architecture is described using the standard digital design flow and uses power- and space-efficient cores. The proposed hardware SNN model is compared both to data from animal experiments and to a computational model. We also compare against the behavioral experiments using a robot, demonstrating the learning capability in hardware in a closed sensory-motor loop.
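To illustrate the two ideas the abstract combines — multiplier-less neuron dynamics and reward-driven learning of stimulus-response weights — the sketch below shows a common hardware-friendly pattern: replacing the membrane-decay multiplication of a leaky integrate-and-fire (LIF) neuron with an arithmetic right shift, and gating shift-sized weight updates by a reward signal. This is a minimal software sketch under assumed parameters (`shift`, `v_thresh`, `w_max` are hypothetical), not the paper's actual architecture.

```python
def lif_step(v, in_current, shift=4, v_thresh=100):
    """One multiplier-less LIF update: leak via right shift, then integrate.

    The decay v *= (1 - 1/2**shift) becomes v -= v >> shift, so no hardware
    multiplier is needed; all arithmetic stays in integers.
    """
    v = v - (v >> shift)          # exponential-like leak without a multiplier
    v = v + in_current            # integrate the input current
    spiked = v >= v_thresh
    if spiked:
        v = 0                     # reset membrane potential on spike
    return v, spiked

def reward_update(w, pre_spiked, post_spiked, reward, shift=3, w_max=255):
    """Three-factor, multiplier-less weight change: a pre/post spike
    coincidence forms the eligibility, and the reward sign selects
    potentiation or depression; the step size is a shift of w."""
    if pre_spiked and post_spiked:
        dw = max(w >> shift, 1)   # shift-based learning step
        w = w + dw if reward > 0 else w - dw
    return min(max(w, 0), w_max)  # clamp to the representable weight range

# Toy closed loop: a neuron driven through one plastic weight that is
# potentiated whenever the (always-active) stimulus elicits a response
# and the environment delivers positive reward.
v, w = 0, 64
for t in range(50):
    v, spk = lif_step(v, in_current=w >> 2)
    w = reward_update(w, pre_spiked=True, post_spiked=spk, reward=+1)
```

With positive reward, the stimulus-response weight grows each time the neuron fires, so the response is elicited more readily over time — the basic mechanism behind learning stimulus-response associations.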

Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Scopus Subject Areas: Physical Sciences > Electrical and Electronic Engineering
Uncontrolled Keywords: Electrical and Electronic Engineering
Language: English
Date: 1 December 2020
Deposited On: 16 Feb 2021 07:05
Last Modified: 17 Feb 2021 21:02
Publisher: Institute of Electrical and Electronics Engineers
ISSN: 2156-3357
OA Status: Closed
Publisher DOI: https://doi.org/10.1109/jetcas.2020.3031040

Full text not available from this repository.