
Effective sensor fusion with event-based sensors and deep network architectures


Neil, Daniel; Liu, Shih-Chii (2016). Effective sensor fusion with event-based sensors and deep network architectures. In: IEEE International Symposium on Circuits and Systems (ISCAS) 2016, Montreal, Canada, 22 May 2016 - 25 May 2016, 2282-2285.

Abstract

The use of spiking neuromorphic sensors with state-of-the-art deep networks is currently an active area of research. Still relatively unexplored are the preprocessing steps needed to transform spikes from these sensors and the types of network architectures that can produce high-accuracy performance using these sensors. This paper discusses several methods for preprocessing the spiking data from these sensors for use with various deep network architectures. The outputs of these preprocessing methods are evaluated using different networks, including a deep fusion network composed of Convolutional Neural Networks and Recurrent Neural Networks, to jointly solve a recognition task using the MNIST (visual) and TIDIGITS (audio) benchmark datasets. With only 1000 visual input spikes from a spiking hardware retina, the classification accuracy of 64.5% achieved by a particular trained fusion network increases to 98.31% when combined with inputs from a spiking hardware cochlea.
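The abstract describes transforming event-based sensor spikes into inputs a deep network can consume. As an illustrative sketch only (not the paper's actual preprocessing pipeline), one common approach is to accumulate a fixed budget of retina events, such as the 1000 spikes mentioned above, into a 2D count frame that a CNN can process. The function name, event format, and normalization below are assumptions for illustration:

```python
import numpy as np

def events_to_frame(events, shape=(28, 28)):
    """Accumulate (x, y, polarity) retina events into a 2D count frame.

    Illustrative sketch of one possible spike-to-frame preprocessing
    step; the paper evaluates several methods, and this is not claimed
    to reproduce any of them exactly.
    """
    frame = np.zeros(shape, dtype=np.float32)
    for x, y, polarity in events:
        frame[y, x] += 1.0  # count events per pixel, ignoring polarity
    # Normalize so the frame is invariant to the total spike count
    if frame.max() > 0:
        frame /= frame.max()
    return frame

# Example: 1000 synthetic events on a 28x28 grid (MNIST-sized digits)
rng = np.random.default_rng(0)
events = [(int(rng.integers(0, 28)), int(rng.integers(0, 28)), 1)
          for _ in range(1000)]
frame = events_to_frame(events)
print(frame.shape)
```

The resulting frame can then be fed to a standard CNN, while a parallel stream of cochlea spikes (e.g. binned into spike-count vectors over time) would feed the recurrent branch of a fusion network.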

Additional indexing

Item Type: Conference or Workshop Item (Speech), refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Language: English
Event End Date: 25 May 2016
Deposited On: 26 Jan 2017 14:47
Last Modified: 29 Aug 2017 13:11
Publisher: Institute of Electrical and Electronics Engineers
Series Name: IEEE International Symposium on Circuits and Systems (ISCAS)
ISSN: 2379-447X
Publisher DOI: https://doi.org/10.1109/ISCAS.2016.7539039
Related URLs: http://iscas2016.org/ (Organisation)
http://ieeexplore.ieee.org/document/7539039/ (Publisher)

Download

Download PDF: 'Effective sensor fusion with event-based sensors and deep network architectures' (PDF, 257 kB)