Learning to be Efficient: Algorithms for Training Low-Latency, Low-Compute Deep Spiking Neural Networks


Neil, Daniel; Pfeiffer, Michael; Liu, Shih-Chii (2016). Learning to be Efficient: Algorithms for Training Low-Latency, Low-Compute Deep Spiking Neural Networks. In: SAC 2016, 31st ACM Symposium on Applied Computing, Pisa, Italy, 4 April 2016 - 8 April 2016.

Abstract

Recent advances have allowed Deep Spiking Neural Networks (SNNs) to perform at the same accuracy levels as Artificial Neural Networks (ANNs), but have also highlighted a unique property of SNNs: whereas in ANNs, every neuron needs to update once before an output can be created, the computational effort in an SNN depends on the number of spikes created in the network. While higher spike rates and longer computing times typically improve classification performance, very good results can already be achieved earlier. Here we investigate how Deep SNNs can be optimized to reach desired high accuracy levels as quickly as possible. Different approaches are compared which either minimize the number of spikes created, or aim at rapid classification by enforcing the learning of feature detectors that respond to few input spikes. A variety of networks with different optimization approaches are trained on the MNIST benchmark to perform at an accuracy level of at least 98%, while monitoring the average number of input spikes and spikes created within the network to reach this level of accuracy. The majority of SNNs required significantly fewer computations than frame-based ANN approaches. The most efficient SNN achieves an answer in less than 42% of the computational steps necessary for the ANN, and the fastest SNN requires only 25% of the original number of input spikes to achieve equal classification accuracy. Our results suggest that SNNs can be optimized to dramatically decrease the latency as well as the computation requirements for Deep Neural Networks, making them particularly attractive for applications like robotics, where real-time restrictions to produce outputs and low energy budgets are common.
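The efficiency metric described in the abstract (how many input and network spikes are consumed before the network settles on an answer) can be illustrated with a toy simulation. The sketch below is not the authors' code: it uses a randomly weighted two-layer integrate-and-fire network, a hypothetical Poisson-coded input, and an arbitrary stability criterion, purely to show how spike counts can be tracked until the prediction stops changing.

# Minimal sketch (not the authors' method): counting input and network spikes
# until a rate-coded spiking classifier reaches a stable decision.
# Weights, layer sizes, thresholds, and the Poisson input are placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer network; in practice the weights would come from
# a trained ANN converted to an SNN, or from spike-based training.
W1 = rng.normal(0, 0.1, size=(784, 100))   # input -> hidden
W2 = rng.normal(0, 0.1, size=(100, 10))    # hidden -> output
V_THRESH = 1.0                              # firing threshold (assumed)
MAX_STEPS = 200                             # maximum simulation steps

def simulate(pixel_intensities, max_rate=0.2):
    """Run a rate-coded integrate-and-fire simulation and report how many
    spikes were needed before the output prediction stabilises."""
    v_hidden = np.zeros(100)
    v_out = np.zeros(10)
    out_counts = np.zeros(10)
    in_spikes = net_spikes = 0
    prediction, stable_since = None, 0

    for t in range(MAX_STEPS):
        # Poisson-like input spikes, rate proportional to pixel intensity.
        spikes_in = rng.random(784) < pixel_intensities * max_rate
        in_spikes += spikes_in.sum()

        # Hidden integrate-and-fire layer: integrate, threshold, reset.
        v_hidden += spikes_in @ W1
        spikes_hid = v_hidden >= V_THRESH
        v_hidden[spikes_hid] -= V_THRESH
        net_spikes += spikes_hid.sum()

        # Output layer accumulates spike counts for classification.
        v_out += spikes_hid @ W2
        spikes_out = v_out >= V_THRESH
        v_out[spikes_out] -= V_THRESH
        out_counts += spikes_out
        net_spikes += spikes_out.sum()

        # Stop once the predicted class has not changed for 20 steps
        # (an arbitrary stability criterion for this sketch).
        current = int(out_counts.argmax())
        stable_since = stable_since + 1 if current == prediction else 0
        prediction = current
        if stable_since >= 20:
            break

    return prediction, in_spikes, net_spikes, t + 1

pred, n_in, n_net, steps = simulate(rng.random(784))
print(f"prediction={pred} after {steps} steps, "
      f"{n_in} input spikes, {n_net} network spikes")

In a real experiment the input would be MNIST digits rather than random intensities, and the comparison to a frame-based ANN would use the total synaptic operations implied by these spike counts.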


Additional indexing

Item Type: Conference or Workshop Item (Speech), refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Language: English
Event End Date: 8 April 2016
Deposited On: 27 Jan 2017 08:21
Last Modified: 31 Mar 2017 07:09
Publisher: Proceedings of the 31st Annual ACM Symposium on Applied Computing
Series Name: ACM Symposium on Applied Computing
Free access at: Official URL. An embargo period may apply.
Publisher DOI: https://doi.org/10.1145/2851613.2851724
Official URL: http://dl.acm.org/citation.cfm?doid=2851613.2851724

Download

Filetype: PDF
Size: 940kB
Licence: Creative Commons: Attribution-No Derivatives 4.0 International (CC BY-ND 4.0)

