
Learning to be Efficient: Algorithms for Training Low-Latency, Low-Compute Deep Spiking Neural Networks


Neil, Daniel; Pfeiffer, Michael; Liu, Shih-Chii (2016). Learning to be Efficient: Algorithms for Training Low-Latency, Low-Compute Deep Spiking Neural Networks. In: SAC 2016, 31st ACM Symposium on Applied Computing, Pisa, Italy, 4 April 2016 - 8 April 2016.

Abstract

Recent advances have allowed Deep Spiking Neural Networks (SNNs) to perform at the same accuracy levels as Artificial Neural Networks (ANNs), but have also highlighted a unique property of SNNs: whereas in ANNs, every neuron needs to update once before an output can be created, the computational effort in an SNN depends on the number of spikes created in the network. While higher spike rates and longer computing times typically improve classification performance, very good results can already be achieved earlier. Here we investigate how Deep SNNs can be optimized to reach desired high accuracy levels as quickly as possible. Different approaches are compared which either minimize the number of spikes created, or aim at rapid classification by enforcing the learning of feature detectors that respond to few input spikes. A variety of networks with different optimization approaches are trained on the MNIST benchmark to perform at an accuracy level of at least 98%, while monitoring the average number of input spikes and spikes created within the network to reach this level of accuracy. The majority of SNNs required significantly fewer computations than frame-based ANN approaches. The most efficient SNN achieves an answer in less than 42% of the computational steps necessary for the ANN, and the fastest SNN requires only 25% of the original number of input spikes to achieve equal classification accuracy. Our results suggest that SNNs can be optimized to dramatically decrease the latency as well as the computation requirements for Deep Neural Networks, making them particularly attractive for applications like robotics, where real-time restrictions to produce outputs and low energy budgets are common.
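
The abstract's core claim is that an ANN's cost is fixed per frame (every neuron updates once), while an SNN's cost scales with the number of spikes. The sketch below is not the paper's method; it is a minimal, assumed illustration (layer sizes, input firing probability, and the integrate-and-fire reset are all illustrative choices) that counts synaptic operations in a frame-based layer versus an event-driven one.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 784, 100
W = rng.normal(0.0, 0.1, size=(n_in, n_hidden))

# --- ANN: one frame-based forward pass ---
# Every hidden neuron reads every input once, so the cost is fixed per frame.
x = rng.random(n_in)
ann_ops = n_in * n_hidden          # multiply-accumulates for one layer
_ = x @ W

# --- SNN: event-driven integrate-and-fire layer ---
# Work is only done when an input spike arrives; cost scales with spike count.
threshold = 1.0
v = np.zeros(n_hidden)             # membrane potentials
snn_ops = 0
n_steps, p_spike = 20, 0.02        # simulation length and input firing probability

for _ in range(n_steps):
    spikes_in = rng.random(n_in) < p_spike   # Poisson-like input spikes
    active = np.flatnonzero(spikes_in)
    v += W[active].sum(axis=0)               # add weights of inputs that spiked
    snn_ops += active.size * n_hidden        # one add per input spike per target
    fired = v >= threshold
    v[fired] = 0.0                           # reset neurons that fired

print(f"ANN ops per frame: {ann_ops}")
print(f"SNN ops over {n_steps} steps: {snn_ops}")
```

With the low input firing rate assumed here, the event-driven operation count stays well below the frame-based one; this is the quantity the paper's training approaches aim to reduce while keeping accuracy high.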

Statistics

Downloads

42 downloads since deposited on 27 Jan 2017
20 downloads in the last 12 months

Additional indexing

Item Type: Conference or Workshop Item (Speech), refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Language: English
Event End Date: 8 April 2016
Deposited On: 27 Jan 2017 08:21
Last Modified: 23 Oct 2018 07:03
Publisher: Proceedings of the 31st Annual ACM Symposium on Applied Computing
Series Name: ACM Symposium on Applied Computing
OA Status: Green
Free access at: Official URL. An embargo period may apply.
Publisher DOI: https://doi.org/10.1145/2851613.2851724
Official URL: http://dl.acm.org/citation.cfm?doid=2851613.2851724

Download

Download PDF: 'Learning to be Efficient: Algorithms for Training Low-Latency, Low-Compute Deep Spiking Neural Networks'
Filetype: PDF
Size: 940kB
Licence: Creative Commons: Attribution-No Derivatives 4.0 International (CC BY-ND 4.0)