
Temporal Pattern Coding in Deep Spiking Neural Networks


Rueckauer, Bodo; Liu, Shih-Chii (2021). Temporal Pattern Coding in Deep Spiking Neural Networks. In: 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 18 July 2021 - 22 July 2021. IEEE.

Abstract

Deep Artificial Neural Networks (ANNs) employ a simplified analog neuron model that mimics the rate transfer function of integrate-and-fire neurons. In Spiking Neural Networks (SNNs), the predominant information transmission method is based on rate codes. This code is inefficient from a hardware perspective because the number of transmitted spikes is proportional to the encoded analog value. Alternative codes, such as temporal codes based on single spikes, are difficult to scale up to large networks due to their sensitivity to spike timing noise. Here we present a study of an encoding scheme based on temporal spike patterns. This scheme inherits the efficiency of temporal codes while retaining the robustness of rate codes. The pattern code is evaluated on the MNIST, CIFAR-10, and ImageNet image classification tasks. We compare the network performance of ANNs, rate-coded SNNs, and temporal-coded SNNs, using the classification error and operation count as performance metrics. We also estimate the power consumption of the digital logic needed for the operations associated with each encoding type, and the impact of the bit precision of the weights and activations. On ImageNet, the temporal pattern code achieves up to a 35× reduction in estimated power consumption compared to the rate-coded SNN, and 42× compared to the ANN. The classification error of the pattern-coded SNN is less than 1% higher than that of the ANN, and 2% lower than that of the rate-coded SNN.
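
The efficiency argument in the abstract can be made concrete with a toy sketch. The Python snippet below is purely illustrative and not the paper's implementation; the window length T, the pattern size K, and the helper names rate_encode / pattern_encode are assumptions chosen for the example. It contrasts a rate code, where the spike count scales with the encoded value, with a K-of-T temporal pattern code, where every value costs the same small number of spikes:

```python
import math
from itertools import combinations

T = 8  # time slots per encoding window (illustrative assumption)
K = 2  # spikes per temporal pattern (illustrative assumption)

def rate_encode(a):
    """Rate code: the spike count grows linearly with the value a in [0, 1]."""
    n = round(a * T)
    return [1] * n + [0] * (T - n)

def pattern_encode(level):
    """Temporal pattern code: each discrete level maps to one K-of-T spike
    pattern, so every symbol costs exactly K spikes regardless of magnitude."""
    patterns = list(combinations(range(T), K))
    assert 0 <= level < len(patterns), "level outside the pattern alphabet"
    slots = [0] * T
    for t in patterns[level]:
        slots[t] = 1
    return slots

# Levels representable per window vs. spikes spent:
#   rate code    -> T + 1 levels, up to T spikes
#   pattern code -> C(T, K) levels, always K spikes
print(f"rate code:    {T + 1} levels, worst case {sum(rate_encode(1.0))} spikes")
print(f"pattern code: {math.comb(T, K)} levels, always {sum(pattern_encode(0))} spikes")
```

Since each spike in digital SNN hardware triggers a synaptic accumulate operation, a code that conveys more distinguishable levels per spike directly lowers the operation count, which is the mechanism behind the power savings reported in the abstract.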


Additional indexing

Item Type: Conference or Workshop Item (Paper), refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Scopus Subject Areas: Physical Sciences > Software; Physical Sciences > Artificial Intelligence
Language: English
Event End Date: 22 July 2021
Deposited On: 31 Mar 2022 11:12
Last Modified: 01 Apr 2022 20:00
Publisher: IEEE
OA Status: Green
Publisher DOI: https://doi.org/10.1109/ijcnn52387.2021.9533837
  • Content: Accepted Version