Unsupervised learning of digit recognition using spike-timing-dependent plasticity


Diehl, P U; Cook, M (2015). Unsupervised learning of digit recognition using spike-timing-dependent plasticity. IEEE Transactions on Neural Networks and Learning Systems, 26(12):2999-3008.

Abstract

The recent development of power-efficient neuromorphic hardware offers great opportunities for applications where power consumption is a main concern, ranging from mobile platforms to server farms. However, it remains a challenging task to design spiking neural networks (SNNs) for pattern recognition on such hardware. We present an SNN for digit recognition that relies on mechanisms commonly available on neuromorphic hardware, i.e., exponential synapses with spike-timing-dependent plasticity, lateral inhibition, and an adaptive threshold. Unlike most other approaches, we do not present any class labels to the network; the network learns in an unsupervised fashion. The performance of our network scales well with the number of neurons used. Intuitively, the algorithm is comparable to k-means and to competitive learning algorithms such as vector quantization and self-organizing maps: each neuron learns a representation of a part of the input space, similar to a centroid in k-means. Our architecture achieves 95% accuracy on the MNIST benchmark, which outperforms other unsupervised learning methods for SNNs. The fact that we used no domain-specific knowledge points toward a more general applicability of the network design.
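
To make the abstract's mechanisms concrete, the following is a minimal Python/NumPy sketch, not the authors' simulator code: trace-based STDP with soft weight bounds, lateral inhibition reduced to a winner-take-all shortcut, and an adaptive threshold for homeostasis. All parameter values, the trace rule, and the WTA simplification are illustrative assumptions rather than the paper's exact equations.

    import numpy as np

    rng = np.random.default_rng(0)

    n_input, n_exc = 784, 100                 # MNIST pixels -> excitatory neurons
    W = rng.random((n_input, n_exc)) * 0.3    # input-to-excitatory weights
    theta = np.zeros(n_exc)                   # adaptive threshold offsets
    v_thresh = 1.0                            # base firing threshold (assumed)
    eta, theta_plus, w_max = 0.01, 0.05, 1.0  # illustrative constants

    def present(x, n_steps=50):
        """Present one input (pixel intensities in [0, 1]) for n_steps steps."""
        x_trace = np.zeros(n_input)               # presynaptic spike trace
        for _ in range(n_steps):
            pre = (rng.random(n_input) < 0.1 * x).astype(float)  # Poisson-like spikes
            x_trace = 0.9 * x_trace + pre          # exponentially decaying trace
            drive = pre @ W                        # synaptic drive per neuron
            # Lateral inhibition approximated as winner-take-all:
            # at most one excitatory neuron fires per time step.
            over = drive - (v_thresh + theta)
            winner = int(np.argmax(over))
            if over[winner] > 0:
                # Postsynaptic spike: potentiate synapses with a high pre
                # trace, depress the rest; (w_max - W) gives soft bounds.
                W[:, winner] += eta * (x_trace - 0.1) * (w_max - W[:, winner])
                np.clip(W[:, winner], 0.0, w_max, out=W[:, winner])
                theta[winner] += theta_plus        # homeostatic threshold increase

    # Example: train on random patterns standing in for MNIST images.
    for _ in range(100):
        present(rng.random(784))

After training, each column of W tends toward a prototype of part of the input space, the k-means-centroid analogy from the abstract. In the paper's scheme, classification happens only after training, by assigning each excitatory neuron the digit class to which it responds most strongly, so class labels never influence learning.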


Additional indexing

Item Type: Journal Article, not refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Language: English
Date: 2015
Deposited On: 25 Feb 2015 08:29
Last Modified: 05 Apr 2016 19:00
Publisher: Institute of Electrical and Electronics Engineers
ISSN: 2162-237X
Additional Information: © 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Publisher DOI: https://doi.org/10.1109/TNNLS.2015.2399491

Download

Content: Accepted Version
Filetype: PDF
Size: 654kB
