Hardware-amenable structural learning for spike-based pattern classification using a simple model of active dendrites


Hussain, S; Liu, S-C; Basu, A (2015). Hardware-amenable structural learning for spike-based pattern classification using a simple model of active dendrites. Neural Computation, 27(4):845-897.

Abstract

This letter presents a spike-based model that employs neurons with functionally distinct dendritic compartments for classifying high-dimensional binary patterns. The synaptic inputs arriving on each dendritic subunit are nonlinearly processed before being linearly integrated at the soma, giving the neuron the capacity to perform a large number of input-output mappings. The model uses sparse synaptic connectivity, where each synapse takes a binary value. The optimal connection pattern of a neuron is learned by using a simple hardware-friendly, margin-enhancing learning algorithm inspired by the mechanism of structural plasticity in biological neurons. The learning algorithm groups correlated synaptic inputs on the same dendritic branch. Since the learning results in modified connection patterns, it can be incorporated into current event-based neuromorphic systems with little overhead. This work also presents a branch-specific spike-based version of this structural plasticity rule. The proposed model is evaluated on benchmark binary classification problems, and its performance is compared against that achieved using support vector machine and extreme learning machine techniques. Our proposed method attains comparable performance while using 10% to 50% less computational resources than the other reported techniques.
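
The following is a minimal sketch of the kind of two-layer neuron described in the abstract: sparse binary synapses are grouped onto dendritic branches, each branch sums its inputs and applies a nonlinearity, and the branch outputs are linearly integrated at the soma. The squared branch nonlinearity, the layer sizes, and the single-neuron thresholded readout are illustrative assumptions, not the letter's exact formulation.

import numpy as np

# Illustrative sketch only: a neuron with nonlinear dendritic subunits.
# Binary inputs arriving at each branch are summed, passed through a
# branch nonlinearity, and the branch outputs are added at the soma.

rng = np.random.default_rng(seed=0)

n_inputs = 100      # dimension of the binary input pattern
n_branches = 10     # dendritic subunits on the neuron
syn_per_branch = 5  # sparse connectivity: binary synapses per branch

# conn[b, k] = index of the input line wired to synapse k on branch b.
# Structural learning would rewire these indices rather than adjust
# real-valued weights (assumed representation, for illustration).
conn = rng.integers(0, n_inputs, size=(n_branches, syn_per_branch))

def branch_nonlinearity(z):
    # Assumed expansive subunit function; the letter's exact
    # nonlinearity may differ.
    return z ** 2

def soma_output(x, conn):
    # Linear sum of synaptic inputs on each branch, nonlinear
    # processing per branch, then linear integration at the soma.
    branch_drive = x[conn].sum(axis=1)
    return branch_nonlinearity(branch_drive).sum()

# Classify a random binary pattern by thresholding the soma response.
x = rng.integers(0, 2, size=n_inputs)
print("soma response:", soma_output(x, conn))

Under the structural plasticity rule described in the letter, training modifies the connection pattern itself, replacing poorly performing synapses so that correlated input lines become grouped on the same dendritic branch; in this sketch that would correspond to updating the entries of conn.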

Citations

3 citations in Web of Science®
5 citations in Scopus®

Downloads

7 downloads since deposited on 22 Feb 2016
7 downloads in the past 12 months

Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Language: English
Date: 2015
Deposited On: 22 Feb 2016 09:43
Last Modified: 05 Apr 2016 20:04
Publisher: MIT Press
Series Name: Neural Computation
Number of Pages: 53
ISSN: 0899-7667
Publisher DOI: https://doi.org/10.1162/NECO_a_00713

Download

Content: Accepted Version
Filetype: PDF
Size: 1MB
