Recurrent competitive networks can learn locally excitatory topologies


Jug, Florian; Cook, Matthew; Steger, Angelika (2012). Recurrent competitive networks can learn locally excitatory topologies. In: IEEE International Joint Conference on Neural Networks (IJCNN) 2012, Brisbane, Australia, 10 June 2012 - 15 June 2012, 1-8.

Abstract

A common form of neural network consists of spatially arranged neurons, with weighted connections between the units providing both local excitation and long-range or global inhibition. Such networks, known as soft-winner-take-all networks or lateral-inhibition type neural fields, have been shown to exhibit desirable information-processing properties including balancing the influence of compatible inputs, deciding between incompatible inputs, signal restoration from noisy, weak, or overly strong input, and the ability to be used as trainable building blocks in larger networks. However, the local excitatory connections in such a network are typically hand-wired based on a fixed spatial arrangement which is chosen using prior knowledge of the dimensionality of the data to be learned by such a network, and neuroanatomical evidence is stubbornly inconsistent with these wiring schemes. Here we present a learning rule that allows networks with completely random internal connectivity to learn the weighted connections necessary for implementing the “local” excitation used by these networks, where the locality is with respect to the inherent topology of the input received by the network, rather than being based on an arbitrarily prescribed spatial arrangement of the cells in the network. We use the Siegert approximation to leaky integrate-and-fire neurons, obtaining networks with consistently sparse activity, to which we apply standard Hebbian learning with weight normalization, plus homeostatic activity regulation to ensure full network utilization. Our results show that such networks learn appropriate excitatory connections from the input, and do not require these connections to be hand-wired with a fixed topology as they traditionally have been for decades.
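The abstract names three computational ingredients: a Siegert-formula rate approximation for leaky integrate-and-fire neurons, Hebbian learning with weight normalization, and homeostatic activity regulation. As a rough illustration of how these pieces fit together (a minimal Python sketch of the general techniques, not the paper's implementation; all function names, parameter values, and the exact update forms are assumptions), one could write:

    import numpy as np
    from scipy.special import erf
    from scipy.integrate import quad

    def siegert_rate(mu, sigma, tau_m=0.02, tau_ref=0.002,
                     v_reset=0.0, v_th=1.0):
        """Mean firing rate of a leaky integrate-and-fire neuron driven by
        Gaussian input with mean mu and std sigma, via the Siegert formula
        (one common convention; parameter values are illustrative)."""
        lo = (v_reset - mu) / sigma
        hi = (v_th - mu) / sigma
        # exp(u**2) grows quickly; acceptable for the moderate arguments
        # of a sketch like this one.
        val, _ = quad(lambda u: np.exp(u**2) * (1.0 + erf(u)), lo, hi)
        return 1.0 / (tau_ref + tau_m * np.sqrt(np.pi) * val)

    def learning_step(W, r_pre, r_post, r_target, theta,
                      eta=0.01, eta_h=0.001):
        """One Hebbian update with divisive weight normalization plus a
        homeostatic threshold adjustment (hypothetical rule forms)."""
        # Hebbian term: co-active pre/post pairs strengthen their weight.
        W = W + eta * np.outer(r_post, r_pre)
        # Normalization: keep each unit's total incoming weight constant,
        # so learning redistributes rather than grows excitation.
        W = W / W.sum(axis=1, keepdims=True)
        # Homeostasis: raise the threshold of over-active units and lower
        # that of silent ones, pushing all units toward the target rate.
        theta = theta + eta_h * (r_post - r_target)
        return W, theta

In this sketch the homeostatic term drives every unit toward a common target rate, which is the role the abstract assigns to activity regulation: it ensures full network utilization so that Hebbian learning with normalization can carve local excitatory connectivity out of initially random weights.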

Additional indexing

Item Type: Conference or Workshop Item (Speech), refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Language: English
Event End Date: 15 June 2012
Deposited On: 28 Feb 2013 07:22
Last Modified: 05 Apr 2016 16:36
Publisher: IEEE
Series Name: Proceedings of the International Joint Conference on Neural Networks
Number of Pages: 8
ISSN: 2161-4393
ISBN: 978-1-4673-1489-3
Additional Information: © 2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Publisher DOI: https://doi.org/10.1109/IJCNN.2012.6252786
Permanent URL: https://doi.org/10.5167/uzh-75316

Download

Content: Accepted Version
Filetype: PDF
Size: 1MB
