Abstract
A common form of neural network consists of spatially arranged neurons, with weighted connections between the units providing both local excitation and long-range or global inhibition. Such networks, known as soft winner-take-all networks or lateral-inhibition-type neural fields, have been shown to exhibit desirable information-processing properties, including balancing the influence of compatible inputs, deciding between incompatible inputs, restoring signals from noisy, weak, or overly strong input, and serving as trainable building blocks in larger networks. However, the local excitatory connections in such a network are typically hand-wired according to a fixed spatial arrangement chosen using prior knowledge of the dimensionality of the data the network is to learn, and neuroanatomical evidence is stubbornly inconsistent with these wiring schemes. Here we present a learning rule that allows networks with completely random internal connectivity to learn the weighted connections needed to implement the “local” excitation these networks rely on, where locality is defined with respect to the inherent topology of the network’s input rather than an arbitrarily prescribed spatial arrangement of its cells. We use the Siegert approximation to leaky integrate-and-fire neurons to obtain networks with consistently sparse activity, to which we apply standard Hebbian learning with weight normalization, together with homeostatic activity regulation to ensure that the full network is utilized. Our results show that such networks learn appropriate excitatory connections from the input alone, and do not require these connections to be hand-wired with a fixed topology as they traditionally have been for decades.
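To make the ingredients named in the abstract concrete, the minimal sketch below combines the standard Siegert (mean-field) rate formula for leaky integrate-and-fire neurons with Hebbian learning, per-unit weight normalization, and homeostatic threshold regulation on a randomly connected recurrent layer. Only the Siegert formula itself is standard (Ricciardi/Brunel form); the network sizes, time constants, settling scheme, and the particular normalization and homeostasis rules are illustrative assumptions, not the implementation reported in the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx

def siegert_rate(mu, sigma, tau_m=0.02, tau_ref=0.002, v_reset=0.0, v_th=1.0):
    """Mean firing rate of a leaky integrate-and-fire neuron under the Siegert
    (mean-field) approximation, for input with mean mu and std sigma.
    Time constants and voltage parameters are illustrative choices."""
    lower = (v_reset - mu) / sigma
    upper = (v_th - mu) / sigma
    if upper > 10.0:                       # drive far below threshold: rate ~ 0
        return 0.0
    # erfcx(-u) == exp(u**2) * (1 + erf(u)), the usual Siegert integrand,
    # written in a numerically stable form.
    integral, _ = quad(lambda u: erfcx(-u), lower, upper)
    return 1.0 / (tau_ref + tau_m * np.sqrt(np.pi) * integral)

rng = np.random.default_rng(0)
n_in, n_units = 40, 20
W_in = rng.random((n_units, n_in))          # fixed random afferent weights
W_rec = rng.random((n_units, n_units))      # random recurrent weights to be shaped by learning
np.fill_diagonal(W_rec, 0.0)
W_rec /= W_rec.sum(axis=1, keepdims=True)   # per-unit (row) weight normalization

thresholds = np.zeros(n_units)              # per-unit excitability, tuned homeostatically
target_rate = 5.0                           # desired mean rate, keeps activity sparse
running_rate = np.full(n_units, target_rate)
eta, eta_h, sigma_noise = 0.05, 0.01, 0.3

for step in range(300):
    x = (rng.random(n_in) < 0.1).astype(float)       # toy sparse input pattern
    r = np.zeros(n_units)
    for _ in range(5):                               # settle the recurrent dynamics
        drive = W_in @ x + 0.05 * (W_rec @ r) - thresholds
        r = np.array([siegert_rate(m, sigma_noise) for m in drive])

    # Hebbian update of recurrent weights, followed by per-unit normalization
    W_rec += eta * np.outer(r, r)
    np.fill_diagonal(W_rec, 0.0)
    W_rec /= W_rec.sum(axis=1, keepdims=True) + 1e-12

    # Homeostatic regulation: over-active units become harder to drive,
    # under-active units easier, pushing all units toward the target rate
    running_rate = 0.95 * running_rate + 0.05 * r
    thresholds += eta_h * (running_rate - target_rate)
```

Under a rule of this kind, units that are repeatedly co-active under the input distribution strengthen their mutual excitatory connections, which is the sense in which “local” excitation reflecting the input’s inherent topology can emerge from initially random wiring.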