ZORA (Zurich Open Repository and Archive)

Sharpening projections

Cook, M; Jug, F; Krautz, C (2009). Sharpening projections. BMC Neuroscience, 10(Suppl 1):P214.

Abstract

It is known that neuronal arbors can project roughly topographically from their source to their target area [1]. However, developmental rules in the cortex often provide only roughly reciprocal connections through chemical cues governing the physical location of distant axonal arbors, leaving open the question of whether more precise reciprocal connectivity is possible in these cases [2,3]. We address the question of whether a biologically plausible learning rule can adjust the synaptic weights so that the projection effectively becomes more precise than the anatomy alone provides. We have discovered a biologically plausible set of learning rules under which precisely reciprocal connections are strengthened while others are weakened, thus effectively increasing the specificity of the projections.
The question introduced above can be generalized to any number of areas connected in a feed-forward cycle, with feedback arriving only as a result of information traveling around the entire cycle. We examined the cases of two or three areas connected in a directed cycle. Each area was represented by a pool of linear threshold units having a sigmoidal threshold function, with inter-area weights initialized to random matrices, corresponding to the worst case of a neuronal arbor completely covering the target area. For the dynamics of the unit activities and the weights, we combined three techniques that plausibly have biological counterparts: winner-take-all circuitry [4,5], activity regulation [6], and Hebbian learning [7,8]. On the shortest time scale, winner-take-all circuitry within each pool ensures that the current configuration of activity consists of a clearly localized region of activation. On an intermediate time scale, activity regulation within each unit reduces a unit's chance of winning the winner-take-all competition if it has recently been highly active and increases that chance if it has been inactive, thereby enforcing fair use of the units. On the slowest time scale, Hebbian learning increases the weights along active cycles, making them more likely to recur. Over time, this encourages specific cycles to be strengthened, while the activity regulation ensures that these cycles cover all of the units in each pool.
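To make the three interacting time scales concrete, the following is a minimal sketch in Python/NumPy, not the authors' implementation: it assumes two areas of 50 units each, a hard winner-take-all realized as a fatigue-biased softmax draw, a simple additive fatigue trace standing in for activity regulation, and row-normalized Hebbian updates. The pool size and all rate parameters are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 50                        # units per area (assumed pool size)
    W_ab = rng.random((N, N))     # projection A -> B, random init: arbor covers the whole target
    W_ba = rng.random((N, N))     # projection B -> A, closing the directed cycle
    fatigue_a = np.zeros(N)       # activity-regulation traces (assumed mechanism)
    fatigue_b = np.zeros(N)

    def wta(drive, fatigue, beta=5.0):
        # Winner-take-all: pick one clearly localized winner per pool,
        # handicapped by fatigue so that overused units tend to lose.
        logits = beta * (drive - fatigue)
        p = np.exp(logits - logits.max())
        p /= p.sum()
        act = np.zeros_like(drive)
        act[rng.choice(len(drive), p=p)] = 1.0
        return act

    eta, cost, recover = 0.05, 0.1, 0.01    # assumed learning/regulation rates

    for step in range(5000):
        x_a = wta(rng.random(N), fatigue_a)     # spontaneous activity seeds area A
        x_b = wta(W_ab @ x_a, fatigue_b)        # A drives B through the projection
        x_a2 = wta(W_ba @ x_b, fatigue_a)       # B drives A, completing the cycle

        # Hebbian learning strengthens weights along the active cycle;
        # row normalization keeps the total input to each unit bounded.
        W_ab += eta * np.outer(x_b, x_a)
        W_ba += eta * np.outer(x_a2, x_b)
        W_ab /= W_ab.sum(axis=1, keepdims=True)
        W_ba /= W_ba.sum(axis=1, keepdims=True)

        # Activity regulation: winners accumulate fatigue, everyone slowly
        # recovers, enforcing fair use of the units in each pool.
        fatigue_a = np.clip(fatigue_a + cost * (x_a + x_a2) - recover, 0.0, None)
        fatigue_b = np.clip(fatigue_b + cost * x_b - recover, 0.0, None)

Over many iterations the Hebbian term concentrates each row of W_ab and W_ba on the units that reliably complete a cycle, while the fatigue term prevents any subset of units from monopolizing the winner-take-all competitions.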
Depending on the combination of parameters used, two kinds of reciprocal connectivity could be generated. In the first kind, strong, precisely reciprocal connections developed, with weaker strengths for connections close to being reciprocal and much weaker connections elsewhere; in other words, the product of the weight matrices was a blurred identity matrix. In the second kind, each area partitioned itself into distinct subgroups, and the subgroups then formed fully connected cycles; in this case, the product of the matrices was a crisp block identity matrix. Both forms of connectivity can be useful within larger artificial architectures, and we claim they could easily occur in brains.
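Continuing the sketch above, one simple diagnostic for which regime emerged is the round-trip product of the learned weight matrices: mass concentrated on the diagonal indicates precise reciprocal connections (the blurred identity case), while an off-diagonal block pattern indicates subgroups forming fully connected cycles.

    # Round-trip A -> B -> A: entry (i, j) is how strongly activity that
    # starts at A-unit j returns to A-unit i after one pass around the cycle.
    P = W_ba @ W_ab
    P /= P.sum(axis=1, keepdims=True)
    diag_mass = np.trace(P) / N     # fraction of round-trip weight that is exactly reciprocal
    print(f"mean diagonal mass of the round-trip matrix: {diag_mass:.3f}")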

Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Language: English
Date: 2009
Deposited On: 28 Feb 2010 10:52
Last Modified: 28 Jun 2022 08:34
Publisher: BioMed Central
ISSN: 1471-2202
OA Status: Gold
Free access at: Publisher DOI. An embargo period may apply.
Publisher DOI: https://doi.org/10.1186/1471-2202-10-S1-P214
Related URLs: http://www.ini.uzh.ch/node/23982 (Organisation)