Spatiotemporal features for asynchronous event-based data


Lagorce, Xavier; Ieng, Sio-Hoi; Clady, Xavier; Pfeiffer, Michael; Benosman, Ryad B (2015). Spatiotemporal features for asynchronous event-based data. Frontiers in Neuroscience, 9:46.

Abstract

Bio-inspired asynchronous event-based vision sensors are currently introducing a paradigm shift in visual information processing. These new sensors rely on a stimulus-driven principle of light acquisition similar to biological retinas. They are event-driven and fully asynchronous, thereby reducing redundancy and encoding exact times of input signal changes, leading to a very precise temporal resolution. Approaches for higher-level computer vision often rely on the reliable detection of features in visual frames, but similar definitions of features for the novel dynamic and event-based visual input representation of silicon retinas have so far been lacking. This article addresses the problem of learning and recognizing features for event-based vision sensors, which capture properties of truly spatiotemporal volumes of sparse visual event information. A novel computational architecture for learning and encoding spatiotemporal features is introduced based on a set of predictive recurrent reservoir networks, competing via winner-take-all selection. Features are learned in an unsupervised manner from real-world input recorded with event-based vision sensors. It is shown that the networks in the architecture learn distinct and task-specific dynamic visual features, and can predict their trajectories over time.
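The abstract describes an architecture in which several predictive recurrent reservoir networks compete for incoming event data via winner-take-all selection, so that each reservoir specializes on a distinct spatiotemporal feature. The sketch below is only an illustrative simplification of that idea, not the authors' implementation: the class and function names (`PredictiveReservoir`, `wta_train`), the echo-state-style reservoir, the online readout update, and the use of flattened event "patches" as inputs are all assumptions introduced here for clarity.

```python
import numpy as np

class PredictiveReservoir:
    """Echo-state-style reservoir that predicts the next input vector.

    Hypothetical simplification of the paper's recurrent reservoir units:
    fixed random recurrent weights plus a linear readout trained online
    with a gradient step on the one-step-ahead prediction error.
    """

    def __init__(self, input_dim, reservoir_size=200, spectral_radius=0.9,
                 leak_rate=0.3, learning_rate=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.uniform(-0.5, 0.5, (reservoir_size, input_dim))
        w = rng.uniform(-0.5, 0.5, (reservoir_size, reservoir_size))
        # Rescale recurrent weights to the desired spectral radius (echo state property).
        w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))
        self.w = w
        self.w_out = np.zeros((input_dim, reservoir_size))  # trained readout
        self.state = np.zeros(reservoir_size)
        self.leak = leak_rate
        self.lr = learning_rate

    def predict(self, x):
        """Advance the reservoir with input x and return the predicted next input."""
        pre = self.w_in @ x + self.w @ self.state
        self.state = (1 - self.leak) * self.state + self.leak * np.tanh(pre)
        return self.w_out @ self.state

    def adapt(self, error):
        """Online readout update from the prediction error (applied to the winner only)."""
        self.w_out += self.lr * np.outer(error, self.state)


def wta_train(event_patches, n_features=4, input_dim=25):
    """Winner-take-all competition among predictive reservoirs.

    `event_patches` is assumed to be an iterable of flattened local
    spatiotemporal activity patches (e.g. decayed event time surfaces)
    extracted around incoming sensor events.
    """
    reservoirs = [PredictiveReservoir(input_dim, seed=k) for k in range(n_features)]
    prev_predictions = [np.zeros(input_dim) for _ in reservoirs]
    labels = []
    for patch in event_patches:
        # The winner is the reservoir whose previous prediction best matches the new input.
        errors = [patch - p for p in prev_predictions]
        winner = int(np.argmin([np.linalg.norm(e) for e in errors]))
        reservoirs[winner].adapt(errors[winner])   # unsupervised: only the winner learns
        labels.append(winner)
        prev_predictions = [r.predict(patch) for r in reservoirs]
    return reservoirs, labels


if __name__ == "__main__":
    # Toy input: random "patches" standing in for event-driven time surfaces.
    rng = np.random.default_rng(1)
    patches = rng.random((1000, 25))
    _, labels = wta_train(patches)
    print("feature assignments (first 20 events):", labels[:20])
```

Under these assumptions, each reservoir both classifies (via the winner-take-all label) and predicts the evolution of its preferred feature, which mirrors the abstract's claim that the networks learn distinct dynamic features and predict their trajectories over time.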

Statistics

Citations

5 citations in Web of Science®
3 citations in Scopus®

Downloads

17 downloads since deposited on 11 Feb 2016
8 downloads in the last 12 months

Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Language: English
Date: 2015
Deposited On: 11 Feb 2016 08:39
Last Modified: 07 Aug 2017 10:58
Publisher: Frontiers Research Foundation
ISSN: 1662-453X
Free access at: PubMed ID. An embargo period may apply.
Publisher DOI: https://doi.org/10.3389/fnins.2015.00046
PubMed ID: 25759637

Download

Download PDF: 'Spatiotemporal features for asynchronous event-based data'
Content: Published Version
Filetype: PDF
Size: 7MB
Licence: Creative Commons: Attribution 4.0 International (CC BY 4.0)