Steering a Predator Robot using a Mixed Frame/Event-Driven Convolutional Neural Network


Moeys, Diederik Paul; Corradi, Federico; Kerr, Emmett; Vance, Philip; Das, Gautham; Neil, Daniel; Kerr, Dermot; Delbruck, Tobi (2016). Steering a Predator Robot using a Mixed Frame/Event-Driven Convolutional Neural Network. In: IEEE International Conference on Event-Based Control, Communication, and Signal Processing EBCCSP 2016, Krakow, Poland, 13 June 2016 - 15 June 2016.

Abstract

This paper describes the application of a Convolutional Neural Network (CNN) in the context of a predator/prey scenario. The CNN is trained and run on data from a Dynamic and Active Pixel Sensor (DAVIS) mounted on a Summit XL robot (the predator), which follows another robot (the prey). The CNN is driven by both conventional image frames and dynamic vision sensor "frames" that consist of a constant number of DAVIS ON and OFF events. The network is thus "data driven" at a sample rate proportional to the scene activity, so the effective sample rate varies from 15 Hz to 240 Hz depending on the robot speeds. The network generates four outputs: steer right, left, center, and non-visible. After off-line training on labeled data, the network is imported onto the on-board computer of the Summit XL robot, which runs jAER and receives steering directions in real time. Successful closed-loop trials, with accuracies of up to 87% or 92% (depending on the evaluation criterion), are reported. Although the proposed approach discards the precise DAVIS event timing, it offers the significant advantage of compatibility with conventional deep learning technology without giving up the advantage of data-driven computing.
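The constant-event-count "frames" described in the abstract can be illustrated with a minimal Python sketch. This is an assumption-laden illustration, not the authors' jAER implementation: the event-count threshold, the DAVIS240 resolution (240 x 180), the event tuple layout, and the normalization are all hypothetical choices made here for clarity. Because a frame closes only after a fixed number of events arrives, the frame rate rises and falls with scene activity, which is how the effective sample rate can range from roughly 15 Hz to 240 Hz.

```python
import numpy as np

def accumulate_event_frames(event_stream, n_events=2000, width=240, height=180):
    """Accumulate a fixed number of DVS ON/OFF events into 2D 'frames'.

    event_stream yields (x, y, polarity, timestamp) tuples; the tuple
    format and n_events value are assumptions for this sketch.
    Each yielded frame can be fed to a CNN input layer; the yield rate
    is proportional to the event rate (data-driven sampling).
    """
    frame = np.zeros((height, width), dtype=np.float32)
    count = 0
    for (x, y, polarity, timestamp) in event_stream:
        # ON events add, OFF events subtract, giving a signed activity map
        frame[y, x] += 1.0 if polarity else -1.0
        count += 1
        if count == n_events:
            # Normalize to a fixed range before handing the frame to the CNN
            peak = max(float(np.abs(frame).max()), 1e-6)
            yield frame / peak
            frame = np.zeros((height, width), dtype=np.float32)
            count = 0
```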



Additional indexing

Item Type: Conference or Workshop Item (Speech), refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Language: English
Event End Date: 15 June 2016
Deposited On: 26 Jan 2017 15:06
Last Modified: 19 Feb 2017 06:11
Publisher: Proceedings of 2016 Second International Conference on Event-based Control, Communication, and Signal Processing (EBCCSP)
Series Name: IEEE Second International Conference on Event-Based Control, Communication and Signal Processing (EBCCSP)
Publisher DOI: https://doi.org/10.1109/EBCCSP.2016.7605233
Official URL: http://ieeexplore.ieee.org/document/7605233/

