
Steering a Predator Robot using a Mixed Frame/Event-Driven Convolutional Neural Network


Moeys, Diederik Paul; Corradi, Federico; Kerr, Emmett; Vance, Philip; Das, Gautham; Neil, Daniel; Kerr, Dermot; Delbruck, Tobi (2016). Steering a Predator Robot using a Mixed Frame/Event-Driven Convolutional Neural Network. In: Second International Conference on Event-Based Control, Communication, and Signal Processing (EBCCSP 2016), Krakow, Poland, 13-15 June 2016. IEEE.

Abstract

This paper describes the application of a Convolutional Neural Network (CNN) in the context of a predator/prey scenario. The CNN is trained and run on data from a Dynamic and Active Pixel Sensor (DAVIS) mounted on a Summit XL robot (the predator), which follows another robot (the prey). The CNN is driven by both conventional image frames and dynamic vision sensor "frames" that each consist of a constant number of DAVIS ON and OFF events. The network is thus "data driven" at a sample rate proportional to the scene activity, so the effective sample rate varies from 15 Hz to 240 Hz depending on the robot speeds. The network generates four outputs: steer right, steer left, steer center, and prey non-visible. After off-line training on labeled data, the network is imported onto the Summit XL's on-board computer, which runs jAER and provides steering directions to the robot in real time. Successful closed-loop trials are reported, with accuracies up to 87% or 92% depending on the evaluation criterion. Although the proposed approach discards the precise DAVIS event timing, it offers the significant advantage of compatibility with conventional deep learning technology without giving up the advantage of data-driven computing.
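To make the constant-event-count "frames" concrete, the following is a minimal sketch of how such a frame can be accumulated from a DVS event stream. It assumes a hypothetical stream of (x, y, polarity) tuples and the DAVIS240 resolution of 240x180 pixels; the events_per_frame value and the min-max normalization are illustrative assumptions, not the paper's jAER implementation.

```python
import numpy as np

def accumulate_event_frame(events, width=240, height=180, events_per_frame=5000):
    """Accumulate a fixed number of DVS events into a 2D 'frame'.

    events: iterable of (x, y, polarity) tuples, polarity +1 (ON) / -1 (OFF).
    The frame is complete after events_per_frame events, regardless of how
    much wall-clock time that takes, so the frame rate tracks scene activity.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    count = 0
    for x, y, pol in events:
        frame[y, x] += pol  # ON events add, OFF events subtract
        count += 1
        if count == events_per_frame:
            break
    # Min-max normalize to [0, 1] as a generic CNN input scaling (assumption)
    lo, hi = frame.min(), frame.max()
    if hi > lo:
        frame = (frame - lo) / (hi - lo)
    return frame
```

Because a frame is emitted only after a fixed number of events has arrived, the effective frame rate equals the sensor's event rate divided by events_per_frame, which is exactly why the sample rate described above scales with scene activity between 15 Hz and 240 Hz.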

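The four network outputs map naturally onto a steering command for the predator. The sketch below shows one way such a mapping could look; the class ordering and turn_gain are hypothetical assumptions, and the paper's actual closed-loop controller in jAER is not reproduced here.

```python
import numpy as np

CLASSES = ("left", "center", "right", "non-visible")  # assumed ordering

def steering_from_cnn(probs, turn_gain=0.5):
    """Turn the CNN's 4-class softmax output into a steering command.

    probs: length-4 vector over (left, center, right, non-visible).
    Returns (angular_velocity, prey_visible); zero velocity with
    prey_visible=False signals that the predator should search instead.
    """
    cls = CLASSES[int(np.argmax(probs))]
    if cls == "non-visible":
        return 0.0, False  # prey lost: fall back to a search behavior
    turn = {"left": +turn_gain, "center": 0.0, "right": -turn_gain}[cls]
    return turn, True

# Example: a confident "left" prediction steers the robot left.
print(steering_from_cnn([0.8, 0.1, 0.05, 0.05]))  # -> (0.5, True)
```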

Statistics

Citations

44 citations in Web of Science®
77 citations in Scopus®


Additional indexing

Item Type: Conference or Workshop Item (Speech), refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Scopus Subject Areas: Physical Sciences > Computer Networks and Communications; Physical Sciences > Signal Processing; Physical Sciences > Control and Optimization
Language: English
Event End Date: 15 June 2016
Deposited On: 26 Jan 2017 15:06
Last Modified: 26 Jan 2022 11:49
Publisher: IEEE
Series Name: IEEE Second International Conference on Event-Based Control, Communication and Signal Processing (EBCCSP)
OA Status: Closed
Publisher DOI: https://doi.org/10.1109/EBCCSP.2016.7605233
Official URL: http://ieeexplore.ieee.org/document/7605233/