Abstract
Event cameras are bio-inspired vision sensors that naturally
capture the dynamics of a scene, filtering out redundant
information. This paper presents a deep neural
network approach that unlocks the potential of event cameras
on a challenging motion-estimation task: prediction
of a vehicle’s steering angle. To make the most of this
sensor–algorithm combination, we adapt state-of-the-art
convolutional architectures to the output of event sensors
and extensively evaluate the performance of our approach
on a publicly available large-scale event-camera dataset
(1000 km). We present qualitative and quantitative explanations
of why event cameras allow robust steering prediction
even in cases where traditional cameras fail, e.g., challenging
illumination conditions and fast motion. Finally, we
demonstrate the advantages of leveraging transfer learning
from traditional to event-based vision, and show that our
approach outperforms state-of-the-art algorithms based on
standard cameras.