
DDD20 End-to-End Event Camera Driving Dataset: Fusing Frames and Events with Deep Learning for Improved Steering Prediction


Hu, Yuhuang; Binas, Jonathan; Neil, Daniel; Liu, Shih-Chii; Delbruck, Tobi (2020). DDD20 End-to-End Event Camera Driving Dataset: Fusing Frames and Events with Deep Learning for Improved Steering Prediction. In: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, 20 September 2020 - 23 September 2020.

Abstract

Neuromorphic event cameras are useful for dynamic vision problems under difficult lighting conditions. To enable studies of using event cameras in automobile driving applications, this paper reports a new end-to-end driving dataset called DDD20. The dataset was captured with a DAVIS camera that concurrently streams both dynamic vision sensor (DVS) brightness change events and active pixel sensor (APS) intensity frames. DDD20 is the longest event camera end-to-end driving dataset to date with 51h of DAVIS event+frame camera and vehicle human control data collected from 4000km of highway and urban driving under a variety of lighting conditions. Using DDD20, we report the first study of fusing brightness change events and intensity frame data using a deep learning approach to predict the instantaneous human steering wheel angle. Over all day and night conditions, the explained variance for human steering prediction from a Resnet-32 is significantly better from the fused DVS+APS frames (0.88) than using either DVS (0.67) or APS (0.77) data alone.
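The abstract reports results as explained variance between the human steering angle and the network's prediction. As a hypothetical illustration (the function name and the synthetic data below are not from the paper), the metric can be computed as one minus the ratio of residual variance to target variance:

```python
# Hypothetical illustration of the evaluation metric named in the abstract:
# explained variance between human steering angles and model predictions.
import numpy as np

def explained_variance(y_true, y_pred):
    """EV = 1 - Var(y_true - y_pred) / Var(y_true); 1.0 is a perfect fit."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 1.0 - np.var(y_true - y_pred) / np.var(y_true)

# Toy example with synthetic steering angles (degrees), not DDD20 data.
rng = np.random.default_rng(0)
angles = rng.uniform(-90.0, 90.0, size=1000)     # stand-in ground truth
noisy_pred = angles + rng.normal(0.0, 10.0, 1000)  # stand-in predictions
print(round(explained_variance(angles, noisy_pred), 2))
```

Under this metric, a constant predictor scores 0 and a perfect predictor scores 1, so the reported 0.88 (fused), 0.77 (APS), and 0.67 (DVS) figures are directly comparable across modalities.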

Additional indexing

Item Type: Conference or Workshop Item (Paper), refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Scopus Subject Areas: Physical Sciences > Artificial Intelligence;
  Social Sciences & Humanities > Decision Sciences (miscellaneous);
  Social Sciences & Humanities > Information Systems and Management;
  Physical Sciences > Modeling and Simulation;
  Social Sciences & Humanities > Education
Language: English
Event End Date: 23 September 2020
Deposited On: 16 Feb 2021 08:13
Last Modified: 18 Feb 2021 12:17
Publisher: IEEE
ISBN: 9781728141497
OA Status: Green
Publisher DOI: https://doi.org/10.1109/itsc45102.2020.9294515

Download

Green Open Access

Download PDF: 'DDD20 End-to-End Event Camera Driving Dataset: Fusing Frames and Events with Deep Learning for Improved Steering Prediction'
Content: Accepted Version
Filetype: PDF
Size: 1MB