E-NeRF: Neural Radiance Fields From a Moving Event Camera


Klenk, Simon; Koestler, Lukas; Scaramuzza, Davide; Cremers, Daniel (2023). E-NeRF: Neural Radiance Fields From a Moving Event Camera. IEEE Robotics and Automation Letters, 8(3):1587-1594.

Abstract

Estimating neural radiance fields (NeRFs) from “ideal” images has been extensively studied in the computer vision community. Most approaches assume optimal illumination and slow camera motion. These assumptions are often violated in robotic applications, where images may contain motion blur, and the scene may not have suitable illumination. This can cause significant problems for downstream tasks such as navigation, inspection, or visualization of the scene. To alleviate these problems, we present E-NeRF, the first method which estimates a volumetric scene representation in the form of a NeRF from a fast-moving event camera. Our method can recover NeRFs during very fast motion and in high-dynamic-range conditions where frame-based approaches fail. We show that rendering high-quality frames is possible by only providing an event stream as input. Furthermore, by combining events and frames, we can estimate NeRFs of higher quality than state-of-the-art approaches under severe motion blur. We also show that combining events and frames can overcome failure cases of NeRF estimation in scenarios where only a few input views are available without requiring additional regularization.
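To make the abstract's core idea concrete: an event camera reports per-pixel log-brightness changes whenever they exceed a contrast threshold, so a NeRF can be supervised by comparing the log-intensity change between two renderings (at consecutive event timestamps along the camera trajectory) with the change implied by the accumulated event polarities. The Python/PyTorch sketch below illustrates this generic event-generation loss; the function render_ray, the contrast threshold value, and the exact form of the penalty are illustrative assumptions, not the authors' published formulation.

# Minimal sketch of an event-based supervision term for a NeRF, under stated assumptions.
# `render_ray(pose, pixel)` is a hypothetical function returning the rendered gray-level
# intensity of one pixel (as a torch tensor) for a given camera pose.
import torch

def event_loss(render_ray, pose_prev, pose_curr, pixel,
               polarity_sum, contrast_threshold=0.25, eps=1e-5):
    """Penalize mismatch between the rendered log-brightness change at `pixel`
    and the brightness change implied by the accumulated event polarities."""
    intensity_prev = render_ray(pose_prev, pixel)   # rendering at the earlier event timestamp
    intensity_curr = render_ray(pose_curr, pixel)   # rendering at the later event timestamp
    predicted_change = torch.log(intensity_curr + eps) - torch.log(intensity_prev + eps)
    # Event-generation model: each event contributes +/- one contrast threshold
    # to the log-brightness change, so the sum of polarities times C is the
    # measured change between the two timestamps.
    measured_change = polarity_sum * contrast_threshold
    return (predicted_change - measured_change) ** 2

In practice such a term is summed over many sampled pixels and event windows and, as the abstract notes, it can be combined with a standard photometric loss on (possibly blurry) frames when both modalities are available.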


Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 03 Faculty of Economics > Department of Informatics
Dewey Decimal Classification: 000 Computer science, knowledge & systems
Scopus Subject Areas: Physical Sciences > Control and Systems Engineering
Physical Sciences > Biomedical Engineering
Physical Sciences > Human-Computer Interaction
Physical Sciences > Mechanical Engineering
Physical Sciences > Computer Vision and Pattern Recognition
Physical Sciences > Computer Science Applications
Physical Sciences > Control and Optimization
Physical Sciences > Artificial Intelligence
Scope: Discipline-based scholarship (basic research)
Language: English
Date: 30 January 2023
Deposited On: 27 Feb 2024 13:31
Last Modified: 02 May 2024 13:17
Publisher: Institute of Electrical and Electronics Engineers
ISSN: 2377-3766
Additional Information: © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
OA Status: Closed
Publisher DOI: https://doi.org/10.1109/LRA.2023.3240646