
Learning to Exploit Multiple Vision Modalities by Using Grafted Networks


Hu, Yuhuang; Delbruck, Tobi; Liu, Shih-Chii (2020). Learning to Exploit Multiple Vision Modalities by Using Grafted Networks. In: Vedaldi, Andrea; Bischof, Horst; Brox, Thomas; Frahm, Jan-Michael (eds.). Computer Vision – ECCV 2020. Cham: Springer, 85-101.

Abstract

Novel vision sensors such as thermal, hyperspectral, polarization, and event cameras provide information that is not available from conventional intensity cameras. An obstacle to using these sensors with current powerful deep neural networks is the lack of large labeled training datasets. This paper proposes a Network Grafting Algorithm (NGA), in which a new front end network driven by unconventional visual inputs replaces the front end network of a pretrained deep network that processes intensity frames. The self-supervised training uses only synchronously recorded intensity frames and novel sensor data to maximize feature similarity between the pretrained network and the grafted network. We show that the grafted network reaches average precision (AP50) scores competitive with those of the pretrained network on an object detection task using thermal and event camera datasets, with no increase in inference cost. In particular, the grafted network driven by thermal frames showed a relative improvement of 49.11% over the use of intensity frames. The grafted front end has only 5–8% of the total parameters and can be trained in a few hours on a single GPU, equivalent to 5% of the time that would be needed to train the entire object detector from labeled data. NGA allows new vision sensors to capitalize on previously pretrained powerful deep models, saving training cost and widening the range of applications for novel sensors.
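The core NGA training idea described above — fitting a new front end so its features match those of a frozen, pretrained front end on paired recordings — can be sketched in a few lines. The following is a minimal illustrative toy, not the paper's actual architecture: both "front ends" are stand-in linear maps, the thermal frames are simulated as an unknown orthogonal transform of the intensity frames plus noise, and all names (`W_rgb`, `W_th`, `X_th`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_feat, n = 16, 8, 200

# Frozen pretrained intensity front end (stand-in: a fixed linear map).
W_rgb = rng.normal(size=(d_feat, d_in))

# Synchronously recorded pairs: thermal frames modeled here as an unknown
# orthogonal transform of the intensity frames plus sensor noise.
Q, _ = np.linalg.qr(rng.normal(size=(d_in, d_in)))
X_rgb = rng.normal(size=(d_in, n))
X_th = Q @ X_rgb + 0.01 * rng.normal(size=(d_in, n))

# Grafted thermal front end: trainable weights W_th, fit with the
# self-supervised feature-matching loss ||W_th X_th - W_rgb X_rgb||^2.
# No labels are used anywhere.
W_th = 0.01 * rng.normal(size=(d_feat, d_in))
lr = 0.05  # step size chosen for this toy; would need tuning in practice
for _ in range(1000):
    err = W_th @ X_th - W_rgb @ X_rgb        # feature mismatch on pairs
    W_th -= lr * (2.0 * err @ X_th.T / n)    # gradient descent step

loss = float(np.mean((W_th @ X_th - W_rgb @ X_rgb) ** 2))
print(f"final feature-matching loss: {loss:.5f}")
```

After training, `W_th` maps the simulated thermal input to nearly the same feature space the pretrained back end expects, so the (here omitted) back end could be reused unchanged — which is the point of grafting: only the small front end is trained, the detector head is kept frozen.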



Additional indexing

Item Type: Book Section, refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Scopus Subject Areas: Physical Sciences > Theoretical Computer Science; Physical Sciences > General Computer Science
Language: English
Date: 2020
Deposited On: 16 Feb 2021 08:43
Last Modified: 27 Jan 2022 05:54
Publisher: Springer
Series Name: Lecture Notes in Computer Science
Number: 12372
ISSN: 0302-9743
ISBN: 978-3-030-58582-2
Additional Information: This is a post-peer-review, pre-copyedit version of an article published in Lecture Notes in Computer Science. The final authenticated version is available online at: https://doi.org/10.1007/978-3-030-58517-4_6
OA Status: Green
Publisher DOI: https://doi.org/10.1007/978-3-030-58517-4_6

Download

Green Open Access: Accepted Version (PDF, 5 MB). Also available at the publisher via the DOI above.