
Block-matching optical flow for dynamic vision sensors: algorithm and FPGA implementation


Liu, Min; Delbruck, Tobi (2017). Block-matching optical flow for dynamic vision sensors: algorithm and FPGA implementation. In: IEEE International Symposium on Circuits and Systems (ISCAS) 2017, Baltimore, USA, 29 May 2017.

Abstract

Rapid and low-power computation of optical flow (OF) is potentially useful in robotics. The dynamic vision sensor (DVS) event camera produces quick and sparse output and has high dynamic range, but conventional OF algorithms are frame-based and cannot be directly used with event-based cameras. Previous DVS OF methods do not work well with dense textured input and are not designed for implementation in logic circuits. This paper proposes a new block-matching-based DVS OF algorithm which is inspired by the motion estimation methods used for MPEG video compression. The algorithm was implemented both in software and on FPGA. For each event, it computes the motion direction as one of 9 directions. The speed of the motion is set by the sample interval. Results show that the Average Angular Error can be improved by 30% compared with previous methods. The OF can be calculated on FPGA with a 50 MHz clock in 0.2 µs per event (11 clock cycles), 20 times faster than a Java software implementation running on a desktop PC. Sample data shows that the method works on scenes dominated by edges, sparse features, and dense texture.
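The per-event matching step described in the abstract can be sketched as follows: events are accumulated into binary time slices, and for each new event the block around it in the current slice is compared, by sum of absolute differences (SAD), against the 9 one-pixel-shifted candidate blocks in the previous slice. This is a minimal illustrative sketch, not the paper's implementation; the block radius, slice format, and tie-breaking rule are assumptions.

```python
import numpy as np

def block_match_flow(past, curr, x, y, r=3):
    """Estimate motion direction for an event at (x, y).

    `past` and `curr` are 2-D binary event-count slices accumulated over
    successive sample intervals. The (2r+1)x(2r+1) block around the event
    in `curr` is compared (SAD) against the 9 candidate blocks in `past`
    shifted by one pixel in each direction; the best match gives one of
    9 motion directions. Parameters here are illustrative assumptions.
    """
    ref = curr[y - r:y + r + 1, x - r:x + r + 1].astype(int)
    best_sad, motion = None, (0, 0)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            cand = past[y + dy - r:y + dy + r + 1,
                        x + dx - r:x + dx + r + 1].astype(int)
            sad = np.abs(ref - cand).sum()
            if best_sad is None or sad < best_sad:
                # Best match lies at offset (dx, dy) in the past slice,
                # so the feature moved by (-dx, -dy) between slices.
                best_sad, motion = sad, (-dx, -dy)
    return motion  # direction; speed is fixed by the slice interval
```

The speed is not estimated per event: as the abstract notes, a one-pixel displacement over a known sample interval fixes the speed, so only the direction (one of 9, including zero motion) is searched.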



Additional indexing

Item Type: Conference or Workshop Item (Paper), original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Language: English
Event End Date: 6 January 2017
Deposited On: 23 Feb 2018 09:45
Last Modified: 14 Mar 2018 18:01
Publisher: Proceedings of IEEE International Symposium on Circuits and Systems (ISCAS) 2017
Series Name: Proceedings of ISCAS 2017
OA Status: Closed
Free access at: Official URL. An embargo period may apply.
Publisher DOI: https://doi.org/10.1109/ISCAS.2017.8050295
Official URL: http://ieeexplore.ieee.org/document/8050295
