EKLT: Asynchronous, Photometric Feature Tracking using Events and Frames (IJCV'19)

5,742 views

UZH Robotics and Perception Group


This work presents EKLT, a feature tracking method that leverages the complementarity of event cameras and standard cameras to track visual features with low latency. Event cameras are novel sensors that output pixel-level brightness changes, called "events". They offer significant advantages over standard cameras: a very high dynamic range, no motion blur, and latency on the order of microseconds. However, because the same scene pattern can produce different events depending on the motion direction, establishing event correspondences across time is challenging. By contrast, standard cameras provide intensity measurements (frames) that do not depend on motion direction. Our method extracts features on frames and subsequently tracks them asynchronously using events, thereby exploiting the best of both types of data: frames provide a photometric representation that does not depend on motion direction, while events provide low-latency updates. In contrast to previous works, which are based on heuristics, this is the first principled method that uses raw intensity measurements directly, based on a generative event model within a maximum-likelihood framework. As a result, our method produces feature tracks that are more accurate than the state of the art across a wide variety of scenes.
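To make the generative event model concrete, below is a minimal Python sketch of the core idea. This is not the released EKLT code, and the helper names (accumulate_increment, predicted_increment, estimate_flow) are hypothetical. Events integrated over a short window approximate a brightness-increment patch dL(x) ≈ −∇L(x)·v Δt, where ∇L comes from the frame; the flow v (in the full method, a feature warp is estimated jointly with it) can then be recovered by maximizing photometric agreement between the observed and predicted increments, here posed as a normalized least-squares problem.

```python
import numpy as np
from scipy.optimize import least_squares

def accumulate_increment(events, patch_center, r, C=0.1):
    """Sum signed contrast steps (+C / -C per event) into a brightness-increment
    patch dL over a (2r+1)x(2r+1) window around patch_center = (cx, cy)."""
    dL = np.zeros((2 * r + 1, 2 * r + 1))
    cx, cy = patch_center
    for x, y, _t, pol in events:  # each event: (x, y, timestamp, polarity)
        ix, iy = int(round(x - cx)) + r, int(round(y - cy)) + r
        if 0 <= ix <= 2 * r and 0 <= iy <= 2 * r:
            dL[iy, ix] += C if pol > 0 else -C
    return dL

def predicted_increment(grad_x, grad_y, v, dt):
    """Generative event model (linearized brightness constancy):
    dL(x) ~= -grad L(x) . v * dt, with grad L taken from the frame."""
    return -(grad_x * v[0] + grad_y * v[1]) * dt

def estimate_flow(dL_events, grad_x, grad_y, dt, v0=(1.0, 0.0)):
    """Find the flow v that best aligns the predicted increment with the one
    observed from events. Both patches are normalized, so the estimate does
    not depend on the (unknown) contrast threshold C."""
    obs = dL_events / (np.linalg.norm(dL_events) + 1e-9)

    def residual(v):
        pred = predicted_increment(grad_x, grad_y, v, dt)
        pred = pred / (np.linalg.norm(pred) + 1e-9)
        return (obs - pred).ravel()

    return least_squares(residual, np.asarray(v0, dtype=float)).x
```

In the full tracker, an optimization of this kind runs asynchronously for each feature whenever a new batch of events arrives at its patch, which is what yields the low-latency track updates described above.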
Reference:
Daniel Gehrig, Henri Rebecq, Guillermo Gallego, Davide Scaramuzza.
EKLT: Asynchronous, Photometric Feature Tracking using Events and Frames.
International Journal of Computer Vision (IJCV), Aug. 2019.
PDF: rpg.ifi.uzh.ch/...
Source code of the EKLT tracker: github.com/uzh...
Evaluation code: github.com/uzh...
Our research page on event-based vision: rpg.ifi.uzh.ch/...
For event-camera datasets and an event-camera simulator, see:
rpg.ifi.uzh.ch/...
Other resources on event cameras (publications, software, drivers, where to buy, etc.):
github.com/uzh...
Affiliations: D. Gehrig, H. Rebecq, G. Gallego, and D. Scaramuzza are with the Robotics and Perception Group, Dept. of Informatics, University of Zurich, and Dept. of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland. rpg.ifi.uzh.ch/

Comments: 1
@roohollakhorrambakht8104, 5 years ago:
It would be a dream come true joining you guys.