TimeLens: Event-based Video Frame Interpolation (CVPR 2021)

12,831 views

UZH Robotics and Perception Group

3 years ago

State-of-the-art frame interpolation methods generate intermediate frames by inferring object motions in the image from consecutive key-frames. In the absence of additional information, first-order approximations, i.e. optical flow, must be used, but this choice restricts the types of motions that can be modeled, leading to errors in highly dynamic scenarios. Event cameras are novel sensors that address this limitation by providing auxiliary visual information in the blind-time between frames. They asynchronously measure per-pixel brightness changes with high temporal resolution and low latency. Event-based frame interpolation methods typically adopt a synthesis-based approach, where predicted frame residuals are directly applied to the key-frames. However, while these approaches can capture non-linear motions, they suffer from ghosting and perform poorly in low-texture regions with few events. Thus, synthesis-based and flow-based approaches are complementary. In this work, we introduce Time Lens, a novel method that leverages the advantages of both. We extensively evaluate our method on three synthetic and two real benchmarks, showing up to a 5.21 dB improvement in PSNR over state-of-the-art frame-based and event-based methods. Finally, we release a new large-scale dataset in highly dynamic scenarios, aimed at pushing the limits of existing methods.
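To make "per-pixel brightness changes" concrete: an event camera fires an event at a pixel whenever the log-intensity there changes by more than a contrast threshold. Below is a minimal sketch of this standard event-generation model (the same idea used by event camera simulators such as ESIM), assuming grayscale frames as NumPy arrays; the function name and the threshold `C` are illustrative, not from the TimeLens code:

```python
import numpy as np

def events_between_frames(frame0, frame1, t0, t1, C=0.2):
    """Approximate the events an event camera would fire between two
    grayscale frames, using the log-intensity threshold model.

    Returns a list of (x, y, t, polarity) tuples. Timestamps are
    linearly interpolated between t0 and t1 -- a crude stand-in for
    the camera's true microsecond-resolution, asynchronous output.
    """
    eps = 1e-3  # avoid log(0)
    log0 = np.log(frame0.astype(np.float64) + eps)
    log1 = np.log(frame1.astype(np.float64) + eps)
    diff = log1 - log0

    events = []
    ys, xs = np.nonzero(np.abs(diff) >= C)
    for y, x in zip(ys, xs):
        n = int(abs(diff[y, x]) // C)         # number of threshold crossings
        pol = 1 if diff[y, x] > 0 else -1     # brightness up or down
        for k in range(1, n + 1):
            t = t0 + (t1 - t0) * k / (n + 1)  # spread events over the interval
            events.append((x, y, t, pol))
    return events
```

A real sensor reacts continuously rather than comparing two snapshots, which is why, as the abstract notes, it sees into the blind-time between key-frames instead of merely between them.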
Reference:
Stepan Tulyakov*, Daniel Gehrig*, Stamatios Georgoulis, Julius Erbach, Mathias Gehrig, Yuanyou Li, Davide Scaramuzza (* denotes equal contribution).
TimeLens: Event-based Video Frame Interpolation
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, 2021
PDF: rpg.ifi.uzh.ch/docs/CVPR21_Geh...
Webpage: rpg.ifi.uzh.ch/timelens
Code and Datasets: github.com/uzh-rpg/rpg_timelens
Slides: rpg.ifi.uzh.ch/timelens/slides...
Our research page on event based vision: rpg.ifi.uzh.ch/research_dvs.html
For event-camera datasets, see here:
rpg.ifi.uzh.ch/davis_data.html
and here: github.com/uzh-rpg/event-base...
For an event camera simulator: rpg.ifi.uzh.ch/esim/index.html
For a survey paper on event cameras, see here:
rpg.ifi.uzh.ch/docs/EventVisio...
Other resources on event cameras (publications, software, drivers, where to buy, etc.):
github.com/uzh-rpg/event-base...
Affiliation:
D. Gehrig, M. Gehrig, and D. Scaramuzza are with the Robotics and Perception Group, Dept. of Informatics, University of Zurich, and Dept. of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland. rpg.ifi.uzh.ch/
Stepan Tulyakov, Stamatios Georgoulis, Julius Erbach and Yuanyou Li are with Huawei Zurich Research Center.

Comments: 29
@theslowmoguys · 3 years ago
Well we had a good innings, lads. 😂
@bcmason1911 · 3 years ago
lmaooooooooo
@SirWrender · 2 years ago
omg hahaha Gavin! I was wondering if you'd see this
@danielgehrig9009 · 2 years ago
Don't worry, there is always more work to do :)
@turgor127 · 3 years ago
This is gonna be in 2 minute papers.
@BUDA20 · 3 years ago
is now ;)
@FloraSora · 3 years ago
Nice.
@James-wd9ib · 3 months ago
what a time to be alive
@irlshrek · 3 years ago
Man. Obviously this could be turned into an application that you feed a video into and it spits out a high refresh rate version. Incredible
@silvahe · 1 year ago
I was wondering how you align the normal camera and the event camera so they see the same scene.
@FloraSora · 3 years ago
I need an in-depth tutorial for this so badly!!! I'm dying to use this.
@detroxlp1 · 3 years ago
Same for me. English is not my native language, and I don't understand how to use it with my own videos.
@bruce_luo · 3 years ago
Hope we can have event-based cameras as easy as webcams someday.
@wayoftradingmalayalam2624 · 2 years ago
Bro, did you figure it out? Put that in Google Colab? Any tutorial out there on YouTube?
@Upscaled · 3 years ago
Great video, looks awesome
@kylebowles9820 · 2 years ago
nice work!
@cahydra · 3 years ago
It would be nice if this was released to the public, or, if it already is, made more user-friendly, since it would be amazing to try this.
@dmalyavin · 3 years ago
There is already a GitHub repo for it, and you can request access to their training data and training code. However, note that you can't run this on a normal video, as far as I can see: their process requires an additional data feed captured by their event camera. If anyone knows how to either generate or capture this type of inter-frame event information (as far as I understood, it's just per-pixel brightness changes), it would be cool to find out. Thanks!
@FloraSora · 3 years ago
@@dmalyavin Waait so I can't feed in my videos even if I jumped through all the hoops to get the github code to run? Aaaaaagghhh
@NilasEdits · 3 years ago
​@@dmalyavin So this won't be a replacement to Twixtor.. and here I was getting my hopes up :'(
@Ardeact · 2 years ago
I thought DAIN was impressive, but the fact that there are basically no artifacts on the moving image is amazing.
@reel60frames45 · 2 years ago
My videos are a reference for studying these issues (artifacts from highly dynamic motion).
@skk6811 · 2 years ago
This is optical flow on steroids. Someone should port this to run on the Apple M1 chip; it would rock.
@bob2859 · 2 years ago
Amazing! Now I just need a Prophesee Gen4M to fall off a truck...
@Kaapalkeens · 3 years ago
Oh man, really great technology. But apparently it only works with special hardware. Great, nonetheless!
@roidroid · 3 years ago
It would be interesting to gradually reduce the framerate of the RGB camera, to find the limits of *how much* of the missing data your system is capable of reconstructing. I'm curious how FEW keyframes are truly required to get reliable results. Or maybe a system could automatically request an RGB keyframe only as required, and only for the necessary part of the image rather than the entire frame.
@danielgehrig9009 · 3 years ago
That's a really cool idea! In our paper (linked in the description) we go down to 5 FPS and are still able to interpolate the video reasonably well. However, I would not recommend going any lower than that. When using events you can indeed do something like "adaptive slow-mo", where you only request keyframes when there is motion (enough events). This makes the approach much more efficient!
@danielgehrig9009 · 3 years ago
@Haneesh Allu I guess you can always collect a fixed number of events, which gives you an adaptive framerate. Since the fastest-moving object triggers the most events, the framerate should always adapt to that object; slower-moving objects are then easier to handle.
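The fixed-event-count idea in the reply above can be sketched as follows. This is a minimal illustration, assuming events are (x, y, t, polarity) tuples sorted by timestamp; `batch_by_count` and `n_per_batch` are hypothetical names, not from the TimeLens codebase:

```python
def batch_by_count(events, n_per_batch):
    """Group a time-sorted event stream into fixed-size batches.

    Fast motion triggers many events, so batches span short time
    windows there and long windows where the scene is static --
    an adaptive "framerate" that follows the fastest-moving object.
    Trailing events that don't fill a full batch are dropped.
    """
    batches = []
    for i in range(0, len(events) - n_per_batch + 1, n_per_batch):
        batch = events[i:i + n_per_batch]
        batches.append({
            "events": batch,
            "t_start": batch[0][2],   # timestamp of first event in batch
            "t_end": batch[-1][2],    # timestamp of last event in batch
        })
    return batches
```

Each batch's (t_start, t_end) window then tells you where an interpolated frame (or a new keyframe request) would be most useful.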
@megazenn22 · 3 years ago
uploaded at 25fps 🤣🤣🤣🤣🤣