Amazing AR Effects Are Coming!

206,896 views

Two Minute Papers

A day ago

❤️ Check out Weights & Biases and sign up for a free demo here: www.wandb.com/...
Their mentioned post is available here:
app.wandb.ai/l...
📝 The paper "Consistent Video Depth Estimation" is available here:
roxanneluo.git...
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nader S., Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh.
More info if you would like to appear here: / twominutepapers
Károly Zsolnai-Fehér's links:
Instagram: / twominutepapers
Twitter: / twominutepapers
Web: cg.tuwien.ac.a...

Comments: 460
@youtube_fantastic 4 years ago
Nothing like the smell of a fresh Two Minute Paper!
@chrisray1567 4 years ago
I love the smell of wood pulp in the morning!
@NubNublet 4 years ago
Especially one that isn't more fluid simulation lol
@ZorgKirill 4 years ago
that runs for 5 minutes xD
@DwAboutItManFr 4 years ago
Imagine the smell two papers down the line?
@phiro4305 4 years ago
Nothing like the smell of a depth-mapped cat
@blenderguru 4 years ago
Holy cow! Realtime depth information could be used for overlaying information for all sorts of displays: classroom learning, live events, concerts, task-specific assistance. Might be a big leap forward for AR. Exciting tech.
@brandonporter6223 4 years ago
Gonna be a crazy decade
@thecrinjemasterjay 4 years ago
Woah, it's the guru himself!
@sqworm5397 4 years ago
That "dear fellow scholars" hits hard every time
@joancaic2853 4 years ago
What a time to be alive!
@tempahp 4 years ago
Hold on to your papers!
@aliasd5423 4 years ago
*happy fellow scholar noises
@CellularInterceptor 4 years ago
it's their trademark
@peskarr 4 years ago
Let's add one more paper to this little one, because everybody needs a friend. The little neural network has to have a place to sit there. There he goes...
@WangleLine 4 years ago
*excited paper holding*
@keyboard1819 4 years ago
Hey, you are a Jonas Tyroller viewer. RIGHT!!?
@DuckTheFinn 4 years ago
You can't expect us to keep a grip on these papers if you keep showing stuff like this!
@alexcrespo3252 4 years ago
My papers all emigrated to Nigeria to start a new life
@Metaloid-wv4kz 4 years ago
I had to order in a dump truck.
@b3nsu 4 years ago
The 2010s were the decade of the smartphone; the 2020s will be the decade of AR and VR.
@tiavor 4 years ago
@nasolem we can only hope and wait for a real SAO.
@laurenz1337_ 4 years ago
And the 2030s will be the decade of Neuralink and brain interfaces
@Vaeldarg 4 years ago
@@cybercrazy1059 Hate to break it to you, but "mind upload" doesn't mean what you think it means. When you upload a file, it doesn't disappear from your computer. It simply creates a new file at the new location that is a copy of the original information. However, full-sensory simulation for virtual environments just needs how the brain uses neural transmitters/inhibitors to be figured out for that to be achieved.
@turkosicsaba 4 years ago
As we can see in the video above, we will always find ways to add cats to our new ARs and VRs.
@Vaeldarg 4 years ago
@@nagualdesign Only an actual baby would be bothered by such an obvious troll.
@Jackp2003 4 years ago
I get so excited whenever you upload!
@eenvleugjegoeiegames 4 years ago
Your videos always give me the motivation to keep working on my AI degree with enthusiasm, so a huge thanks for that!
@TwoMinutePapers 4 years ago
Absolutely amazing, kind thanks for sharing this!
@boujeejams3086 4 years ago
Could you share some AI resources for people who aspire to do it too, please?
@tobi6758 4 years ago
The next iPhone is expected to have a "LiDAR" sensor. Wonder if that will give those "perfect" AR effects.
@JBB685 4 years ago
We're getting into some really exciting territory here!
@Navhkrin 4 years ago
It most certainly should, unless Apple messes up somehow. In my opinion, the future is going to be about combining a very cheap (sparse) but accurate LiDAR with a depth estimator for filling in the gaps; it gives us precise values to work with. Lack of reference points is the biggest problem with monocular depth estimation.
@cunty 4 years ago
Doubt it. The iPad's LiDAR sensor, which I'm assuming is the same one going into the iPhone, is not nearly as fine on the details as Face ID. The dots the LiDAR projector projects are spaced pretty far apart, which leaves the iPad to fill in the rest of the info.
@DamianReloaded 4 years ago
@@cunty I'm confident higher resolution cameras and stronger parallel processors will enable NNs to do this from video only.
@dykam 4 years ago
@SCUUBE Got reminded of the same. My tries with ARCore didn't have flickering, though it did sometimes take some time to discover the depth in certain areas. There's an ARCore Labs app in the store to try it out.
@georgianfishbowl170 4 years ago
Every video you make blows me away! These advances are crazy and it's so great to see these things happening RIGHT NOW! Thanks for making these videos and bringing them to our attention.
@elammertsma 4 years ago
It'll be exciting to see this get sped up to support real-time depth mapping. From the paper, it took 40 minutes to process a video of 244 frames (approx. a 9-second video on most devices), so there's quite a bit of work necessary to get to the AR stage, but these results are already incredibly impressive. Now it's going to be all about speeeeeed!
@jameshughes3014 4 years ago
How do your videos so consistently blow my mind? You have a true talent for presenting dry academic information in a way that makes it exciting and understandable. Thank you for what you do.
@DriesduPreez 4 years ago
No way? That depth solve is so crisp and consistent, and that's all without any depth sensor or additional camera? Man, I can only imagine what this means for VR and AR.
@SamDutter 4 years ago
Amazing!!! This probably has tons of applications for photogrammetry as well.
@rasp1628 4 years ago
Maybe Google Street View could use this to make more detailed 3D buildings?
@luck3949 4 years ago
This isn't something radically new; it's an improvement on existing methods. Making 3D models out of images has been possible since 2009; find the "Building Rome in a Day" paper and video. So I guess Google Street View isn't 3D not for technical reasons, but because their management doesn't want to bother making it in 3D.
@rasp1628 4 years ago
@@luck3949 that's cool, I wonder what's possible with today's technology🤔.
@luck3949 4 years ago
@@rasp1628 I don't know, as I don't monitor this field. Look at LSD-SLAM if you want to see something impressive, but it's 5 years old.
@tendermoisturized4199 4 years ago
@@luck3949 Well, yes, but improvements in the process could potentially make it more cost-effective and worth it for Google, especially if you can teach an AI to just run through the preexisting library of images and produce decent results on its own.
@MrRobotrax 4 years ago
@@luck3949 it's already 3d
@neoqueto 4 years ago
The glowing particles effect... holy crap, this method comes with re-lighting FREE OF CHARGE!
@sanboxengine 4 years ago
I've been following the Two Minute Papers channel for quite some time; I think it's been a few years since I found the channel and fell in love with the content. Even though I know the intro says "Dear fellow scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér.", because I've heard and read it so many times, I still hear the intro as if it says "Dear fellow scholars, this is too many papers with Carlos Jonas Ifahir". Please, tell me I'm not the only one xD
@NosAltarion 4 years ago
I found your channel by pure luck and god damn, that's been the best subscription I ever made to a youtube channel. I can't express how much most of your videos just blew my mind. And the presentation is perfect: bite-sized yet explained so clearly. Thank you for your content.
@RavenDuran231 4 years ago
"What a time to be alive!" - I always loved that line. I can feel the sheer passion! :D
@pladselsker8340 4 years ago
Very, very, very cool! Can't wait to see the follow-up papers
@sabofx 4 years ago
Excellent! I cannot wait for somebody to code this into Adobe After Effects.
@moby_vyk 4 years ago
I miss the times when you also explained the papers :( From what I understand from their video: kzbin.info/www/bejne/a4XMkmWll9F9d8k , it's not real-time. It takes a video as input and, by taking a pixel from 2 random frames and trying to estimate its depth, you get 2 approximations. The difference between these 2 is the error that is then backpropagated through a network that, after doing this a lot of times, will end up giving a much better and more consistent approximation.
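The pairwise consistency idea described in the comment above can be sketched as a toy loss in NumPy. All numbers and the simple inverse-depth (disparity) form here are illustrative assumptions; the paper's actual geometric loss also includes an image-space reprojection term.

```python
import numpy as np

# Toy sketch of the depth-consistency idea: the same scene point, observed in
# two different frames, should receive the same estimated depth.
def disparity_consistency(depth_i: np.ndarray, depth_j: np.ndarray) -> float:
    """Mean L1 difference in inverse depth over corresponding pixels."""
    return float(np.mean(np.abs(1.0 / depth_i - 1.0 / depth_j)))

# Hypothetical depth estimates (meters) for three corresponding pixels.
d_i = np.array([2.0, 4.0, 8.0])
d_j = np.array([2.2, 3.8, 8.5])

# This scalar error is what would be backpropagated to fine-tune the network.
print(round(disparity_consistency(d_i, d_j), 4))  # → 0.022
```

Working in inverse depth (disparity) rather than raw depth is a common choice because it weights nearby points, where errors are most visible, more heavily than distant ones.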
@HarryHeck2020 4 years ago
two papers down the line...
@MOOBBreezy 4 years ago
Yeah, that's what I started thinking. I noticed that none of the examples shown were real-time, so this wouldn't work as well in driving AI.
@SillyMakesVids 4 years ago
What happens for moving objects?
@НикитаКотенко-ц8э 4 years ago
@@MOOBBreezy I think you still would be able to train an AI on the results.
@martinsmolik2449 4 years ago
I still think that this will be the future of CGI tho
@JoakimMoesgaard 4 years ago
I love keeping myself updated with your videos. Thank you.
@majorjohnson8001 4 years ago
This is exactly what I want/expect from a device like the Microsoft HoloLens. Mind, the dev kit I worked with back in 2016 was really pretty good for the form factor (it would scan the space around it and produce a mesh from a voxelized space using 5cm voxels), but it couldn't handle real-time changes to that environment. Things that didn't move (floor, ceiling, walls, furniture) would be generally stable, but if you were unlucky enough to have a person walk by, you'd have a physics ghost of them for about five minutes before that region was rechecked. But it could not see glass at all (and I suspect that will always be a problem) and I never got a chance to see how it dealt with mirrors.
@emrahyalcin 4 years ago
Thank you, W&B. Because of you, we can watch this channel. Thank you, really.
@muizzy 4 years ago
Hi Károly, I've been following this channel and the general AI field for a few years now, but since I finished school and started working, I've really started to do something with it in my free time and apply the concepts I've been hearing about so much. I would love your insight on where to go to continue my learning, especially looking towards network architectures.
@fureversalty 4 years ago
2:34 At first I was kind of skeptical about the accuracy of the depth effect, until I saw that. The way the water refracts the fire hydrant is so cool.
@misaalanshori 4 years ago
I'm really excited for this, not really for the AR effects, but accurate depth maps from videos mean smartphone cameras could finally record videos with bokeh. There is portrait mode for photos, but there is no good version for videos yet (I think)
@un1b4ll 4 years ago
I'd love to see a video about your deal with W/B and learn about how partnerships and ownership circularly benefit the content creation, W/B, yourself, and your viewers.
@axiom1650 4 years ago
The flickering ball on the table looks more like z-fighting to me. Can't wait for your take on GPT-3!
@somethingthatpops 4 years ago
Imagine how this could be used with art sculpting software, where you can reach your hands inside an object and have them be properly occluded (or not occluded) based on this depth map. I had thought this would only be possible using something like LiDAR or a Lytro camera, but it looks like it'll work for any camera soon enough! AR is awesome
@mm-rj3vo 4 years ago
Holy CRAP, I cannot WAIT for this to be implemented into XR, AND driverless cars!
@degiguess 4 years ago
One of the things I'm most excited for with these depth algorithms is the idea of 360-degree video being able to have depth information so it can be displayed properly in VR. Imagine being able to watch live events in proper 3D VR like you're actually there.
@Uhfgood 4 years ago
I don't really understand half this stuff, but it's really cool to watch. It means automatic rotoscoping, so VFX artists don't have to do so much work roto-ing stuff by hand.
@thedrunknmunky6571 4 years ago
You read my mind! Just a few hours ago (I swear) I was thinking of having to search online to find an algorithm to get depth maps from a video stream. Although I can't use it yet for my project (as the depth maps are not detailed enough yet), imagine how detailed and fast it will be a few papers down the line! I can't wait!
@YusiDJordan 4 years ago
This will be incredible for VFX artists like myself. God, the future looks beautiful.
@luis96xd 4 years ago
Wow, these effects are so amazing! Thanks for sharing this with us 😄
@kingpet 4 years ago
Shout out to the cat for being the patient subject of this video!
@benshakespeare268 4 years ago
That's amazing. I work with images every day and I can't imagine improving the result any further... without including extra data sources, that is.
@MrMCKlebeband 4 years ago
That this works so well is beyond nutty.
@Bos_Meong 4 years ago
that cat is so adorable by the way
@FlyingBanana78 4 years ago
There is another relatively new app called CamTrackAR on the Apple App Store that is free and auto-tracks footage recorded on an iPad or iPhone, creating a camera solve that can be used in Blender. The free version allows one point to be used and the paid version allows more than one, but one can still get some really nice results.
@dissonanceparadiddle 4 years ago
YES!!! you humans are doing so good! Keep going
@EddyKorgo 4 years ago
2:41 neural RGB - that's how I see images in my head when I have a headache. That flickering.
@debangan 4 years ago
"Hold on to your papers" Damn! That was a nice one
@mcantisnake 4 years ago
Nice video! Wish you success; keep it simple like you do and people will come back!
@notnullptr 4 years ago
What a time to be alive!
@Tailslol 4 years ago
A binocular camera and temporal frame-by-frame cleaning would help this a lot.
@HarhaMedia 4 years ago
Having worked with AR stuff, I can say that this looks amazingly promising.
@confuseatronica 4 years ago
2 more cats down the line, the picture will be even fuzzier
@EpicVideoClips101 4 years ago
Snapchat: I own this now
@somethingwithbungalows 4 years ago
Who are you?
@KelvenOne 4 years ago
@@somethingwithbungalows Joe
@EpicVideoClips101 4 years ago
Something with Bungalows joe mama
@somethingwithbungalows 4 years ago
Joe okay. understandable. have a nice day, Joe Mama.
@nolifeonearth9046 4 years ago
a new era for "here in my garage"-style videos.
@MAJ0RTOM 4 years ago
Can't wait to see the applications that this technology will have in "that" industry.
@tendermoisturized4199 4 years ago
We're dangerously close to neural AI-generated context-aware AR catgirl hentai directly in your living room and I couldn't be more excited.
@pd8613 4 years ago
Tender & Moisturized "what a time to be alive!"
@MrSpektyr 4 years ago
I'm... too ignorant, to say the least, to understand the bulk of this, but the way you explain things makes it easier to understand. The video graphics (another topic I wish I was good at, but my skill set is lacking) are amazing as well.
@DavidMcCoul 4 years ago
What an exciting time to be alive!
@beaconofwierd1883 4 years ago
This is going to completely replace bluescreen in the film industry in a few years. No more blue shining lights, no expensive LED background screens; just put up a neutral gray screen as a background, or just use any old room. 1000-budget homemade films could have the same visual effects as today's blockbusters :O
@angledcoathanger 4 years ago
That's amazing. I'm glad I was holding onto my papers.
@imaUFO672 4 years ago
This could speed up the process of adding visual effects in movies drastically
@vestlen 4 years ago
I love your videos! Could you do an update to your How To Get Started With Machine Learning video? It's been 4 years and so much has changed!
@fulanodetaldoorkut 4 years ago
Now I want to see what that can do for self-driving cars, since it is much better than previous methods.
@asilserhan685 4 years ago
Try it out on optical illusions regarding depth perception. It would be interesting if the models were superhuman on those.
@monad_tcp 4 years ago
Two Minute Papers, I almost hear "too many papers". Of course, there's never too many papers!!
@Axewayboy 4 years ago
Beautiful. Add some sensors (optional, so it can run without them) to the input of that and create a perfect 3D world for a car or machine. Imagine now mixing 2 cameras for a robot: a cheap 3D world.
@MichaelJONeill333 4 years ago
This. Is. AMAZING!
@TheSparrowLooksUp 4 years ago
That cat's like "wtf are u doin'?"
@adek445 4 years ago
I would propose using a method with 2 side-by-side cameras. If we want "human-like" results, the best and easiest way would be to make it work like a human: with a pair of eyes. I know there are stereoscopic camera systems, but I wonder if we could achieve that with smartphone cameras.
@noway2831 4 years ago
Do the learning algorithms make use of the continuous video feed? I think it's much easier to perceive depth when the camera is moving - like how we have two eyes. Two different perspectives, or in the case of a video feed many perspectives per second, would (/do?) improve this vastly.
@bobsmithers4895 4 years ago
This is so fricking cool!
@TrogdorBurnin8or 4 years ago
I'm academically familiar with trying to apply monocular structure from motion. I feel like if you have the baseline for multiple cameras (eg on opposite sides of a pair of goggles), absolutely use that baseline. It would be a waste not to. Whatever you can do by learning a model for how things are supposed to look in 3D from a 2D image, you can do better by applying those same learning processes to two or four simultaneous parallax 2D images. Add in a depth cam if you've got it. The more sensors the better.
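The point about a known camera baseline in the comments above can be illustrated with the standard rectified-stereo triangulation formula, which is what monocular depth estimation has to do without. This is a toy sketch with made-up numbers, not anything from the paper under discussion.

```python
# Toy stereo triangulation for a rectified pinhole camera pair:
# depth (m) = focal length (px) * baseline (m) / disparity (px).
# With a known baseline, depth is determined geometrically; a monocular
# method must instead learn this scale from data.
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 0.5 m baseline, 100 px disparity.
print(stereo_depth(1000.0, 0.5, 100.0))  # → 5.0 (meters)
```

Note how depth grows as disparity shrinks, which is why distant points (tiny disparity) are where stereo rigs lose precision and where learned priors can help most.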
@fqidz 4 years ago
Holy shit this is groundbreaking
@imjody 4 years ago
Looks like it's going to be a LOT sooner than expected before we start seeing clips made by everyday people (not just movie studios) that look 100% real, but are in fact 50% (or more) fake/rendered. Although kind of scary for people such as presidents, other powerful people and famous persons, I'm looking forward to all of this very much for other reasons, such as movies, video games and visuals. This is honestly absolutely incredible. Thanks as always for sharing! 😁
@SandroSanakoti 4 years ago
I wonder what impact the addition of LiDAR data captured along with the video could have. It seems like that's the kind of device that will ship with the new generation of smartphones, so it's worth considering.
@johnedwardtaylor 4 years ago
"Bliss was it in that dawn to be alive, But to be young was very heaven!" -Wordsworth, on the French Revolution (www.poetryfoundation.org/poems/45518/the-french-revolution-as-it-appeared-to-enthusiasts-at-its-commencement)
@JamesJazzz 4 years ago
This really reminds me of that 2012 game Kinect Party. Just makes you wonder how these experiences would look with current tech.
@Scio_ 4 years ago
I want this in ARCore, last week!
@dixinormus5143 4 years ago
Holy, hold my paper, this is insane! The rate of improvement is just nuts, fuck yeah :)
@DamianReloaded 4 years ago
This is really outstanding.
@Tondadrd 4 years ago
2^7th!! This paper blew my mind so hard. I attended a class about computer vision for a semester, and vision is so hard and complex (not to learn, to do at all)! Even with depth cameras! I don't really care about video effects; it would be so amazingly useful without them altogether!
@connormichalec 3 years ago
Absolutely incredible
@felix-ht 4 years ago
One thing to note is that the system does not work well for augmented reality / real-time applications; it's much more suited for post-processing. The reason for this is twofold: firstly, the method from the paper uses the entire video, including frames from the "future", which renders it unusable for real-time applications such as AR. Secondly, the method requires fine-tuning at test time. This is pretty expensive, and could not be done on mobile devices or be used for real-time applications, excluding AR again.
@-Blender 4 years ago
😲 😲 Wowoahwewow. What a time to be alive indeed 😲 😲
@besknighter 4 years ago
IMHO, the quality of their results is already pretty usable for almost all consumer products!
@shan_singh 4 years ago
everyone: hold my beer. 2minpaper: hold my paper
@Lucibus 4 years ago
0:12 "If we had the time, patience and skill... "
@katakana1 4 years ago
Finally I can download AR apps for my measly iPhone 5S
@jascrandom9855 4 years ago
This would be amazing for VFX. Blender so needs this.
@adrianvasquez4351 4 years ago
That cat is adorable!
@finnaustin4002 4 years ago
I wonder if this could be combined with an actual depth camera / stereoscopic cameras for higher performance
@SHCreeper 4 years ago
I think one important aspect that was left out is the time it takes to calculate these depth maps. If I remember correctly, it was around 20 minutes per frame.
@lost4468yt 4 years ago
You mentioned there were methods to get the depth in VR and computer games at the start? But why on earth would you need to? Both rasterization and ray tracing directly give you the exact numerical value of the depth... In fact, most modern rendering pipelines are extremely dependent on getting that information. Maybe I misunderstood you, but I can't think of a single reason why you would need some technique and algorithm when you pretty much get the depth directly from the rendering.
@stepansigut1949 4 years ago
I read the paper and had one key takeaway: 244 frames (I assume a max 10 s video clip) took 40 minutes to process, and all the frames need to be available ahead of inference. Online processing is unfortunately still miles away. :-(
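The throughput arithmetic in the comment above can be checked directly. The 244-frame count and 40-minute figure come from the comment; the 24 fps frame rate is an assumption on my part.

```python
# Rough throughput arithmetic for the quoted figures: 244 frames,
# 40 minutes of processing, assumed 24 fps source clip.
frames = 244
fps = 24
processing_s = 40 * 60

clip_s = frames / fps                 # length of the source clip in seconds
sec_per_frame = processing_s / frames # compute time per frame
slowdown = processing_s / clip_s      # how many times slower than real time

print(round(clip_s, 1), round(sec_per_frame, 1), round(slowdown))
```

So the method spends roughly ten seconds of compute per frame, i.e. it runs a couple of hundred times slower than real time, which is why commenters conclude it is a post-processing tool rather than a live AR one.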
@KuraIthys 4 years ago
Well, this is impressive... This should be very interesting in the future...
@alfred4264 4 years ago
Imagine if we had complete images of Mars and used this to identify the depths of craters and hills; we could have a good Street View of Mars.
@dshlai 4 years ago
Nice, but you should also read the limitations section
@kevalan1042 4 years ago
Dr! First time I'm hearing the title, congrats (maybe it was a long time ago?)
@VaradMahashabde 4 years ago
I thought this was how AR worked already! Any news to learn about current AR implementations?
@mascuudsaid9791 3 years ago
"What a time to be alive" is in Two Minute Papers videos
@AlexeySeverin 4 years ago
It looks way more precise than what we get in ARCore and ARKit... Will you release this as an SDK? Or at least the code?
@ProfessionalTycoons 4 years ago
this is such a breakthrough
@kylebowles9820 4 years ago
Wow, that's good depth quality; better than stereo depth sensors! Watch out, Google Depth API :)