Depth Camera - Computerphile

  243,172 views

Computerphile

1 day ago

Depth can be a useful addition to image data. Mike Pound shows off a RealSense camera and explains how it can help with deep learning.
/ computerphile
/ computer_phile
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments: 261
@nikanj 2 years ago
Ah, the Kinect. Such a massive failure as a gaming peripheral, but pivotal in so many computer vision research/DIY projects.
@MINDoSOFT 2 years ago
And even freelance production projects! As part of a team I've created one game with Kinect v1, and another one with Kinect v2. What a great piece of hardware.
@glass1098 2 years ago
@@MINDoSOFT Which ones?
@MINDoSOFT 2 years ago
@@glass1098 Hi! Unfortunately I don't have a portfolio page. But the first one was an air-hockey-style game where the player held a broom with an IR LED, which was detected via Kinect, and the players had to put the trash in the correct recycling bins. The other game was a penalty shootout game which detected the player's kick. :)
@xeonthemechdragon 2 years ago
I have three of the v2, and two of the v1
@JulesStoop 2 years ago
Kinect technology became Face ID in the iPhone and iPad. Not a failure at all: it provides very secure and just about invisible biometric authentication to about a billion people on a daily basis.
@Pystro 2 years ago
4:27 "I should put an artwork up or something." Take a depth-field picture of that wall, print it out and hang it back onto the wall. Now it's a piece of art!
@Checkedbox 2 years ago
@yefdafad I think you might have forgotten to switch Windows
@cussyplays 2 years ago
I just LOVE that he talks to the cameraman and not us, makes it so much more candid and easier to watch as a viewer!
@oskrm 2 years ago
- "Probably have to give it back" - "Oh no, it fell off... my car"
@Yupppi 2 years ago
Mike always has something exciting.
@smoothmarx 2 years ago
That comment at 2:41 was magic. Caught me red handed!
@maciekdziubinski 2 years ago
Alas, Intel discontinued the RealSense line of products. The librealsense library will still be maintained (if I'm correct), but no new hardware is going to be released.
@joels7605 2 years ago
I wish they'd maintain the L515 a little better. The 400 series seem to be well supported, but the 500 series is a vastly superior sensor.
@arcmchair_roboticist 2 years ago
There is still the Kinect, which actually works better in pretty much every way afaik
@joels7605 2 years ago
@@arcmchair_roboticist There is some truth to this. KinectV2 and V1 are both excellent. I think it's mostly down to a decade of software refinement though. From a hardware perspective the RealSense L515 should mop the floor with everything. It's a shame it was dropped.
@paci4416 2 years ago
Intel has discontinued some of the products, but the stereo cameras will continue to be sold (D415, D435i, D455) for sure. The librealsense library is still maintained (new release today).
@CrazyDaneOne 2 years ago
Wrong
@MmmVomit 2 years ago
I wonder what this might do with a mirror. I expect it would see the mirror as a "window" where there's a lot more depth, but I wonder how it would handle the weird reflections of the IR dots.
@meispi9457 2 years ago
Wow 🤯 Interesting thought!
@FlexTCWin 2 years ago
Now I’m curious too!
@260Xander 2 years ago
Someone needs to do this please!
@hulavux8145 2 years ago
It does not do well, really. Same with transparent objects.
@zybch 2 years ago
The dots necessarily spread out from the projector, so even if a mirror was placed perfectly perpendicular to their flight path barely any would reflect back in the right way to generate a coherent depth image.
@ajv35 2 years ago
I wish he would've done a more in depth explanation about the device. Like what data type is used for the depth field? Is it a 2D array of floating point values since depth can technically be infinite? Is it calibrated to only detect so far? Or does it use a variable-depth rate with a finite sized data type (like an integer, as in the other rgb fields) that adjusts the value according to the furthest object it senses?
@b4ux1t3-tech 2 years ago
So, thinking about it, it's likely that the RGB aspect is an integer or a fraction between 0 and 1. That's pretty common, and for RGB, those two are going to be functionally identical, since a computer is likely only going to be able to display in 24-bit color anyway. So, for the color, it probably doesn't matter, and it could go either way. The depth is probably a fraction between zero and one. That would allow you to map between the visible colors pretty accurately, and display a fine-grained depth map, which we see in the video. After all, you only need 32 million values, and the resolution of a 32-bit floating point between 0 and 1 gives you that reliably. Re: 2d array, I wouldn't be surprised if it's indexable as a 2d array in the API, but it's probably stored as a 1d array, since translating from coordinates to an index (and vice versa) is trivial. I don't know if that's actually what's going on, mind you, just making some assumptions based on similar technologies.
@Norsilca 2 years ago
I'll bet it's just an extra byte, just like R, G, and B are each 1 byte. 256 integers, maybe in a logarithmic scale so there's more precision for near values than far ones.
@b4ux1t3-tech 2 years ago
Keep in mind, you don't have to store colors as 24-bit (three byte) colors, that's just a convention because that's what most monitors support. If you're working with optical data, you may or may not be limited to a 24-bit color. For the depth, only having 256 "depth steps" seems _really, really_ restrictive.
@Norsilca 2 years ago
@@b4ux1t3-tech Yeah, I just meant the common 24-bit RGB format. 8 bits for depth could be too little, though I thought it might be enough to give the extra boost a neural net needs. You could easily do more bits. I was wondering if instead of inventing a new format they actually just produce a separate file that's a grayscale image for the depth. Then you can combine them yourself or just use the standard RBG image when you don't need depth.
@danieljensen2626 2 years ago
I imagine if you look up a manual it'll tell you.
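[Editor's note] For what it's worth on the thread above: RealSense depth frames are stored as 16-bit unsigned integers per pixel, with a per-device depth scale converting raw units to metres (typically 1 mm for the D400 series), rather than floats or a single byte. A minimal NumPy sketch of that convention, with the scale value assumed rather than queried from a real device:

```python
import numpy as np

# RealSense-style depth: each pixel is a 16-bit unsigned integer.
# Metres = raw value * depth_scale (about 0.001 for D400-series units,
# i.e. millimetre resolution and ~65 m of representable range).
DEPTH_SCALE = 0.001  # metres per unit; query the real value from the device

def depth_to_metres(raw: np.ndarray, scale: float = DEPTH_SCALE) -> np.ndarray:
    """Convert a uint16 depth image to float32 metres; 0 means 'no reading'."""
    return raw.astype(np.float32) * scale

# A tiny fake 2x2 depth frame: 500 units = 0.5 m, 0 = no reading.
raw = np.array([[500, 1000], [0, 65535]], dtype=np.uint16)
print(depth_to_metres(raw))
```

So Norsilca's one-byte guess undershoots: 16 bits gives 65,536 depth steps, which is why millimetre precision over room-scale distances is feasible.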
@stef9019 2 years ago
Always great to learn from Mike Pound!
@TheGreatAtario 2 years ago
He's a lot better than Mike Pence
@kieronparr3403 2 years ago
Entering poundland
@araghon007 2 years ago
A sidenote to Kinect: The Kinect v2 uses time of flight, which some people like, some people hate. What I find most fascinating is that the Kinect lives on, both as Kinect for Azure, and the depth sensing tech the Hololens has. While not successful as a motion control method, it's still really useful when used with a PC.
@TiagoTiagoT 2 years ago
Why do people hate it?
@daltonbrady2492 2 years ago
Mike Pound always has the stuff to really get you going! More Mike Pound!
@Sth0r 2 years ago
I would love to see this and the Intel RealSense LiDAR L515 side by side.
@jenesuispasbavard 2 years ago
I still use my Kinect - mostly to just log into Windows with my face, but also as a night camera to keep an eye on our new foster dog when he's home alone. It's amazing that a piece of hardware almost a decade old is still so good at what it does!
@ianbdb7686 2 years ago
This channel is insane. Never stop uploading
@MattGriffin1 2 years ago
Another great video from Mike, love computerphile!
@rachel_rexxx 2 years ago
Thank you this was exactly the breakdown I was hunting for last week!
@marioh9926 2 years ago
Exceptional once again, Mike, congratulations!
@jerrykomas1248 2 years ago
This is really insightful. We are using stereo mapping, similar to the techniques used by Landsat and WorldView satellites, for my Master's thesis! This technology is super cool; glad you are showing folks how it works because there are so many applications beyond the Kinect!
@Snair1591 2 years ago
This device, the Intel RealSense D435, and its peers are so underappreciated. The hardware is brilliant, and at the same time the wide range of support its packages offer is amazing. They have regular support for ROS, edge computation platforms like the Jetson Nano, and a standalone RealSense SDK. If more people knew about this and used it, Intel would not have dared to think of shutting it down. There are other similar cameras, like the Zed for example, but the wide array of support RealSense offers has no competition.
@thecheshirecat5564 2 years ago
You don’t even need an SDK. If you have a network card, there are devices that run driverless and are compatible with industrial and FOSS software. We are building one of these.
@astropgn 2 years ago
lol I put my finger on my face at the exact instant before the screen said I was looking at my finger
@AaronHilton 2 years ago
For everyone looking for a RealSense alternative, Occipital are still shipping their Structure Sensors and Structure Cores. They work on similar principles.
@Bstrolch 2 years ago
MIKE IS BACK
@stefanguiton 2 years ago
Excellent video!
@jonva13 2 years ago
Oh, thank you! 🙏 This is exactly the video I've been looking for.
@bluegizmo1983 2 years ago
Image Depth is a quantification of the camera's ability to take a picture that makes a deep philosophical statement! 🤣
@utp216 2 years ago
I loved your video and hopefully you’ll get to hang on to the hardware so you can keep working with it.
@soejrd24978 2 years ago
Ohh yes! Mike videos are the best
@omerfarukpaker7551 1 year ago
I am literally enlightened! Thanks ever so much!
@blenderpanzi 2 years ago
I thought the Kinect, when announced, promised not to use any processing power of the console, but in the end, because of cost cuts, actually did? Am I misremembering?
@Jacob-yg7lz 2 years ago
Could you take one of these, then attach it to a mirror setup which separates each lens's view by far more distance, and then use it for longer-distance range finding (like a WW2 stereoscopic rangefinder)?
@Jacob-yg7lz 2 years ago
@Pedro Abreu I just meant having the view of each camera be really far away from each other so that there's more parallax
@haziqsembilanlima 2 years ago
Just a question: is image depth included in regular JPEG? There was a case back in my final year where I was thinking of adding image depth to improve shape recognition (the dataset was regular JPEGs), but the target object tended to blend with surrounding objects, which made the regular bounding box less accurate. Not to mention I needed the target painted as accurately as possible so I could perform a transformation and finally turn the target object into a scale of sorts (the target object has fixed, defined dimensions).
@jeroenkoehorst4056 2 years ago
No, it's a separate image, just like the RGB and IR pictures.
2 years ago
Is there already a video conferencing tool which takes advantage of this? This seems huge for being able to eliminate background and focus on the face.
@Garvm 2 years ago
I think FaceTime could already be doing that, since iPhones have one of these depth sensors in each of the cameras
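[Editor's note] The background-removal idea raised above is a one-liner once you have an aligned depth channel: keep pixels within some distance of the camera and zero out the rest. A minimal sketch with a made-up threshold, NumPy only:

```python
import numpy as np

def remove_background(rgb: np.ndarray, depth_m: np.ndarray, max_dist: float = 1.2) -> np.ndarray:
    """Zero out pixels further than max_dist metres (or with no depth reading),
    keeping only the nearby subject: a crude 'virtual green screen'."""
    mask = (depth_m > 0) & (depth_m < max_dist)  # 0 depth = no reading
    out = rgb.copy()
    out[~mask] = 0
    return out

# Tiny 2x2 example: subject at ~1 m, wall at 3 m, one missing reading.
rgb = np.full((2, 2, 3), 200, dtype=np.uint8)
depth = np.array([[0.8, 3.0], [0.0, 1.0]])
print(remove_background(rgb, depth))
```

Real conferencing tools add edge feathering and temporal smoothing on top, since raw depth masks flicker at object boundaries.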
@delusionnnnn 2 years ago
I'm reminded of my sadly unsupported Lytro Illum camera, a "lightfield" device. Being able to share "live" images was fun, and it's a shame they didn't release that back-end code as open source so something like flickr or instagram could support it. You can still make movies of it, but the fun of the live images was that the viewer could control the focus view of your photograph.
@adekunleafolabi1040 2 years ago
A beautiful beautiful beautiful video
@ZandarKoad 2 years ago
12:13 "THAT'S A QUANTUM BIT!!! SO IT'S NOT JUST ZERO OR ONE..."
@nonyafletcher601 2 years ago
We need more cameos of Sean!
@anujpartihar 2 years ago
Hit the like button so that Mike can get to keep the camera.
@Hacktheplanet_ 2 years ago
I'd like to hear a video with Mike Pound talking about the Oculus Quest 2; I bet that uses a similar method. What a brilliant machine!
@hexenkingTV 2 years ago
But image depth could also lead to poor performance if it captures more noise, leading to a general data shift. I guess the preprocessing step should be done carefully.
@JadeNeoma 2 years ago
Interestingly, the Ultraleap Leap Motion camera uses three cameras to try and resolve depth and position, all of which are near-IR.
@katymapsa 2 years ago
More Mike videos, please!!
@elmin2323 2 years ago
Mike needs to have his own channel doing a vlog
@UTVNEPAL 2 years ago
Genius idea. A multiple-sensor camera can feed various algorithms, especially heat signatures that can see through doors.
@Athens1992 1 year ago
Very informative!! Will this camera work far better at night in a car than in the morning?
@thisisthefoxe 2 years ago
Question: *How* is the depth stored? RGB uses values between 0-255 to store the intensity, and you can work out the percentage of that colour in that pixel. How about depth? Does it also have one byte? What does it mean? Can you calculate the actual distance from the camera?
@ciarfah 2 years ago
I mostly worked with depthimage, which is essentially a greyscale image where lighter pixels are closer and darker pixels are further away. On the other hand there is pointcloud, which is an array of 3D points. Typically that can be structured or unstructured, e.g. a 1000x1000 array of points, or a vector of 1000000 points. Perhaps this isn't as detailed as you'd have liked but this is as in depth as I've gone
@ciarfah 2 years ago
The handy thing about depthimage is you can compress it like any other image, which is great for saving bandwidth in a distributed system
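[Editor's note] The structured point cloud mentioned above is just the depth image back-projected through the pinhole camera model: each pixel (u, v) with depth Z maps to X = (u - cx)·Z/fx, Y = (v - cy)·Z/fy. A sketch with made-up intrinsics (a real device supplies its own calibration):

```python
import numpy as np

def depth_to_pointcloud(depth_m: np.ndarray, fx: float, fy: float,
                        cx: float, cy: float) -> np.ndarray:
    """Back-project a depth image (metres) to an HxWx3 array of XYZ points
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack((x, y, z))

# Illustrative intrinsics (made up, not a real calibration):
depth = np.full((4, 4), 2.0)  # a flat wall 2 m away
cloud = depth_to_pointcloud(depth, fx=600.0, fy=600.0, cx=2.0, cy=2.0)
print(cloud[0, 0])  # pixel (0,0): x = (0-2)*2/600, y likewise, z = 2
```

This is the "structured" layout ciarfah describes: an HxW array of points; flattening it with `cloud.reshape(-1, 3)` gives the unstructured vector form.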
@threeMetreJim 2 years ago
What does it do if you hold a stereogram (SIRDS) picture in front of it?
@DavidLindes 2 years ago
Now if we can get IRGBUD (adding (near-)Infrared and Ultraviolet), that'd be cool. (Even cooler would be FIRGBUD, but far-IR tends to require sufficiently different optics that I definitely won't be holding my breath for that one.)
@TiagoTiagoT 2 years ago
Is the depth calculated on the hardware itself or on software running on the computer?
@CineGeeks001 2 years ago
I was searching for this yesterday, and now you put up a video 😀
@suryavaraprasadalla8511 1 year ago
Great explanation
@CrystalblueMage 2 years ago
Hmm, so the camera can be used to detect colour imperfections on supposedly single-coloured flat surfaces. Could that be used to detect fungus as it begins to grow?
@maxmusterman3371 2 years ago
It's been so long 😭 finally
@johanhendriks 2 years ago
What's the link to the video where the stuff on the whiteboard was written and discussed?
@sermadreda399 1 year ago
Great video, thank you for sharing
@arash_mehrabi 1 year ago
Nice explanation. Thanks!
@unvergebeneid 2 years ago
I mean, the Kinect 2 did time-of-flight, not structured light like the first one. And it was still pretty cheap, being a mass-market device.
@billconiston8091 2 years ago
Where do they get the dot-matrix printer paper from...?
@tsunghan_yu 2 years ago
7:16 But why does Face ID work under sunlight? Is the laser just stronger in Face ID?
@lopzag 2 years ago
Would be cool to see Mike talk about 'event cameras' (aka artificial retinas). They're really on the rise in machine vision.
@ciarfah 2 years ago
Agreed. Hoping to work with those soon
@sikachukuning2473 2 years ago
I believe this is also how Face ID works. It uses the dot projector and IR camera to get a 3D image of the face and do the authentication.
@arkemal 1 year ago
indeed, TrueDepth
@functionxstudios1674 2 years ago
Made it early. Computerphile is the Best
@GameNOWRoom 2 years ago
3:12 The camera knows where it is because it knows where it isn't
@rustycherkas8229 2 years ago
So it calculates where it should be... :-)
@Hacktheplanet_ 2 years ago
Mike Pound the legend 🙌
@NoahSpurrier 2 years ago
Do open tools support this? OpenCV, UVC, V4L2?
@Lodinn 2 years ago
Ah, just got a couple of 435s for the lab this year. The funniest bit so far is how it sometimes does a perspective distortion of featureless walls much more realistically than Photoshop does :D
@srry198 2 years ago
Wouldn’t LiDAR be more accurate/achieve the same thing concerning depth perception for machines?
@Phroggster 2 years ago
Yes, LiDAR would be way better, but it's going to cost you ten or twenty times more than this device. This is geared more for prosumer tinkering, while LiDAR is more for autonomous driving, or other situations where human lives hang in the balance.
@ZT1ST 2 years ago
I imagine it would also be more useful in time-based solutions, because LiDAR requires counting the time for the signal to return to do its calculations, while the infrared emitter can get the depth information a little faster: you're only waiting for the image to come back once, and you get more information on the sensor at once, based on the pattern in the image. You could probably get even more accurate depth perception if you combined LiDAR with this.
@niccy266 2 years ago
@@ZT1ST Also, unless the LiDAR laser is changing direction for each pixel, which would have to happen extremely quickly, you would have to use a number of LiDARs that probably can't move, and you'd get a much lower-resolution depth channel. Maybe it could supplement the stereo information or help calibrate the camera, but overall not super useful.
@TiagoTiagoT 2 years ago
What happened to that time-of-flight RGBD webcam Microsoft bought just a little before they released the Kinect? Did they just buy it out to try to stifle competition and left the technology to rot?
@troeteimarsch 2 years ago
Mike's the best
@bryan69087 2 years ago
MORE MIKE POUND!
@asnothe 2 years ago
I have that laptop. Thank you for validating my purchase. ;-)
@GameOfThePlanets 2 years ago
Would adding a UV emitter help?
@quanta8382 2 years ago
I wish I had a teacher like him!
@ByteMe1980 2 years ago
@computerphile Just wondering, rather than having the camera figure out depth, why not feed left and right RGB into the network instead?
@christophermcclellan8730 2 years ago
The Realsense camera has a left and right infrared camera, but only a single RGB camera.
@ByteMe1980 2 years ago
@@christophermcclellan8730 I understand that, my question was why not have a camera with just left and right rgb and let the neural net figure out the depth
@christophermcclellan8730 2 years ago
@@ByteMe1980 you could try, but you would still need a labeled dataset for training, which would require a similar setup. There are actually some (non-neural net) algorithms for determining depth from stereoscopic RGB images. They require very precise calibration, which makes it impractical outside of the lab. My team looked into it and determined it was cheaper to just put the more expensive devices into our production run. The point is this technology was too expensive for consumer tech until recently. Now that the price has come down, it’s more accessible for applications, such as liveness detection for biometrics.
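[Editor's note] The classical (non-neural) stereo relation behind the thread above is a one-liner: depth Z = f·B/d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity (how far a feature shifts between the left and right images). A sketch with made-up rig parameters:

```python
def disparity_to_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Classic stereo triangulation: Z = f * B / d.
    f in pixels, baseline B in metres, disparity d in pixels."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity: point at infinity / no match
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 640-pixel focal length, 50 mm baseline.
print(disparity_to_depth(32.0, 640.0, 0.05))  # -> 1.0 metre
```

The division also shows why calibration matters so much: at large distances d is tiny, so a sub-pixel error in matching swings the depth estimate wildly.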
@carmatic 2 years ago
When will they make camera modules which can simultaneously capture RGB and IR through the same lens? That way, we'd have no parallax error between the depth and colour data.
@erikbrendel3217 2 years ago
Pretty sure that this is possible. The only problem is that you need two IR cameras to do the stereo matching.
@marc_frank 2 years ago
Lenses for RGB cams usually have an IR filter built in
@acegh0st 2 years ago
I like the 'Gingham/Oxford shirt with blue sweater' energy Mike projects in almost every video.
@rustyfox81 2 years ago
Can two cameras close together with different focal lengths detect depth?
@teriyakipuppy 2 years ago
You get a stereoscopic image, but it doesn't make a depth map.
@TheTobias7733 2 years ago
Mr. Pound, I love you
@0thorderlogic 2 years ago
Does anyone know the name of the guy featured?
@Amonimus 2 years ago
What if you use two of those?
@1endell 1 year ago
You got a like just when you predicted I'd looked at my finger. Amazing video
@6kwecky6 2 years ago
Huh, I thought this was more solved than it is. Even with dedicated hardware, you can only get sub-30fps directly from the camera. I suppose "directly from the camera" and "cheaply" are the key words
@_yonas 2 years ago
You can get 30 FPS of depth-aligned RGBD images from the RealSense camera at a resolution of 1280x720. Higher than that and it drops to 15, afaik.
@ciarfah 2 years ago
You can also get 60 Hz at lower res and 6 Hz at higher res IIRC
@levmatta 2 years ago
How do you get depth for a single RGB image with AI?
@antonisvenianakis1047 2 years ago
Check out MegaDepth
@PrashantBatule 2 years ago
9:20 convolved using a convolution 👍
@LaRenard 2 years ago
My professor literally delivered a lecture today regarding image depth, and I see it on Computerphile XD
@sanveersookdawe 2 years ago
Please make the next one on the time-of-flight camera
@castortoutnu 1 year ago
At work the computer-vision team uses an Intel D435 to segment parcels on a conveyor belt. And they DON'T USE THE DEPTH for that, only the RGB. They use the depth for other things, but not for that. Also, I'm pretty sure they DON'T POST-PROCESS the depth image.
@Noobinski 2 years ago
Why not use four sensors instead of two, and fill in all the gaps for the two sensors in between the outer ones? That would imho absolutely complete the stereoscopy and further increase the quality of all the interpolated/corresponding pixels of the scene... or am I not getting something here? Why does it have to be two? Anthropomorphising a bit too much here?
@AcornElectron 2 years ago
Heh, Mike is always fun.
@StuartSouter 2 years ago
I'm a simple man. I see Mike Pound, I click.
@JohnDlugosz 2 years ago
I was hoping to learn how the time-of-flight depth sensors work.
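[Editor's note] The core idea of a direct time-of-flight sensor, since the video doesn't cover it: emit a light pulse, time how long it takes to bounce back, and halve the round trip. A minimal sketch of just that arithmetic (real ToF cameras like the Kinect v2 measure phase shifts of modulated light per pixel rather than raw nanosecond timestamps):

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_depth(round_trip_seconds: float) -> float:
    """Direct time-of-flight: light travels out and back, so depth = c * t / 2."""
    return C * round_trip_seconds / 2.0

# A pulse returning after ~6.67 nanoseconds has covered ~2 m round trip,
# i.e. an object about 1 m away:
print(tof_depth(6.671e-9))
```

The numbers show why this is hard: resolving depth to a centimetre means resolving time to tens of picoseconds, which is why phase-based modulation is used instead of stopwatch-style timing.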
@silakanveli 2 years ago
Mike is too smart!
@JB-oj7bq 2 years ago
What if you did stereo RGBD, using two devices?
@ssshukla26 2 years ago
2:42 Yeah 🤦‍♂️ now that's why this video deserves a like.
@valisjan95 2 years ago
2:41 Of course I had just looked at my finger. Sean et al. clearly know their audience.
@Jacob-yg7lz 2 years ago
Do any space rovers have anything like this?
@jms019 2 years ago
So much better when I only saw image death
@bsvenss2 2 years ago
Looks like the Intel RealSense Depth Camera D435. Only 337 GBP (in Denmark). Let's send a couple to Mike. ;-)
@thuokagiri5550 2 years ago
Dr Pound is the Richard Feynman of computer science
@SimonCoates 2 years ago
Coincidentally, Richard Feynman had so many affairs he was known as Dr Pound 😂
@cogwheel42 2 years ago
I looked at my finger.