Imagine watching a film and being able to change the angles and watch things again and again. This is amazing.... wow.
@ric8ard 14 years ago
That is some really excellent work - I was wondering how difficult it would be to combine the outputs of 2 Kinects, and now you have answered the question. Outstanding.
@TheOceanLoader 14 years ago
This is such an inspiring video. My mind reels at the possibilities of how work meetings will be conducted in the future. Very very very impressed. This is literally the start of the era of the hologram - a time when there will be some amazing developments like this. Yes it's grainy and low-fi but that only serves to illustrate how this kind of thing can begin as a demo and let us see how it is in a few years time.
@infiniterealities4D 14 years ago
You are really breaking barriers! This is an incredible hack on already superb technology. Can't wait to see what else you can do.
@GregoryLindsey1979 14 years ago
Simply amazing. What you have accomplished here utterly blows my mind!
@okreylos 11 years ago
My Kinect package is available from my web site, but it doesn't do body tracking like OpenNI or the Kinect SDK. It creates a live 3D representation of the surface of any objects in the camera's field of view, including bodies, but doesn't reconstruct a 3D skeleton.
@okreylos 14 years ago
@techpops The 640x480 resolution on depth and color camera is a problem. There are ways to increase the color resolution by using an external higher-definition camera, but we're stuck with the relatively low-res depth image until someone makes a better device. But to put it into perspective: The reason I'm excited about this is because the quality is already significantly better than what I had before.
@Lowlypeon 14 years ago
Very impressive. I'm looking forward to seeing your future trials with this.
@okreylos 14 years ago
@convoiter05 Correct, but the biggest problem is occlusion, not field-of-view. The Kinect cannot see through solid objects, so using more than one is a way to fill in the resulting "shadows." A fish eye lens doesn't help much with that.
@ViewHarvest 14 years ago
This is pretty breakthrough. We could replicate any object to the perfect 1:1 ratio. Thank you for this and keep up the good work.
@okreylos 14 years ago
@Hulan6 That's pretty much it. I use projective texture mapping to get the color image onto the 3D-reconstructed depth image to simplify things, but otherwise you're spot-on. You can download the software if you want to and see for yourself.
@Karpour 14 years ago
Amazing! The Kinect truly gave us a whole lot of new ways for 3D reconstruction. Can't wait to get my hands on one and start coding. Keep up your amazing work!
@nekroneko 14 years ago
I can see this being used in the future for various reasons. You could scan a 3D object into a computer and not have to do any polygon matrix designing. You could virtually appear in an environment away from home as well, or as someone said, have virtual actors in video games. The application for such a thing is mind blowing.
@epyongt3 14 years ago
Impressive use of the technology. I can't wait to see what you can do with it as you play with it more.
@okreylos 14 years ago
@Hulan6 The Kinect generates up to 640x480 depth pixels per frame, and my software simply connects adjacent depth pixels via triangles or quads, if they are classified as being part of the same object. So each frame can contain up to 639x479x2 triangles, if there are no invalid pixels (there are always at least a few around the silhouettes). In these videos, my face is probably represented by a few hundred triangles, just to throw out a number.
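The per-frame meshing okreylos describes (connect adjacent valid depth pixels into quads, two triangles each, skipping invalid pixels and depth discontinuities) can be sketched in Python. This is a minimal, unoptimized illustration; the function name, the zero-as-invalid convention, and the depth-gap threshold are assumptions, not his actual code:

```python
import numpy as np

def mesh_depth_image(depth, max_gap=50, invalid=0):
    """Connect adjacent valid depth pixels into triangles (two per quad).

    depth   : (H, W) array of depth values; `invalid` marks missing pixels.
    max_gap : if the depths within a quad spread by more than this, the
              pixels are treated as different objects and not connected.
    Returns a list of triangles, each a tuple of three (row, col) indices.
    """
    h, w = depth.shape
    tris = []
    for r in range(h - 1):
        for c in range(w - 1):
            quad = [(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)]
            d = [depth[p] for p in quad]
            if any(v == invalid for v in d):
                continue  # hole in the depth image, e.g. along silhouettes
            if max(d) - min(d) > max_gap:
                continue  # depth discontinuity between foreground/background
            tris.append((quad[0], quad[1], quad[2]))
            tris.append((quad[1], quad[3], quad[2]))
    return tris
```

On a full 640x480 frame with no invalid pixels this yields exactly the 639x479x2 triangles mentioned above.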
@jcwhite1288 14 years ago
wow man, great job! keep up the good work! it seems like this really could be the beginning of a new way of rendering 3d images for videogames, movies etc.
@NINKNOONAN 14 years ago
@okreylos Thanks Oliver, I managed to get a good calibration captured and create a file. Calibration is a real problem for all of us on the Kinect, so if you manage to come up with an auto-calibration procedure that would be fantastic. You are one step ahead of everyone else and we are all watching your progress with excitement. Looking forward to the multiple Kinect support (6 would be good :-)
@okreylos 14 years ago
@3IackCat That's correct, but part of those $50 is an active-sensing depth camera, which is essential for reliable real-time 3D reconstruction.
@okreylos 14 years ago
@Max404s I'm using a regular mouse in all the videos, but I have a SpaceBall 4000FLX (old school), and a loaner SpaceNavigator sitting on my desk right now. Both (and other models as well) are supported natively by Vrui, there's just a bit of configuration to be done to get them to work exactly the way you want to. Vrui-2.0 already contains configuration file fragments to get them working with my personally preferred settings.
@okreylos 14 years ago
@bitplane I haven't tried that yet. I'm nearly certain the Kinect uses IR lasers, which are inherently polarized, so you'd only need filters on the cameras. Unfortunately, reflection of the scenery would destroy polarization, so I'm not too optimistic.
@okreylos 14 years ago
@onurbabacan I didn't do anything special, since I don't know yet whether there is a way to rapidly turn on / off the projector via software. But I believe the IR pattern is pulsed internally, and that might be why there is not more interference.
@Jogwheel 14 years ago
This is wild... reminds me of that crazy impossible tech used in the "Enemy Of The State" movie... rotating around an object based on extrapolation and stuff. Very cool!
@Spyder638 14 years ago
You do not let your subscribers down, this is amazing.
@danoli3 14 years ago
this is amazing. Nice work. Can't wait to see what you do with this further
@okreylos 12 years ago
Yes, it's a very clever method. I'm focusing more on the software side of things, so I haven't tried it myself. I'm holding out for some Kinect 2.0 that comes in a variety of IR "colors" so that interference is solved for good.
@okreylos 14 years ago
@0m3n1337 It's primarily the internal processing in the Kinect, and the limited bandwidth on a USB bus. 1280x1024 has more than four times the number of pixels as 640x480.
@StaleMeat55 14 years ago
I wanna say this is amazing, but I would be lying. In truth, sir, this video is nothing short of revolutionary! I can only begin to imagine all of the possibilities for media applications this opens up! I do wonder, though, if Microsoft ever experimented with merging two separate streams...
@okreylos 14 years ago
@robvh2 Vrui has built-in head tracking; doing it via Kinect is a matter of extracting a head position (and ideally orientation) from the video streams, and turning that into a module for Vrui's device driver. Pretty much the same way I have integrated the Wiimote as a 6-DOF input device. I should probably get on that, since it's a clear application.
@okreylos 14 years ago
@MegaZoneEXE There is some amount of interference, as you can tell from the video. Adding more Kinects will add more interference, while not improving coverage much more. So it's a diminishing returns thing. But according to anonymous sources, there is a working 4-Kinect setup, and someone is trying to build a 30-Kinect system.
@toastyone 14 years ago
I've seen a few of these videos now. Fascinating stuff. Good work!
@okreylos 14 years ago
@DrewsAnimation Yes. Compression is going to be a big problem, since the raw data rate is about 28MB per second, but just dumping the data to a fast (and big) disk is trivial.
@janmarat 14 years ago
@okreylos Hey, thanks for the reply. I now realize that my question stemmed from complete lack of understanding of 3D technology and programming behind it. Just makes me appreciate this work so much more
@okreylos 14 years ago
@DJDeathflea Well, the device is basically sending raw frames over the USB channel, and I'm doing some conversion to get real depth values and colors, and stuffing frames into the render buffer as they arrive. It's a no-brainer to simply timestamp the frames and write them out to disk at the same time. After conversion, the data rate is around 46MB/s, which a modern hard drive can handle easily.
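The two data rates quoted in this thread (about 28MB/s raw, around 46MB/s after conversion) are consistent with a simple back-of-the-envelope calculation. The per-pixel byte layouts below are assumptions chosen to match the quoted numbers, not documented facts about the driver:

```python
W, H, FPS = 640, 480, 30  # Kinect color/depth resolution and frame rate

# Raw over USB (assumed layout): 16-bit depth words plus 8-bit Bayer color.
raw_bpp = 2 + 1                    # bytes per pixel, depth + color
raw_rate = W * H * raw_bpp * FPS   # ~27.6 MB/s, i.e. "about 28MB per second"

# After conversion (assumed layout): 16-bit depth plus 24-bit RGB.
conv_bpp = 2 + 3
conv_rate = W * H * conv_bpp * FPS  # 46.08 MB/s, i.e. "around 46MB/s"

print(raw_rate, conv_rate)
```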
@okreylos 14 years ago
@somud1 I have a hypothesis regarding how the depth sensing works, which could explain why there is less interference than I had thought. But it's too long to fit into a YouTube comment box; I'll write it up on my web site at some point. I think the reason the Kinect is robust against TV remotes and sunlight is because it looks for a pattern of small IR dots, and sunlight or remotes would only increase the ambient IR level, but dots would still stand out.
@okreylos 14 years ago
@watcheem It's only 640x480 pixels, both for the color and depth images. People are already complaining it's too expensive; imagine the uproar had Microsoft used higher-resolution components. :)
@DeathStrobe 14 years ago
This may very well be how 3D holographic cameras work in the future. These experiments are really cool.
@beseeingyou6 14 years ago
This is some crazy new technology. I hope this catches on.
@BraveMooseMan 14 years ago
Looking really good, can't wait to see what the next vid is like.
@okreylos 14 years ago
@DJDeathflea Right. These videos are screen captures of real-time playback, hence the extemporaneous narration. But instead of dumping screenshots to disk and making a movie of those, I could just as well dump the color and depth streams from the Kinect to disk. I'm currently working on a compression scheme to reduce the required bandwidth, but a modern hard drive can handle it even uncompressed.
@SuperFinGuy 14 years ago
Dude you rock, wish more people were this straightforward. I think the Kinects would have better coverage if they were placed at 137 degrees (the golden angle) to each other in both z-y and z-x; that way each Kinect has maximum coverage of the volume the other one is not covering, with less interference. Like the least amount of overlap.
@okreylos 12 years ago
There is no hard limit on the number, but you need one USB bus per Kinect, and the more Kinects you add, the more interference between them you'll get. It's a matter of diminishing returns. The "shake & sense" method someone posted in the comments below is one way of reducing interference; ideally, Microsoft would sell the Kinect in a variety of colors. Normal users wouldn't care, but power users could buy a set of different colors, so they won't interfere with each other at all.
@johndoe2 14 years ago
amazing, so genius. can't wait to see how it will be developed
@okreylos 14 years ago
@xUltimateOfficial It's my own software; the multi-Kinect version is available from my web page. It's still experimental, but it supports more than two Kinects if your PC has enough USB buses.
@DarrenHough 14 years ago
Unreal, I can't wait for your next upload!!!
@okreylos 14 years ago
@GWebMa We need an improved user's manual, but here goes: 1. Press some button/key to bring up tool menu. 2. Select "Transformer" -- "Mouse -> Screen Projector" 3. Press same button/key again to confirm. 4. Press same button/key again to bring up tool menu. 5. Select "Utilities" -- "Measurement Tool" 6. Now you can place 3D measurement points in the screen plane by pressing the chosen button/key, and those will be saved. Dolly to get points of interest into the screen plane.
@fuzzidelic 14 years ago
been intrigued by your series of videos. Great stuff!
@okreylos 14 years ago
@Shakespeare1612 Incidentally, we have one. I think the Kinect could be used as a scanner; you'd simply move the device around the object a few times for full coverage (or move the object instead). However, since 3D printing requires water-tight surfaces, you'd have to run the resulting point cloud through a topological repair, as done by software like Geomagic. But the basic workflow is all there.
@okreylos 14 years ago
@Rosenroterfreak Basically yes, but right now there's some post-processing involved to clean up the resulting meshes. But there's very good existing software for that (Geomagic).
@okreylos 14 years ago
@ValkyrieIce That would work, but I don't know yet whether there's a way to rapidly toggle the IR projector under program control.
@kyaami 14 years ago
this is amazing! great work!
@okreylos 14 years ago
@sheaton319 That's very possible. Pro-level 3D cameras have been around for a while, and using them to capture viewpoint-free video from sporting events is a very marketable application. The nice thing is that now we can do it for $150. Or $300.
@okreylos 14 years ago
@TroutFink That's one part; the other is that, as a camera, the Kinect is rather low-res.
@danp322003 14 years ago
You are a Genius! Keep it up. Can't wait to see you continue to work on this.
@okreylos 14 years ago
@SuperSmashDolls Not as far as I know, but there are lots of commands to the device for which we don't know yet what they do, so it's conceivable.
@okreylos 13 years ago
@DrKaito10 My development branch has a new calibration method (intrinsic and extrinsic), but it's still semi-manual. You have to manually fit homographies to a sequence of images of a semi-clear rectangular grid in the 2D depth and color images. This is because the depth image is so fuzzy that automatic grid detection methods, at least the ones I've tried, don't work well enough. But the process is quick, and the results are very good.
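Fitting a homography to manually marked grid correspondences, the core of the semi-manual step described above, is typically done with the direct linear transform (DLT). The sketch below is a generic textbook illustration, not okreylos's calibration code:

```python
import numpy as np

def fit_homography(src, dst):
    """Fit a 3x3 homography H mapping src -> dst via the DLT.

    src, dst : (N, 2) arrays of corresponding 2D points, N >= 4.
    Each correspondence (x, y) -> (u, v) contributes two linear
    equations in the 9 entries of H; the solution is the null
    vector of the stacked system, found via SVD.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the overall scale
```

With exact correspondences the recovered H reproduces the mapping; with noisy hand-clicked grid points it gives a least-squares fit.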
@okreylos 14 years ago
@tsilb Not having tried it, that shouldn't really make a huge difference. The 3D reconstruction is supposed to be independent of object color because it's based on active sensing using structured light from an IR projector.
@okreylos 14 years ago
@SciPoly There is a Python wrapper for the underlying Vrui 3D toolkit, which was developed by an independent contractor. I'm not sure right now if it still works with Vrui 2.0, since I don't use Python myself. There is no wrapper for the Kinect package itself, but that should be comparatively simple. Vrui is a humongous piece of software, and the Kinect package is tiny.
@okreylos 13 years ago
@LegendaryAdrenaline You would need a larger number of cameras to avoid occlusions, i.e., object shadowing; but otherwise, that's exactly the idea.
@LiebsterFeind 14 years ago
I would think that with any number of fixed cameras there will always be some "holes" creating problems (asking not stating)? Would it be better to have perhaps 2 fixed ones and one on a mobile platform that made fast small movements in a semi-circular pattern from side to side repeatedly, emulating the Saccadic movement of the human eye and thus being the provider of a "hole coverage" video feed that could be used to fill in any gaps (detectable by their deviant chroma profile)?
@timmytumbler 14 years ago
Fantastic (I do hope Microsoft sent you the second Kinect, with some money in the box). @okreylos, glad you had a better result than you expected; it is really very good as a first attempt, and I'm sure specific re-calibration will help. Also, I can't help but think that an individual separate image camera (as mentioned) is bound to help, rather than the slightly strange virtual camera position (future news casters will require an empty box to look at - just don't tell them there is nothing inside).
@TrashTawk 14 years ago
Sir, you are a true prodigy. A visionary, in fact.
@1xtra299 14 years ago
this is great stuff, congrats! hope to see what else you are able to do
@okreylos 14 years ago
@AdeonWriter I don't think the angle would make much of a difference. I didn't choose 90 degrees for a reason; it was just the way it worked out because my home office is quite small.
@okreylos 13 years ago
@DerUnbekannte Yes, it's possible. There isn't anything special about the built-in color camera; the software already has to align the two images (color+depth) explicitly, and aligning an external (higher-resolution) color camera wouldn't be any different. Soon.
@EdwinR890 14 years ago
Stop putting up negative comments. I thought this was very impressive to observe. Good work okreylos!
@spencerchamp 14 years ago
Amazing! After weeks of work we have (almost) done it!
@gelisob 14 years ago
Could I suggest, if not already suggested, putting the two Kinects near the ceiling on opposite sides of the room, pointing 45 degrees down to the center of the room? That should capture the person/objects from both sides to give a very nice overview.
@okreylos 14 years ago
@PacoJalapeno7 Hey, thanks for the suggestion! I hadn't thought of that.
@gravityisweak 14 years ago
The potential for your application impresses me! This is excellent work, I hope you continue with the kinect. I have no interest in it for any gaming reasons, but I may end up buying one just to mess around with the ingenious hacks people have made for it.
@okreylos 14 years ago
@Octamed Someone should really try that. There are adaptive filters that could potentially do a good job distinguishing between static and moving objects.
@okreylos 14 years ago
@bitplane There's a lot of secret sauce in the device still, so it might be possible that there are ways to do what you suggest, but I'm not aware of any at this point. I believe using external shutters is a viable idea, but they would have to be carefully tuned to be in synch with the internal cameras or you'd get bad results.
@TheHouseBlog 14 years ago
@okreylos Thank you for your feedback, I certainly look forward to your future works.
@okreylos 13 years ago
@mathewcohen1 It's theoretically possible, but requires a level of fine control over the devices that I don't think is supported by the USB protocol as I know it. Meaning, this would require some help from the device manufacturer, but I don't think Microsoft anticipated that people would be using more than one Kinect.
@kibnib 14 years ago
Stereoscopic imagery with kinect = AWESOME. This would be a great inexpensive 3D presentation tool for manufacturing companies.
@MWhybird 14 years ago
@okreylos, you rock my world. It will be interesting to see how the interference falls off as the angle between the kinects is increased. Also, I'd still love to see if anything happens if you put a polarising lens from some sunglasses in front of the IR camera (I'm still not convinced that the reflected IR light will still be polarised at all). If it works as one might hope, you can turn one kinect on its side. :)
@emaanet 14 years ago
1/2 of this post: One: We know that, in the shadow of Okreylos's hand, the dirty effect problem disappears. Two: GWebMa told us: Even if you have 2 Kinects close to each other (maybe one just above the other) facing the same target, if you cover one of the IR emitters, this cam's depth image gets black. That means there is something that enables each cam to see only the pattern sent by its own device.
@okreylos 13 years ago
@demonicowl Most desktop PCs have two USB busses to the outside; typically one to the front panel, and one to the back panel. That's what I'm using. Alternatively, you can buy additional PCI/PCIe USB cards, OR you can buy a USB 3 card and hope that the drivers work. A single USB 3 bus can theoretically support around 15 Kinects.
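The "around 15 Kinects per USB 3 bus" figure follows from the bandwidth numbers discussed in this thread. The practical-throughput fractions below are assumptions for illustration, not measured values:

```python
KINECT_RATE = 28e6           # bytes/s raw stream per Kinect (quoted above)
USB2_RATE = 35e6             # ~35 MB/s practical USB 2.0 payload (assumption)
USB3_RATE = 5e9 / 8 * 0.7    # 5 Gb/s signaling, ~70% usable payload (assumption)

# One Kinect mostly saturates a USB 2 bus; a USB 3 bus could
# theoretically carry around 15 of them.
print(int(USB2_RATE // KINECT_RATE))
print(int(USB3_RATE // KINECT_RATE))
```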
@OnurBabacan 14 years ago
@okreylos Thanks for the reply. If the pattern clock is accessible in some future driver, I believe this might help reduce the artifacts significantly. Keep up the good work!
@okreylos 14 years ago
@emaanet The software doesn't merge the depth streams in any way; it simply projects them into the same 3D space. This means, during the times in the video when I turn off one of the depth streams (but not the IR projector; that stays on the entire time), what you see is exactly the raw data sent by the other camera. If there are holes that would not be there with one camera only -- and there are -- they must be due to interference.
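"Projecting the streams into the same 3D space" amounts to back-projecting each depth pixel through its camera's intrinsics and then applying that Kinect's extrinsic pose from calibration. A minimal sketch, where the function name, parameterization, and zero-as-invalid convention are assumptions:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy, pose):
    """Back-project a depth image into world-space 3D points.

    depth : (H, W) array of metric depths (0 = invalid pixel).
    fx, fy, cx, cy : pinhole intrinsics of the depth camera.
    pose  : 4x4 camera-to-world transform placing this Kinect in the
            shared space (from extrinsic calibration).
    Returns an (N, 3) array of world-space points.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]          # pixel row/column grids
    z = depth.astype(float)
    valid = z > 0
    x = (u - cx) / fx * z              # pinhole back-projection
    y = (v - cy) / fy * z
    pts = np.stack([x[valid], y[valid], z[valid], np.ones(valid.sum())])
    return (pose @ pts)[:3].T
```

Running this on each Kinect's stream with its own pose yields point sets that overlap in one world frame; no explicit merging is needed, exactly as described above.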
@okreylos 14 years ago
@moonlitesymphony We've had this discussion in the other video; the consensus was that yes, it's a viable approach, but it involves major hardware hackery to get it right. It's something I myself cannot do.
@Holammer 14 years ago
Pretty cool indeed. I'd love to see more when it is better calibrated.
@Dvich 14 years ago
Amazing! You're a very skilled programmer and mathematician!
@okreylos 14 years ago
@sacredgeometry No, all 3D information is from the IR depth cameras.
@okreylos 14 years ago
@FireLineStudios This already captures a true 3D video stream, so displaying it stereoscopically is a no-brainer. The released software can already do it, on a variety of stereo display technologies.
@okreylos 14 years ago
@emilioolivares Nothing fancy. I improved the available Kinect software package to handle two or more Kinects, and just plugged both of them into the same PC -- using two separate USB buses, because each Kinect mostly saturates one USB bus.
@okreylos 14 years ago
@digitaldud One Kinect mostly saturates a single USB2 bus. Fortunately most new PCs have several independent buses, or it would be a real problem.
@HenkvanderVelden 14 years ago
I'm no programmer, but it's just really fun to watch your progress and see where you're going to take this! :D have fun!
@okreylos 14 years ago
@GWebMa Great; I added a note to the Kinect download page. Thanks for bringing it up.
@YoungDaddyTC27 14 years ago
Everyone is talking about adding 2 or 3 more Kinects. If you follow this progression, you will realise that eventually a person will reach a 'sweet spot' where they will have added enough cameras to completely eliminate all the shadows. That will result in a perfect 3D image. The question then will be... what do we do with this image? Instead of it being shown internally on a flat screen, can this now-complete 3D image be projected? I would think that would be the logical next step.
@pebre79 14 years ago
Excellent proof of concept. Great work!
@nytecam 14 years ago
amazing work [and best on YT] so keep these experiments and videos coming.
@okreylos 14 years ago
@karandex I don't think I understand what you mean exactly. Please elaborate.
@SuperSmashDolls 14 years ago
@okreylos I could imagine they were more concerned with getting a decent FPS over USB than getting great images. With a USB3 or FireWire connection, you probably could do 720p Kinecting at a good, lag-free framerate.
@BurningdiverUK 13 years ago
This is seriously impressive. I wonder how many Kinects you would need to link up to remove the occlusion.
@sacredgeometry 14 years ago
@okreylos I think he meant try it where there isn't stuff to reflect IR all around it. Maybe with the object and checkerboard in an open space, like outside.
@okreylos 14 years ago
@pillslanger Using two regular cameras, either to generate single-viewpoint stereoscopic video, or using depth-from-stereo to generate a single 3D image, is not exactly the same thing as what's shown here. A bit hard to explain.
@robvh2 14 years ago
How's the project coming along, Oliver? We're getting hungry for a new video or update! If there's not much new to actually see, I think your subscribers would still enjoy seeing a short video of you simply talking about what you're currently working on and perhaps the vision for the future. All the best, my friend.
@okreylos 13 years ago
@kasm279 Firewire has about the same bandwidth as USB2, but more efficient allocation for multiple streamers. But a "professional" version of the Kinect could use 1G or 10G Ethernet, Myrinet, or other high-bandwidth interconnects. Not at $150 a pop, though.
@SuckItLily 14 years ago
@bobca123 we were both wrong, 3 aren't enough to cover top AND bottom, however using 6 is just as silly. you could get all around footage with 4 if you placed the first 3 around the object in a way that they see its top, and use the last one to capture what's under it. but what would you even wanna do that for anyway? do you want to capture a floating object? cause things are generally affected by gravity, their bottom sides are normally impossible to capture.