thank you for the explanation, this is really helping me with my final year project, sir. I hope you get rich and can buy a Lamborghini
@sebhch244 6 years ago
Man, awesome tutorial, I was watching the Cyberpunk documentary and suddenly started thinking about the Kinect, The Matrix, Altered Carbon, 3D, and here I am. Thanks a lot.
@Ovni121 7 years ago
Mapping the min/max threshold with the mouse was a brilliant idea! Nice video. Keep 'em up!
@ahtishamali431 5 years ago
lmao 😂😂
@culpritdesign 7 years ago
I watch your videos and when I finish them I think I suddenly know how to do the work you just presented, and then reality sets in and I take another sip of beer and go to sleep.
@explodewithlove 8 years ago
I'm so grateful we have enthusiastic teachers like yourself helping people like me get excited about programming!! :D May I ask how you are pulling off your magic with having the computer screen show behind you?
@TheCodingTrain 8 years ago
I'm using a green screen and Wirecast software.
@tametonic3985 2 years ago
I do have a suggestion for determining the threshold. You can combine your code with a library like DeepVision that will detect where your hand is on screen. Then you can use a mathematical formula (which I'll leave as a reply) that gets the distance between the hand and the camera. Using that distance in millimeters, plus the pixel location where the DeepVision library detects your hand on screen, you can make a threshold that is not a constant but rather changes based on where your hand is. That way you won't have to worry about standing a specific distance from the camera; the camera will just know where your hand is and base the thresholds on that
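The adaptive-threshold idea in the comment above can be sketched in plain Java (the language under Processing). Everything here is an assumption for illustration: the hand depth would come from whatever detector you use (the comment suggests a library like DeepVision), and the class name and the tolerance band are made up.

```java
// Hypothetical sketch: a depth window that follows a detected hand instead of
// a fixed constant. handDepthMm would be supplied by a hand detector.
class AdaptiveThreshold {
    // Build a {min, max} depth window (in mm) centered on the detected hand,
    // with a tolerance band so the user can stand at any distance.
    public static int[] windowAround(int handDepthMm, int bandMm) {
        return new int[] { handDepthMm - bandMm, handDepthMm + bandMm };
    }

    // True if a raw depth sample falls inside the window.
    public static boolean inWindow(int depthMm, int[] window) {
        return depthMm >= window[0] && depthMm <= window[1];
    }
}
```

In the video's pixel loop you would then call inWindow(depth[offset], window) instead of comparing against fixed minThresh/maxThresh constants.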
this is awesome, thank you! The possibilities with this are limitless, can't wait to play around with the Kinect myself.
@mindaugasdudenas624 7 years ago
You are the person who inspired thousands not to be afraid to start coding. Thanks for that. I watched all the tutorials, and when there are pixel-related loops my Processing render lags, and I do not know the main reason, because it is a MacBook Pro 2015 which should do the job quite fast. Maybe you can tell what you use, and whether there could be other reasons for that happening. Sorry for the question that is not 100 percent related to Processing. I would like to hear more about algorithms that optimize the computation so everything runs faster. Thanks again, you are the best, keep rockin'.
@Monetai 6 years ago
I used this technique combined with blob detection for an artistic installation where the Kinect was filming people touching a wall, from the top. And I used the same "calibration technique" =) Happy to see I wasn't alone "hacking" the Kinect this way!
@javibaeza2644 2 years ago
I think I've never enjoyed a coding video more than I did this one! You are so inspiring!!
@besiix 4 years ago
Your enthusiasm is infectious. Thank you so much
@fiattypanich1306 8 years ago
Thanks a lot, you are one of the greatest teachers I've ever met!!
@TheCodingTrain 8 years ago
+Fiatty Panich Thanks for watching!
@kongkongterton9805 8 years ago
Thanks a lot for your very precise and fun tutorials! You made my efforts to tame the Kinect as a graphical tool so much easier. This deserves so much attention. Thank you for taking the time.
@TheCodingTrain 8 years ago
+Kong Kongterton you're welcome, thanks for the nice feedback!
@uushk6631 2 years ago
So dope... btw you can see the screen through your body lol, the green bar on your shirt.
@THEcucufate 4 years ago
You are very enjoyable to watch. I think I can get the idea in my head working thanks to your videos, so thanks =]
@dennisholscher3182 4 years ago
How can I move the rotation point/axis to the center of the scene? Right now the scene rotates out of the screen. Tried to find an answer - did not find anything. Please help. Thanks for the great tutorial!
@borjonx 6 years ago
This is one of the most useful videos I've found - thanks for sharing!!!
@katiemitchell7387 4 years ago
I'm having issues with this. Could you copy and paste the code to me, as I feel I'm going wrong somewhere!
@studiobits2atoms706 4 years ago
I am very in love with your tutorials. Can always come back to them :)
@heydindd 3 years ago
I kept getting an error saying "depthWidth cannot be resolved or is not a field", is there any way to go around this? Thank you!
@jakewelch.design 3 years ago
Using this for a design project, thank you !!
@ahmedshingaly763 4 years ago
I love how you explain your ideas, keep making these awesome videos
@stickyb1t877 8 years ago
this video should be used in every computer vision class to teach students how to reverse camera projection using depth information and focal distance (instead of learning it the hard way without any experimentation)
@TheCodingTrain 8 years ago
+davide sito thanks, I'm glad to hear it's useful!
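The "reverse camera projection" the comment above refers to is the standard pinhole unprojection: a depth pixel (i, j) with raw depth d maps back to a 3D point via the principal point (cx, cy) and focal lengths (fx, fy). A minimal sketch in plain Java; the intrinsic values below are illustrative stand-ins, not necessarily the library's real CameraParams:

```java
// Unproject a depth pixel back to camera-space coordinates (pinhole model).
class Unproject {
    // Illustrative intrinsics (assumed values, roughly Kinect v2 scale).
    static final double cx = 254.878, cy = 205.395;
    static final double fx = 365.456, fy = 365.456;

    // Returns {x, y, z}; depth d and the result share the same unit (mm).
    public static double[] depthToPoint(int i, int j, double d) {
        double x = (i - cx) * d / fx;
        double y = (j - cy) * d / fy;
        return new double[] { x, y, d };
    }
}
```

A pixel near the principal point unprojects to a point almost straight ahead of the camera; pixels toward the image edges fan out proportionally to their depth, which is why the point cloud spreads as objects move away.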
@jeffreycordova9082 8 years ago
I gave a thumbs up as soon as you hugged yourself... hahaha. Thanks for the great video!
@TheCodingTrain 8 years ago
+Jeffrey Cordova hah, thank you!
@worthy2dy4 5 years ago
Awesome! I'm trying to map RPLIDAR data and a stepper motor step/angle to create real-time room mapping
@zeetangled 8 years ago
Hi again! So in V1 it's kinect.width instead of Kinect.depthWidth. It works well now! Thanks
@TheCodingTrain 8 years ago
+Zaina Squid Indeed, that's right; sorry to be slow in the reply. I need to add an annotation for that!
@Gabirell 8 years ago
+Zaina Squid Thanks for the tip! Do you have another one about "initDevice"?
@TheCodingTrain 8 years ago
+Gabriel Netto I believe initDevice is not needed for the v1. My apologies for this; I need to make v1 versions of all the examples, will get to that soon. Keep reminding me!
@Gabirell 8 years ago
+Daniel Shiffman Thank you! I was wondering about forcing an initDevice because I'm writing a Kinect-to-Syphon sketch (using your code) for 2 devices (v1 1473), and only the first Kinect shows up when rendered on the canvas, but "println" shows them both as recognized. I'm new to Processing... Maybe something wrong with sending "createGraphics" instead of PGraphics?
@Gabirell 8 years ago
+Gabriel Netto Your example "MultipleServers" works great, but I couldn't send PGraphics to Syphon's canvas. It's my fault! But I would very much appreciate some guidance... ;)
@mrtnmur 5 years ago
Hi there, how can you set the min/max threshold with the data point visualization? I'm using a Kinect v2 on a Mac. Huge thanks!
@johnclark1364 7 years ago
That was the most helpful guide ever
@jordanwright7398 8 years ago
Hi Daniel, Very exciting stuff. I have the point cloud rendering based on your tutorials but am now at a loss regarding: 1. recording the point cloud data, 2. exporting it as a CSV file. I would like to import the CSV into Cinema 4D, and I have seen some Python scripts online that I may be able to use. Any way you may be able to advise me on this would be great. Conversely, if anyone here knows how to make this into some sort of production-usable pipeline, I would be happy to compensate/hire them for something usable. Thank you for your efforts here Daniel - I know they are appreciated by many of us! Respectfully, Jordan
@GUINTHERKOVALSKI 6 years ago
Hi! Did you find any solution? I will need to do the same things
@patrikkucavik148 6 years ago
Since the OP has not answered and I need the same thing, have you found any solution?
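For the CSV-export step this thread asks about, one minimal approach is to serialize each point's x, y, z as comma-separated rows that Cinema 4D (or a Python import script) can read. A sketch in plain Java; the points array is a stand-in for whatever the point-cloud sketch accumulates, and actually writing the string to disk (e.g. with Processing's saveStrings()) is left out:

```java
// Turn an array of {x, y, z} points into CSV text with a header row.
class PointCloudCsv {
    public static String toCsv(double[][] points) {
        StringBuilder sb = new StringBuilder("x,y,z\n");
        for (double[] p : points) {
            sb.append(p[0]).append(',').append(p[1]).append(',').append(p[2]).append('\n');
        }
        return sb.toString();
    }
}
```

Recording over time could be as simple as appending one such block per frame, tagged with a frame index column.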
@benjaminpmartin 7 years ago
Great videos, thanks so much! How would one create multiple depth thresholds in the same sketch?
@fruslanguag5256 7 years ago
More than interesting. If I wanted to represent 3D points with an XYZ camera, would that formula work? Because you don't use the camera's z value
@AlexanderIvanovOfficial 7 years ago
Ahhh, what a missed opportunity: instead of clipping the wall, you should have used the fact that it's already a green screen and "just" keyed it out. Keep up the awesome stuff!
@lucasalamo3448 6 years ago
Love these videos and thank you for sharing your knowledge!! How do I obtain the code for a simple point cloud feed? And how do I plug it into Processing? Just started coding yesterday, literally just for the Kinect.
@user-pd4ew5li4b 1 year ago
Is there a way to record this data and enter it into a program where all the dots move and you can orbit, pan, zoom in, and observe it after capturing? Pls, thanks.
@ChopLabalagun 8 years ago
Is the sample accessible? This looks amazing
@TheCodingTrain 8 years ago
The code is all here: github.com/CodingRainbow/Rainbow-Code
@angledcoathanger 7 years ago
Dude, you are just bloody amazing.
@nanovolt3717 7 years ago
The distance you are using is in terms of what, cm or inches? Nice video by the way!
@narutro2 5 years ago
Hi Shiffman. I am thinking about making an interactive project using the Kinect. Where can I get those Kinects now? Or can I make one with my own camera and computer vision libraries such as OpenCV?
@CDBelfer4 7 years ago
I was hoping you would set the colour of the visible values to the depth value :p Also, I feel like keeping track of only the pixels that change might be useful for tracking movement, since you could just keep track of your hand's position and check against the changing values to see where and how much your hands moved. I need a Kinect :(
@behrzz 4 years ago
Hi Daniel, great tutorials. I was wondering if you can help me with some sources for a particle effect with Kinect + Processing. I would like to capture live video and turn the movement into a particle effect; not the background, just the individual.
@allanhagelstrom2399 8 years ago
Hi, thanks a lot for all your videos! They are great. I am trying to learn Processing for doing projection mapping, and I wanted to ask you if there is an example of this for Kinect v1? Thanks a lot again!
@zeetangled 8 years ago
Hello, when I tried running this it gives me the error "depthWidth cannot be resolved or is not a field". Do you know if it is because I'm using a V1 Kinect?
@nicholasadrian5185 8 years ago
+Zaina Squid I'm using kinect v1 as well. I found that you can just change kinect2.depthWidth to kinect.width
@Jonascarlsson80 4 years ago
I have the Intel RealSense D415 depth camera and I would really like to do this with that camera, but since I am quite new to Processing I don't know where to start. I have added the RealSense library to processing and the examples there work great, but I would like to visualize a real time point cloud like in this video. Any help would be much appreciated!
@vedantkumar6075 4 years ago
How can I save the point cloud coordinates so that I can use them for 3D reconstruction or correspondence matching? Also, can I use the PCL library in the Processing IDE?
@chrislos7944 8 years ago
Dear Dan, Thanks so much again for your perfect epic tutorials. I have one question. It should be possible somehow to compare a static background image with your recent incoming body movements, am I right? (In case I took a background picture without having myself in the photo.) Would it be difficult to realize something like that? I'm trying to track an average x,y,z body position in a space, but my problem is some obstacles straight next to me that also get tracked...
@TheCodingTrain 8 years ago
+Chris Los Yes, you'll want to make a copy of the depth data in a separate array or image and then compare that to the current depth map to see which pixels are different. If you work on Windows with the MS SDK, it will do almost all of this for you also! github.com/ThomasLengeling/KinectPV2 Going to make some video tutorials about this soon.
@chrislos7944 8 years ago
+Daniel Shiffman Thanks so much for your quick response. Makes sense to me. Unfortunately I have to work on OSX with a Kinect 1 because lots of my students are working with this configuration. I'll give it a try today. Hopefully I get the machine to surprise me with a valid xyz position. Thanks again. Best, Christian
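The comparison Dan describes above (store one frame of raw depth as the background, then diff the current frame against it) is just per-pixel array arithmetic. A sketch in plain Java; the tolerance value is an assumption to absorb sensor noise:

```java
// Flag pixels whose current depth differs from a stored background frame.
class DepthDiff {
    // Returns a mask that is true where depth changed by more than toleranceMm.
    public static boolean[] changedPixels(int[] background, int[] current, int toleranceMm) {
        boolean[] mask = new boolean[background.length];
        for (int i = 0; i < background.length; i++) {
            mask[i] = Math.abs(current[i] - background[i]) > toleranceMm;
        }
        return mask;
    }
}
```

In a sketch, the background array would be captured once (e.g. on a key press, with nobody in frame) and the mask used to ignore static obstacles when averaging a body position.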
@shahidulabir261 7 years ago
Hello, I have been following your tutorials for quite a while and they are really cool and awesome. This might be a little bit off topic, but is it possible to save a still point cloud in P3D as file formats like ptx, pts, xyz, txt, etc. that 3D rendering software like MeshLab, 123D, RealityCapture, ContextCapture, etc. can import? Can you help me out here? I am actually doing a project on a room-scanning robot, and I did manage to scan my room and make a 3D point cloud display of it in Processing.
@madmaxkal 6 years ago
Did you figure out a solution? I am looking to do something similar.
@_astrolabius7104 8 years ago
Hey, thanks for the tutorials, I've been learning a lot from them. I'm trying to use fisica + openKinect and I'm having some problems. Do you think you could give me a hand? I would really appreciate it. The code is simple, but I don't know how to add the depth values to an FBlob in fisica, to make them move with the data taken from the Kinect.
@Nanotopia 7 years ago
Love your tutorials! Managed to get the first one to run. Anything past that is a no. Now I keep receiving "The function kinect.initIR(); does not exist." Working on OSX, Kinect v1?? Thanks again!
@anginiotorres8292 7 years ago
Thank you very much, from Guadalajara, Jalisco. If you come some day, don't think twice about calling; you have a home and friends here. Thanks for your videos.
@brendanjames307 8 years ago
Hey Daniel, how would you do the depth threshold when you're using the point cloud?
@ahtiolavi 7 years ago
I've done it this way, using Daniel's point cloud code: First declare the min and max thresholds at the beginning of the code. Then go to the loop where point(); is drawn, and wrap a boolean check around it. If d (the depth variable) > minThresh and < maxThresh, draw the point(0,0);. Else, set the point's color to black. This way only the pixels inside your desired area will be colored white and the ones outside it will be black. Try it out. If you can't get it working, ask me. I'm not a pro though.
@raultelliskivi4155 1 year ago
Hello. I know this video is rather old, but are these examples compatible with Processing 4? I'm very new to programming and I got an error that some dependent libraries are missing.
@raultelliskivi4155 1 year ago
UnsatisfiedLinkError: ..\Documents\Processing\libraries\openkinect_processing\library\v2\msvc\libusb-1.0.dll: Can't find dependent libraries
java.lang.UnsatisfiedLinkError: ..\Documents\Processing\libraries\openkinect_processing\library\v2\msvc\libusb-1.0.dll: Can't find dependent libraries
A library used by this sketch relies on native code that is not available.
UnsatisfiedLinkError: ..\Documents\Processing\libraries\openkinect_processing\library\v2\msvc\libusb-1.0.dll: Can't find dependent libraries
at processing.opengl.PSurfaceJOGL.lambda$initAnimator$2(PSurfaceJOGL.java:426)
at java.base/java.lang.Thread.run(Thread.java:833)
UnsatisfiedLinkError: ..\Documents\Processing\libraries\openkinect_processing\library\v2\msvc\libusb-1.0.dll: Can't find dependent libraries
@qzorn4440 6 years ago
Hi, how can the Kinect measure a 3D object and reproduce the object on a 3D printer? Thanks.
@moisesh8600 8 years ago
Hi! Thanks for the tutorials, they're great. In this sketch you use the CameraParams for the V2 Kinect: do you have the CameraParams for V1, or somewhere I can find them?
@TheCodingTrain 8 years ago
+Moisés H The CameraParams don't exist for V1 unfortunately. But if you look in the library example files there is a version of this example for Kinect V1s.
@MegaMellojello 5 years ago
@@TheCodingTrain Amazing thank you !
@bitsofadi 7 years ago
Thanks for the tutorial! I was wondering if there is a way to create an outline of the user's body?
@TheCodingTrain 7 years ago
Search for "edge detection" algorithms. Also, I would suggest asking this at forum.processing.org? It's easier for me and others to help that way (you can share code there easily!).
@valentinaserratisisa402 7 years ago
Can I use the raw depth data of a cloud and combine it with the Game of Life parameters? Maybe it's easier if I take the RGB (mostly greys) values and apply them to the Game of Life rules? Thanks!
@ZeinaKoreitem 8 years ago
Hi Daniel. Thank you for a fantastic tutorial series. I am an architect and I have been looking into exporting the point cloud generated by a Kinect in Processing into an .obj file. I am using Kinect v1. Exporting the points in real time would be perfect, but I am happy to only export a few static frames for now; the Kinect would essentially act as a scanning tool. I found a few old tutorials on the superCAD library, but the library seems to be outdated and no longer exists. Do you have any suggestions? Just to give you an idea, the goal is to then import the .obj files (.ply or .stl would work too) into 3D modeling software such as Rhinoceros, MeshLab, Blender or Maya. Thank you very much and I really appreciate your generosity in sharing all this knowledge! Z
@TheCodingTrain 8 years ago
+Z Krtm take a look at this library (not sure if it's been kept up to date). github.com/nervoussystem/OBJExport I would ask on forum.processing.org, there must be a library that does this!
@madmaxkal 6 years ago
If you've had success with this, please let me know. I will need to do something similar shortly.
@gsebastiansg 3 years ago
@@madmaxkal did you get close?
@Ashwin436 7 years ago
Thank you so much!! Helps a lot.
@ambientsoda106 6 years ago
Hi, where is the OpenKinect API reference documentation? I found that Kinectv1.getDepthHeight() does not work for v1 and you have to just use Kinect.height(), because I tried it in an earlier example...
@darinbasile6754 7 years ago
This is gold. Thank you.
@TheCodingTrain 7 years ago
you're welcome!
@darinbasile6754 7 years ago
The Coding Train Seriously thank you. Just starting out with coding, and it's a bit intimidating to me, but your vids make it a little less so.
@MegaReplay 4 years ago
Hello, first I'd like to say that your videos are amazing and very informative (and you are great as well). I need your advice please: can I map the room and draw only in pixels where the Kinect detected a change? Can you help me with that please?
@MegaReplay 4 years ago
Basically I want to remove the background including the floor
@dennisholscher3182 4 years ago
Try out a threshold. int minDepth = 0; int maxDepth = 1200; -> everything between the closest detectable point and 1.2 meters. When you draw the point: if (depth[offset] >= minDepth && depth[offset] <= maxDepth)
@lilytigerify 7 years ago
Man your video is AMAZING!!!! Love your enthusiasm XDD
@TheCodingTrain 7 years ago
Thank you!
@carl123pune 7 years ago
Hi! Is it possible to create an interactive projection using depth maps and the Processing language??
@DIYminicomputadores 5 years ago
Hi! Well done. I'm new to 3D imaging and I'd like to know where I can get a script like the one you're showing, but one that gets a file from the Kinect image. Something like an .stl. Thanks a million
@realcygnus 7 years ago
awesome stuff
@buzzkaeokwuanoi6188 7 years ago
Hey Daniel, how can I get the raw RGB & infrared data in the face area? Then I need to save it to .csv or .txt. I found many ways but I still can't get a solution. Please help me if possible. Best regards, Buzz
@MichiSzerman 8 years ago
I'm having an issue when it comes to the PVector depthToPointCloudPos. It's throwing me an error that cameraParams.cx & CameraParams.cy do not exist. Has something changed? Where can I view that in the library?
@TheCodingTrain 8 years ago
+Michelle Sherman Make sure you have the most recent Processing (3.0.2) and version of the library. I think this issue has been resolved. If not you can post here: github.com/shiffman/OpenKinect-for-Processing/issues/
@franciscoaliaga9698 3 years ago
Um, this is really old, but we can calculate the plane of the wall and then filter just in front of the wall
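The wall-filtering idea above reduces to a signed-distance test once the wall plane is known (a point on the wall plus a unit normal pointing back toward the camera, e.g. from a plane fit over background pixels). A sketch in plain Java; the margin value is an assumption:

```java
// Keep only points on the camera side of a known wall plane.
class WallFilter {
    // Signed distance of p from the plane through p0 with unit normal n.
    public static double signedDistance(double[] p, double[] p0, double[] n) {
        return (p[0] - p0[0]) * n[0] + (p[1] - p0[1]) * n[1] + (p[2] - p0[2]) * n[2];
    }

    // True if p sits more than marginMm in front of the wall.
    public static boolean inFrontOfWall(double[] p, double[] p0, double[] n, double marginMm) {
        return signedDistance(p, p0, n) > marginMm;
    }
}
```

Unlike a fixed depth threshold, this still works when the camera is not facing the wall head-on, since the cutoff follows the plane rather than a constant z value.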
@matiaszzz 4 years ago
Where is the playlist for this video series?
@fable59 8 years ago
Hi Daniel, when I tried to run the example code, the console says "kinect.depthWidth cannot be resolved or is not a field." How do I resolve this issue? I am using the Kinect v1. Thank you very much!
@TheCodingTrain 8 years ago
+fable59 It's kinect.width for v1. Will be updating the examples that go along with this tutorial soon!
@Gabirell 8 years ago
Hi! I'm trying to select a portion of the Kinect's depth to set a custom threshold to separate a body from the background. The problem is that I'm using a Kinect 1, so when I tweak the code (changing kinect2 to kinect in the PointCloud2 and other examples) Processing says that some functions and variables don't exist, like "initDevice", "depthHeight", "depthWidth", and the class "KinectTracker" in this example. I've tried your "Kinect Processing Demo" example and it works like a charm... any clues? (I've posted a similar question on GitHub too.) Thanks!
@Gabirell 8 years ago
+Gabriel Netto My question is partly answered by Zaina Squid below (the functions part: depthHeight is height on Kinect 1, and so on). Where can I find the correct syntax for Kinect 1? As I only started coding in Processing recently, I just can't figure out some things... and I'm a little lost. I try to understand the reference provided (shiffman.net/p5/kinect/reference/org/openkinect/processing/Kinect.html) but I don't understand it as much as I would like... any hints? Thanks!
@TheCodingTrain 8 years ago
+Gabriel Netto I've now got it on my list to make v1 versions of these examples, stay tuned!
@imtlac 4 years ago
Pride run ☺️❤️
@ezequielrivas2966 7 years ago
Hello, I like your tutorials and explanations of how to use the Kinect. Is it possible to get in contact by email or video call? I'm working on a project to get a depth image from the Kinect to use as an interpreter of sign language, thanks
@AngeVelandia 8 years ago
Hi! I'm trying to use this line of code for Kinect v1: int[] depth = kinect.getRawDepth(); but it doesn't seem to work. Anyone know why?
@karinalopez6744 6 years ago
Hello, I'm having trouble getting the raw depth to control the tint from a .mov file. Does anyone know how to make a video interactive with the Kinect?
@allenwyen6054 5 years ago
Hey, I love your video, but the script downloaded from GitHub says "No Device Connected / Cannot Find Devices". Is it because I was running on Windows? I look forward to your answer. Thx
@briandinh9504 4 years ago
Use Zadig to install all the libusbK drivers. Remember to go to Options -> List All Devices to install all the drivers needed. You may want to use the Zadig 2.0.1 version if you're using Kinect v1.
@lechugatecnica2998 1 year ago
I watched most of your videos but I can't find my goal. In my project I have to separate what the Kinect image sees at 640x480 into 100 frames, where each frame has a reading from 0 to 10, where 0 is 0 meters and 10 is 2 meters. Then each frame has to send that information to move a servo assigned to that frame; for example, if it sees a 5, move 45°. Is it possible? I don't know how. If necessary I'll subscribe for a year! But tell me if it's possible. I tried with AI but got the same result. Thanks, regards!
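What the comment above describes can be sketched as a grid downsample, with assumptions throughout: divide a 640x480 raw depth frame into a 10x10 grid, average each cell, map 0 to 2000 mm onto a 0 to 10 reading, and scale the reading to a servo angle (5 -> 45 degrees, as in the comment):

```java
// Downsample a depth frame into grid-cell readings and servo angles.
class DepthGrid {
    // Map an average depth in mm (clamped to 0..2000) onto a 0..10 reading.
    public static int toReading(double avgDepthMm) {
        double clamped = Math.max(0, Math.min(2000, avgDepthMm));
        return (int) Math.round(clamped / 2000.0 * 10.0);
    }

    // Scale a 0..10 reading to a 0..90 degree servo angle.
    public static int toServoAngle(int reading) {
        return reading * 9; // reading 5 -> 45 degrees
    }

    // Average depth of grid cell (gx, gy) in a w*h depth array, 10x10 grid.
    public static double cellAverage(int[] depth, int w, int h, int gx, int gy) {
        int cw = w / 10, ch = h / 10;
        long sum = 0;
        for (int y = gy * ch; y < (gy + 1) * ch; y++) {
            for (int x = gx * cw; x < (gx + 1) * cw; x++) {
                sum += depth[x + y * w];
            }
        }
        return (double) sum / (cw * ch);
    }
}
```

The 100 angles could then be sent to an Arduino driving the servos (e.g. over serial); that transport layer is outside this sketch.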
@allanhagelstrom2399 8 years ago
When I try using the point cloud example that came with the library, a message appears in the console saying "isochronous transfer error 1"
@TheCodingTrain 8 years ago
+Allan Hagelstrom Does the example run OK? I think you can ignore the error.
@mariakomal9534 3 years ago
How can I get the raw depth values in Python????
@joseordaz3289 5 years ago
Anyone know why these examples are running so slowly? I'm using a MacBook Pro 2015.
@hanscristian9742 6 years ago
Hi Daniel! How can I export the points of the cloud into Excel or any other program? I need the coordinates to make a 3D model of that. Thanks!
@madmaxkal 6 years ago
I'm in the same boat. Let me know if you figure it out please.
@gsebastiansg 3 years ago
@@madmaxkal did you find an answer?
@JohnSmith-ms8cf 8 years ago
My Processing said "The function kinect.getRawDepth(); does not exist." Can you help me?
@TheCodingTrain 8 years ago
+John Smith Could you post your full code and ask at forum.processing.org? Feel free to link from here.
@samuelcoldicutt1570 8 years ago
Hi, does the link that you have included in the description have the actual code that is used at about 6 minutes, and if so, which files on GitHub are they? :) thanks
@TheCodingTrain 8 years ago
+Samuel coldicutt That one is here: github.com/shiffman/Video-Lesson-Materials/tree/master/code_kinect/PointCloud2
In Kinect v1, kinect.depthWidth doesn't work. What to do?
@TheCodingTrain 8 years ago
+Sankalp porwal it's just kinect.width for v1, sorry!
@jacobdavidcunningham1440 4 years ago
5:14 song starts playing: "My Girls" by Animal Collective
@end-quote 4 years ago
peacebone
@Mrsztt 7 years ago
When I run this it says I'm mixing static and active modes with [stroke(255);], help!!
@katiemitchell7387 4 years ago
Did you solve this issue?
@rafaelccolmanetti 7 years ago
hey! You are fucking amazing! Thank you very much for this awesome tutorial! I study graphic design in the Netherlands and I started learning Processing because of one of my classes! I got in contact with the Kinect because I needed to do an installation, and then my teacher recommended me your videos! I can't express with words how much this video helped me! I did the tutorial, plus I added 6 more layers side by side with different colours! Now I'm in the second stage of my installation and I would like to ask you one question! I want to use those different layers with the tint command, but it only works with images! For the last part, in the background I'm running music; do you think it's possible to control the speed of the music with my hands? For instance, if I put my right hand towards the right, the music would go faster; with the left hand, the music would go slower; and if you draw a circle in the air you would make a loop! I was trying to show you the video I did, but I can't put videos here! I was so happy because after several weeks trying to run those codes, with your videos I managed to do it!! Thank you very very much for all these classes! It's because of the effort of people like you that more and more people can get access to top-level knowledge! Spread this magic again and again! Cheers!
@foofoighter 7 years ago
Does anybody have a step-by-step tutorial on how to get the Kinect to work with Processing on Win 10?
@wearemiddream 4 years ago
lol i like this dude
@furqanfirat3558 5 years ago
Which language are you using...
@TheCodingTrain 5 years ago
This video uses Processing (which is built on top of the Java programming language). For more info, visit processing.org and also this video might help kzbin.info/www/bejne/d57PcpyBqM6sZtE.
@Confuseddave 5 years ago
Thanks to xkcd, every time I see a Moiré pattern (such as at 0:49) I get Dean Martin stuck in my head.
@estefanomolina8071 4 years ago
That's so cool man, now I was wondering what the code should be to send the frames to a Syphon server? Could you help me?
@Mirandorl 6 years ago
Why does it not just use a 2d array?
@cvabds 6 years ago
How could I only find this now
@sankalpporwal3337 8 years ago
How to remove the isochronous transfer error?
@TheCodingTrain 8 years ago
+Sankalp porwal I wish I knew! Do the examples work ok for you?
@sankalpporwal3337 8 years ago
+Daniel Shiffman It was because my USB hub was not working properly. By the way, I am wondering if there is a tutorial for hand tracking. I actually want to control LEDs through Kinect hand tracking and Arduino
@TheCodingTrain 8 years ago
+Sankalp porwal The next videos show a way of doing hand tracking. I would investigate github.com/ThomasLengeling/KinectPV2 also; I will be making more tutorials with this library soon.
@user-id1kx6cs5k 5 years ago
Sir, how do I use .png files of RGB and depth images and convert them to a point cloud?
@allanhagelstrom2399 8 years ago
I am talking about the point cloud sketch, sorry
@bobsmall2870 7 years ago
Just a tip: next time you record a video, don't wear anything with green. It looks like there is a hole through you.
@subscripciones 7 years ago
How can I detect an object? Let's say I want to find a toy