12.3: Raw Depth Data - Point Clouds and Thresholds - Kinect and Processing Tutorial

243,784 views

The Coding Train

1 day ago

Comments: 169
@justfortrollpeople8531 4 years ago
Thank you for the explanation, this is really helping me with my final-year project, sir. I hope you get rich and can buy a Lamborghini.
@sebhch244 6 years ago
Man, awesome tutorial. I was watching Cyberpunk the documentary and suddenly started thinking about the Kinect, The Matrix, Altered Carbon, 3D... and here I am. Thanks a lot.
@Ovni121 7 years ago
Mapping the min/max threshold with the mouse was a brilliant idea! Nice video. Keep 'em up!
@ahtishamali431 5 years ago
lmao 😂😂
@culpritdesign 7 years ago
I watch your videos and when I finish them I think I suddenly know how to do the work you just presented, and then reality sets in and I take another sip of beer and go to sleep.
@explodewithlove 8 years ago
I'm so grateful we have enthusiastic teachers like yourself helping people like me get excited about programming!! :D May I ask how you are pulling off your magic with having the computer screen show behind you?
@TheCodingTrain 8 years ago
I'm using a green screen and Wirecast software.
@tametonic3985 2 years ago
I do have a suggestion for determining the threshold. You can combine your code with a library like DeepVision that will detect where your hand is on screen. Then you can use a mathematical formula (which I'll leave as a reply) that gets the distance between the hand and the camera. Using that distance in millimeters, and the on-screen pixel position where the DeepVision library detects your hand, you can make a threshold that is not a constant but changes based on where your hand is. Therefore you won't have to worry about standing a specific distance from the camera; the camera will just know where your hand is and base the thresholds on that.
@tametonic3985 2 years ago
Distance to Object(mm) = ( f(mm) * real height(mm) * image height(px) ) / ( object height(px) * sensor height(mm) )
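The formula above can be turned into a tiny helper for experimentation. A minimal sketch in plain Java (the class name, parameter names, and the sample values in main are mine, not Kinect specifications):

```java
// Sketch of the pinhole-camera distance formula from the comment above.
// All names and numbers here are illustrative placeholders.
public class DepthEstimate {
    // distance(mm) = f(mm) * realHeight(mm) * imageHeight(px)
    //              / (objectHeight(px) * sensorHeight(mm))
    static double distanceMm(double focalMm, double realHeightMm,
                             double imageHeightPx, double objectHeightPx,
                             double sensorHeightMm) {
        return (focalMm * realHeightMm * imageHeightPx)
                / (objectHeightPx * sensorHeightMm);
    }

    public static void main(String[] args) {
        // e.g. a 170 mm hand imaged 40 px tall on a 424 px tall image
        System.out.println(distanceMm(3.0, 170.0, 424.0, 40.0, 5.0) + " mm");
    }
}
```

Feeding the detected hand height in pixels into distanceMm() each frame gives the per-frame threshold the comment describes.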
@Sim0nPeter 7 years ago
This is awesome, thank you! The possibilities with this are limitless; can't wait to play around with the Kinect myself.
@mindaugasdudenas624 7 years ago
You are the person who inspired thousands not to be afraid to start coding. Thanks for that. I watched all the tutorials, but whenever there are pixel-related loops my Processing render lags, and I don't know the main reason, because it's a MacBook Pro 2015, which should do the job quite fast. Maybe you can tell me what you use, or whether there could be other reasons for this happening. Sorry for the question that is not 100 percent related to Processing. I would like to hear more about algorithms that optimize the computer's work so everything runs faster. Thanks again, you are the best. Keep rockin'.
@Monetai 6 years ago
I used this technique combined with blob detection for an artistic installation where the Kinect filmed people touching a wall, from the top. And I used the same "calibration technique" =) Happy to see I wasn't alone "hacking" the Kinect this way!
@javibaeza2644 2 years ago
I think I've never enjoyed a coding video more than this one! You are so inspiring!!
@besiix 4 years ago
Your enthusiasm is infectious. Thank you so much
@fiattypanich1306 8 years ago
Thanks a lot, you are one of the greatest teachers I've ever met!!
@TheCodingTrain 8 years ago
+Fiatty Panich Thanks for watching!
@kongkongterton9805 8 years ago
Thanks a lot for your very precise and fun tutorials! You made my efforts to tame the Kinect as a graphical tool so much easier. This deserves so much attention. Thank you for taking the time.
@TheCodingTrain 8 years ago
+Kong Kongterton You're welcome, thanks for the nice feedback!
@uushk6631 2 years ago
So dope... btw, you can see the screen through your body lol, the green bar on your shirt.
@THEcucufate 4 years ago
You are very enjoyable to watch. I think I can get the idea in my head working thanks to your videos, so thanks =]
@dennisholscher3182 4 years ago
How can I move the rotation point/axis to the center of the scene? Right now the scene rotates out of the screen. I tried to find an answer but didn't find anything. Please help. Thanks for the great tutorial!
@borjonx 6 years ago
This is one of the most useful videos I've found - thanks for sharing!!!
@katiemitchell7387 4 years ago
I'm having issues with this. Could you copy and paste the code to me, as I feel there is somewhere I'm going wrong?
@studiobits2atoms706 4 years ago
I am very much in love with your tutorials. I can always come back to them :)
@heydindd 3 years ago
I kept getting an error saying "depthWidth cannot be resolved or is not a field". Is there any way to get around this? Thank you!
@jakewelch.design 3 years ago
Using this for a design project, thank you!!
@ahmedshingaly763 4 years ago
I love how you explain your ideas. Keep making these awesome videos!
@stickyb1t877 8 years ago
This video should be used in every computer vision class to teach students how to reverse camera projection using depth information and focal distance (instead of learning it the hard way without any experimentation).
@TheCodingTrain 8 years ago
+davide sito Thanks, I'm glad to hear it's useful!
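For readers curious what that "reverse camera projection" looks like concretely: the video's depthToPointCloudPos() applies the standard pinhole back-projection x = (px - cx) * z / fx, y = (py - cy) * z / fy. A minimal standalone sketch, with placeholder intrinsics rather than the library's real CameraParams values:

```java
// Back-project a depth pixel (px, py) with depth z into camera space.
// The intrinsic values below are placeholders for illustration only.
public class BackProject {
    static final double CX = 256.0, CY = 212.0; // principal point (px)
    static final double FX = 365.0, FY = 365.0; // focal lengths (px)

    // Returns {x, y, z} in the same units as the depth value z.
    static double[] toPoint(int px, int py, double z) {
        double x = (px - CX) * z / FX;
        double y = (py - CY) * z / FY;
        return new double[] { x, y, z };
    }

    public static void main(String[] args) {
        double[] p = toPoint(256, 212, 1000.0); // center pixel, depth 1000
        System.out.println(p[0] + ", " + p[1] + ", " + p[2]);
    }
}
```

A pixel at the principal point always maps to x = y = 0; pixels farther from the center fan out proportionally to their depth, which is why the point cloud spreads as objects recede.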
@jeffreycordova9082 8 years ago
I gave a thumbs up as soon as you hugged yourself... hahaha. Thanks for the great video!
@TheCodingTrain 8 years ago
+Jeffrey Cordova Hah, thank you!
@worthy2dy4 5 years ago
Awesome! I'm trying to map RPLIDAR data and a stepper motor step/angle to create real-time room mapping.
@zeetangled 8 years ago
Hi again! So in v1 it's kinect.width instead of kinect.depthWidth. It works well now! Thanks.
@TheCodingTrain 8 years ago
+Zaina Squid Indeed, that's right; sorry to be slow in the reply. I need to add that as an annotation!
@Gabirell 8 years ago
+Zaina Squid Thanks for the tip! Do you have another one about "initDevice"?
@TheCodingTrain 8 years ago
+Gabriel Netto I believe initDevice is not needed for the v1. My apologies for this; I need to make v1 versions of all the examples and will get to that soon. Keep reminding me!
@Gabirell 8 years ago
+Daniel Shiffman Thank you! I was wondering about forcing an initDevice because I'm writing a Kinect-to-Syphon sketch (using your code) for 2 devices (v1 1473), and only the first Kinect shows up when rendered on canvas, but "println" shows them both as recognized. I'm new to Processing... Maybe something is wrong with sending "createGraphics" instead of PGraphics?
@Gabirell 8 years ago
+Gabriel Netto Your example "MultipleServers" works great, but I couldn't send PGraphics to Syphon's canvas. It's my fault! But I would very much appreciate some guidance... ;)
@mrtnmur 5 years ago
Hi there, how can you set the min/max threshold with the data point visualization? I'm using a Kinect v2 on a Mac. Huge thanks!
@johnclark1364 7 years ago
That was the most helpful guide ever
@jordanwright7398 8 years ago
Hi Daniel, very exciting stuff. I have the point cloud rendering based on your tutorials, but am now at a loss regarding: 1. recording the point cloud data, 2. exporting it as a CSV file. I would like to import the CSV into Cinema 4D, and I have seen some Python scripts online that I may be able to use. Any way you could advise me on this would be great. Conversely, if anyone here knows how to make this into some sort of production-usable pipeline, I would be happy to compensate/hire them for something usable. Thank you for your efforts here, Daniel - I know they are appreciated by many of us! Respectfully, Jordan
@GUINTHERKOVALSKI 6 years ago
Hi! Have you found any solution? I will need to do the same things.
@patrikkucavik148 6 years ago
Since the OP has not answered and I need the same thing, have you found any solution?
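For later readers with the same CSV question: one simple route is to write each back-projected point as an x,y,z row. A hedged sketch in plain Java (the sample points are invented; in a Processing sketch you would use createWriter("cloud.csv") instead of a StringWriter and call this once per captured frame):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Write point-cloud coordinates as CSV rows ("x,y,z"), a format that
// Cinema 4D / MeshLab import pipelines can consume. Sample data only.
public class PointCloudCsv {
    static String toCsv(float[][] points) {
        StringWriter sw = new StringWriter();
        PrintWriter out = new PrintWriter(sw);
        out.println("x,y,z"); // header row
        for (float[] p : points) {
            out.println(p[0] + "," + p[1] + "," + p[2]);
        }
        out.flush();
        return sw.toString();
    }

    public static void main(String[] args) {
        float[][] sample = { { 0f, 0f, 1000f }, { 12.5f, -3f, 980f } };
        System.out.print(toCsv(sample));
    }
}
```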
@benjaminpmartin 7 years ago
Great videos, thanks so much! How would one create multiple depth thresholds in the same sketch?
@fruslanguag5256 7 years ago
More than interesting. If I wanted to represent 3D points, would that formula work with an XYZ camera? Because you don't use the camera's z value.
@AlexanderIvanovOfficial 7 years ago
Ahhh, what a missed opportunity - instead of clipping the wall, you should have used the fact that it's already a green screen and "just" keyed it out. Keep up the awesome stuff!
@lucasalamo3448 6 years ago
Love these videos, and thank you for sharing your knowledge!! How do I obtain the code for a simple point cloud feed? And how do I plug it into Processing? I literally just started coding yesterday, just for the Kinect.
@user-pd4ew5li4b a year ago
Is there a way to record this data and load it into a program where all the dots move and you can orbit, pan, and zoom in to observe it after capturing? Please, thanks.
@ChopLabalagun 8 years ago
Is the sample accessible? This looks amazing.
@TheCodingTrain 8 years ago
Code is all here: github.com/CodingRainbow/Rainbow-Code
@angledcoathanger 7 years ago
Dude, you are just bloody amazing.
@nanovolt3717 7 years ago
The distance you are using is in terms of what, cm or inches? Nice video, by the way!
@narutro2 5 years ago
Hi Shiffman. I am thinking about making an interactive project using a Kinect. Where can I get those Kinects now? Or can I make one with my own camera and computer vision libraries such as OpenCV?
@CDBelfer4 7 years ago
I was hoping you would set the color of the visible values to the depth value :p Also, I feel like keeping track of only the pixels that change might be useful for tracking movement, since you could just keep track of your hand's position and check against the changing values to see where and how much your hands moved. I need a Kinect :(
@behrzz 4 years ago
Hi Daniel, great tutorials. I was wondering if you can help me with some sources for a particle effect with Kinect + Processing. I would like to capture live video and turn the movement into a particle effect - not the background, just the individual.
@allanhagelstrom2399 8 years ago
Hi, thanks a lot for all your videos! They are great. I am trying to learn Processing for doing projection mapping. I wanted to ask you if there is an example of this for Kinect v1? Thanks a lot again!
@zeetangled 8 years ago
Hello, when I tried running this it gives me the error "depthWidth cannot be resolved or is not a field". Do you know if it is because I'm using a v1 Kinect?
@nicholasadrian5185 8 years ago
+Zaina Squid I'm using Kinect v1 as well. I found that you can just change kinect2.depthWidth to kinect.width.
@Jonascarlsson80 4 years ago
I have the Intel RealSense D415 depth camera and I would really like to do this with that camera, but since I am quite new to Processing I don't know where to start. I have added the RealSense library to processing and the examples there work great, but I would like to visualize a real time point cloud like in this video. Any help would be much appreciated!
@vedantkumar6075 4 years ago
How can I save the point cloud coordinates so that I can use them for 3D reconstruction or correspondence matching? Also, can I use the PCL library in the Processing IDE?
@chrislos7944 8 years ago
Dear Dan, thanks so much again for your perfect, epic tutorials. I have one question: it should be possible somehow to compare a static background image with the recent incoming body movements, am I right? (In case I took a background picture without having myself in the photo.) Would it be difficult to realize something like that? I'm trying to track an average x,y,z body position in a space, but my problem is some obstacles right next to me that also get tracked...
@TheCodingTrain 8 years ago
+Chris Los Yes, you'll want to make a copy of the depth data in a separate array or image and then compare that to the current depth map to see which pixels are different. If you work on Windows with the MS SDK, it will do almost all of this for you too! github.com/ThomasLengeling/KinectPV2 Going to make some video tutorials about this soon.
@chrislos7944 8 years ago
+Daniel Shiffman Thanks so much for your quick response. Makes sense to me. Unfortunately I have to work on OS X with a Kinect v1 because lots of my students are working with this configuration. I'll give it a try today. Hopefully I get the machine to surprise me with a valid xyz position. Thanks again. Best, Christian
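Daniel's copy-and-compare suggestion can be sketched roughly like this (the helper name, array contents, and tolerance are all illustrative, not part of the library):

```java
// Compare a stored background depth frame against the current frame and
// mark pixels that moved more than `tolerance` (in raw depth units).
public class BackgroundDiff {
    static boolean[] changedMask(int[] background, int[] current, int tolerance) {
        boolean[] changed = new boolean[current.length];
        for (int i = 0; i < current.length; i++) {
            // Skip invalid (zero) readings from either frame.
            if (background[i] == 0 || current[i] == 0) continue;
            changed[i] = Math.abs(current[i] - background[i]) > tolerance;
        }
        return changed;
    }

    public static void main(String[] args) {
        int[] bg  = { 2000, 2000, 2000, 0 };    // wall at depth 2000
        int[] cur = { 2000, 1500, 1990, 1200 }; // a body enters the frame
        boolean[] m = changedMask(bg, cur, 50);
        System.out.println(m[0] + " " + m[1] + " " + m[2] + " " + m[3]);
    }
}
```

Capture the background array once at startup (with nobody in frame), then average the x,y positions of the true pixels each frame to get the body position the comment asks about; the obstacles stay in the background frame and are ignored.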
@shahidulabir261 7 years ago
Hello, I have been following your tutorials for quite a while and these are really cool and awesome. This might be a little off topic, but is it possible to save a still point cloud in P3D in file formats like PTX, PTS, XYZ, TXT, etc. that 3D rendering software like MeshLab, 123D, RealityCapture, ContextCapture, etc. can import? Can you help me out here? I am actually doing a project on a room-scanning robot, and I did manage to scan my room and make a 3D point cloud display of my room in Processing.
@madmaxkal 6 years ago
Did you figure out a solution? I am looking to do something similar.
@_astrolabius7104 8 years ago
Hey, thanks for the tutorials; I've been learning a lot from them. I'm trying to use Fisica + OpenKinect and I'm having some problems. Do you think you could give me a hand? I would really appreciate it. The code is simple, but I don't know how to add the depth values to an FBlob in Fisica, to make them move with the data taken from the Kinect.
@Nanotopia 7 years ago
Love your tutorials! Managed to get the first one to run; anything past that is a no. Now I keep receiving "The function kinect.initIR() does not exist." Working on OS X, Kinect v1?? Thanks again!
@anginiotorres8292 7 years ago
Thank you very much, from Guadalajara, Jalisco. If you come some day, don't think twice about calling - you have a home and friends here. Thanks for your videos.
@brendanjames307 8 years ago
Hey Daniel, how would you do the depth threshold when you're using the point cloud?
@ahtiolavi 7 years ago
I've done it this way, using Daniel's point cloud code: first declare the min and max thresholds at the beginning of the code. Then go to the loop where point() is drawn, and wrap it in a conditional: if d (the depth variable) > minThresh and < maxThresh, draw the point; else, set the point's color to black. This way only the pixels inside your desired area will be colored white, and the ones outside it will be black. Try it out. If you can't get it working, ask me. I'm not a pro though.
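The conditional described above boils down to a per-point depth-band test. A standalone sketch of just that logic (threshold values are examples only; in the Processing sketch the result would drive stroke() before each point()):

```java
// Depth-band test used to color point-cloud points: inside the band ->
// draw white, outside -> black/skip. Raw depth units; example values.
public class DepthBand {
    static int minThresh = 300;
    static int maxThresh = 1000;

    static boolean inBand(int d) {
        return d > minThresh && d < maxThresh;
    }

    public static void main(String[] args) {
        int[] depths = { 100, 500, 999, 1500 };
        for (int d : depths) {
            // In Processing: stroke(inBand(d) ? 255 : 0); point(x, y, z);
            System.out.println(d + " -> " + (inBand(d) ? "draw" : "skip"));
        }
    }
}
```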
@raultelliskivi4155 a year ago
Hello. I know this video is rather old, but are these examples compatible with Processing 4? I'm very new to programming and I got an error that some dependent libraries are missing.
@raultelliskivi4155 a year ago
UnsatisfiedLinkError: ..\Documents\Processing\libraries\openkinect_processing\library\v2\msvc\libusb-1.0.dll: Can't find dependent libraries
java.lang.UnsatisfiedLinkError: ..\Documents\Processing\libraries\openkinect_processing\library\v2\msvc\libusb-1.0.dll: Can't find dependent libraries
A library used by this sketch relies on native code that is not available.
UnsatisfiedLinkError: ..\Documents\Processing\libraries\openkinect_processing\library\v2\msvc\libusb-1.0.dll: Can't find dependent libraries
at processing.opengl.PSurfaceJOGL.lambda$initAnimator$2(PSurfaceJOGL.java:426)
at java.base/java.lang.Thread.run(Thread.java:833)
@qzorn4440 6 years ago
Hi, how can the Kinect measure a 3D object and reproduce the object on a 3D printer? Thanks.
@moisesh8600 8 years ago
Hi! Thanks for the tutorials, they're great. In this sketch you use the CameraParams for the Kinect v2: do you have the CameraParams for v1, or somewhere I can find them?
@TheCodingTrain 8 years ago
+Moisés H The CameraParams don't exist for v1, unfortunately. But if you look in the library example files there is a version of this example for Kinect v1s.
@MegaMellojello 5 years ago
@@TheCodingTrain Amazing, thank you!
@bitsofadi 7 years ago
Thanks for the tutorial! I was wondering if there is a way to create an outline of the user's body?
@TheCodingTrain 7 years ago
Search for "edge detection" algorithms. Also, I would suggest asking this at forum.processing.org. It's easier for me and others to help that way (you can share code there easily!).
@valentinaserratisisa402 7 years ago
Can I use the raw depth data of a cloud and combine it with the Game of Life parameters? Maybe it's easier if I take the RGB (mostly greys) parameters and apply them to the Game of Life rules? Thanks!
@ZeinaKoreitem 8 years ago
Hi Daniel. Thank you for a fantastic tutorial series. I am an architect and I have been looking into exporting the point cloud generated by a Kinect in Processing into an .obj file. I am using Kinect v1. Exporting the points in real time would be perfect, but I am happy to only export a few static frames for now; the Kinect would essentially act as a scanning tool. I found a few old tutorials on the superCAD library, but the library seems to be outdated and no longer existent. Do you have any suggestions? Just to give you an idea, the goal is to then import the .obj files (.ply or .stl would work too) into 3D modeling software such as Rhinoceros, MeshLab, Blender or Maya. Thank you very much, and I really appreciate your generosity in sharing all this knowledge! Z
@TheCodingTrain 8 years ago
+Z Krtm Take a look at this library (not sure if it's been kept up to date): github.com/nervoussystem/OBJExport I would ask on forum.processing.org; there must be a library that does this!
@madmaxkal 6 years ago
If you've had success with this, please let me know. I will need to do something similar shortly.
@gsebastiansg 3 years ago
@@madmaxkal Did you get close?
@Ashwin436 7 years ago
Thank you so much!! Helps a lot.
@ambientsoda106 6 years ago
Hi, where is the OpenKinect API reference documentation? I found Kinectv1.getDepthHeight() does not work for v1 and you have to just use kinect.height, because I tried it in an earlier example...
@darinbasile6754 7 years ago
This is gold. Thank you.
@TheCodingTrain 7 years ago
You're welcome!
@darinbasile6754 7 years ago
@The Coding Train Seriously, thank you. Just starting out with coding, and it's a bit intimidating to me, but your vids make it a little less so.
@MegaReplay 4 years ago
Hello! First, I'd like to say that your videos are amazing and very informative (and you are great as well). I need your advice, please: can I map the room and draw only in pixels where the Kinect detected a change? Can you help me with that, please?
@MegaReplay 4 years ago
Basically I want to remove the background, including the floor.
@dennisholscher3182 4 years ago
Try out a threshold. int minDepth = 0; int maxDepth = 1200; -> everything between the closest detectable point and 1.2 meters. When you draw the point: if (depth[offset] >= minDepth && depth[offset] <= maxDepth) { ... }
@lilytigerify 7 years ago
Man, your video is AMAZING!!!! Love your enthusiasm XDD
@TheCodingTrain 7 years ago
Thank you!
@carl123pune 7 years ago
Hi! Is it possible to create an interactive projection using depth maps and the Processing language?
@DIYminicomputadores 5 years ago
Hi there! Well done. I'm new to 3D images and I'd like to know where I can get a script like the one you're showing, but that produces a file from the Kinect image - something like an .stl. Thanks a million!
@realcygnus 7 years ago
awesome stuff
@buzzkaeokwuanoi6188 7 years ago
Hey Daniel, how can I get the raw RGB & infrared data in the face area? Then I need to save it to .csv or .txt. I have looked at many ways but still can't find a solution. Please help me, if possible. Best regards, Buzz
@MichiSzerman 8 years ago
I'm having an issue when it comes to the PVector depthToPointCloudPos. It's throwing me an error that CameraParams.cx & CameraParams.cy do not exist. Has something changed? Where can I view that in the library?
@TheCodingTrain 8 years ago
+Michelle Sherman Make sure you have the most recent Processing (3.0.2) and version of the library. I think this issue has been resolved. If not, you can post here: github.com/shiffman/OpenKinect-for-Processing/issues/
@franciscoaliaga9698 3 years ago
Um, this is really old, but we can calculate the plane of the wall and then filter just in front of the wall.
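A rough sketch of that idea: derive the wall plane's normal from three sampled wall points, then keep only points whose signed distance puts them in front of the plane. All sample coordinates below are invented:

```java
// Filter points "in front of" a wall plane defined by three sampled
// wall points. Sample coordinates and margin values are invented.
public class WallFilter {
    // Plane normal via cross product of two in-plane edge vectors.
    static double[] planeNormal(double[] a, double[] b, double[] c) {
        double[] u = { b[0]-a[0], b[1]-a[1], b[2]-a[2] };
        double[] v = { c[0]-a[0], c[1]-a[1], c[2]-a[2] };
        return new double[] {
            u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]
        };
    }

    // Signed distance from point p to the plane through onPlane with
    // normal n, normalized so the result is in real units.
    static double signedDistance(double[] n, double[] onPlane, double[] p) {
        double len = Math.sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
        return (n[0]*(p[0]-onPlane[0]) + n[1]*(p[1]-onPlane[1])
              + n[2]*(p[2]-onPlane[2])) / len;
    }

    public static void main(String[] args) {
        // Wall: the z = 2000 plane, sampled at three points.
        double[] a = {0, 0, 2000}, b = {1, 0, 2000}, c = {0, 1, 2000};
        double[] n = planeNormal(a, b, c);
        // -500: this point sits 500 units on the camera side of the wall.
        System.out.println(signedDistance(n, a, new double[]{5, 5, 1500}));
    }
}
```

Unlike a fixed depth band, this keeps working when the camera is not pointed squarely at the wall, since the cut follows the plane's tilt.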
@matiaszzz 4 years ago
Where is the playlist for this video series?
@fable59 8 years ago
Hi Daniel, when I tried to run the example code, the console says "kinect.depthWidth cannot be resolved or is not a field." How do I resolve this issue? I am using the Kinect v1. Thank you very much!
@TheCodingTrain 8 years ago
+fable59 It's kinect.width for v1. Will be updating the examples that go along with this tutorial soon!
@Gabirell 8 years ago
Hi! I'm trying to select a portion of the Kinect's depth to set a custom threshold to separate a body from the background. The problem is that I'm using a Kinect v1, so when I tweak the code (changing kinect2 to kinect in the PointCloud2 and other examples), Processing reports that some functions and variables don't exist, like "initDevice", "depthHeight", "depthWidth", and the class "KinectTracker" in this example. I've tried your "Kinect Processing Demo" example and it works like a charm... any clues? (I've posted a similar question on GitHub too.) Thanks!
@Gabirell 8 years ago
+Gabriel Netto My question is partly answered by Zaina Squid below - the functions part (depthHeight is height on Kinect v1, and so on). Where can I find the correct syntax for Kinect v1? As I started with Processing coding only recently, I just can't work some things out... and I'm a little lost. I try to understand the reference provided (shiffman.net/p5/kinect/reference/org/openkinect/processing/Kinect.html) but I don't understand it as much as I would like... any hints? Thanks!
@TheCodingTrain 8 years ago
+Gabriel Netto I've now got it on my list to make v1 versions of these examples; stay tuned!
@imtlac 4 years ago
Pride run ☺️❤️
@ezequielrivas2966 7 years ago
Hello, I like your tutorials and explanations of how to use the Kinect. Is it possible to get in contact by email or video call? I'm working on a project to get depth images from the Kinect to use as an interpreter of sign language. Thanks!
@AngeVelandia 8 years ago
Hi! I'm trying to use this code line for Kinect v1: int[] depth = kinect.getRawDepth(); but it doesn't seem to work. Anyone know why?
@karinalopez6744 6 years ago
Hello, I'm having trouble getting the raw depth to control the tint of a .mov file. Does anyone know how to make a video interactive with the Kinect?
@allenwyen6054 5 years ago
Hey, I love your video, but the sketch downloaded from GitHub reports "No Device Connected / Cannot Find Devices". Is it because I was running on Windows? I look forward to your answer. Thanks!
@briandinh9504 4 years ago
Use Zadig to install all the libusbK drivers. Remember to go to Options -> List All Devices to install all the drivers needed. You may want to use the Zadig 2.0.1 version if you're using Kinect v1.
@lechugatecnica2998 a year ago
I watched most of your videos but I can't find my goal. In my project I have to divide what the Kinect sees in 640x480 into 100 frames, where each frame has a reading from 0 to 10, where 0 is 0 meters and 10 is 2 meters. Then each frame has to send that information to move a servo assigned to it; for example, if it sees a 5, move 45°. Is it possible? I don't know how. If necessary I'll subscribe for a year! But tell me if it's possible. I tried with AI but got the same result. Thanks, regards!
@allanhagelstrom2399 8 years ago
When I try using the point cloud example that came with the library, a message appears in the console saying "isochronous transfer error 1".
@TheCodingTrain 8 years ago
+Allan Hagelstrom Does the example run OK? I think you can ignore the error.
@mariakomal9534 3 years ago
How can I get the raw depth values in Python?
@joseordaz3289 5 years ago
Anyone know why these examples are running so slowly? I'm using a MacBook Pro 2015.
@hanscristian9742 6 years ago
Hi Daniel! How can I export the point cloud into Excel or any other program? I need the coordinates to make a 3D model from them. Thanks!
@madmaxkal 6 years ago
I'm in the same boat. Let me know if you figure it out, please.
@gsebastiansg 3 years ago
@@madmaxkal Did you find an answer?
@JohnSmith-ms8cf 8 years ago
My Processing says "The function kinect.getRawDepth(); does not exist." Can you help me?
@TheCodingTrain 8 years ago
+John Smith Could you post your full code and ask at forum.processing.org? Feel free to link from here.
@samuelcoldicutt1570 8 years ago
Hi, does the link you have included in the description have the actual code used at about 6 minutes, and if so, which files on GitHub are they? :) Thanks!
@TheCodingTrain 8 years ago
+Samuel coldicutt That one is here: github.com/shiffman/Video-Lesson-Materials/tree/master/code_kinect/PointCloud2
@marcelochsendorf9521 7 years ago
github.com/CodingTrain/Rainbow-Code/tree/master/Tutorials/Processing/12_kinect/sketch_12_3_PointCloud2
@Dareeo 7 years ago
You are great!
@karankatiyar5414 6 years ago
Where can I get the data to practice the tutorial?
@eedymonnij7900 3 years ago
Can we do this with a webcam?
@mukeshsingh9907 7 years ago
How can we record depth in WebM format?
@sankalpporwal3337 8 years ago
In Kinect v1, kinect.depthWidth doesn't work. What to do?
@TheCodingTrain 8 years ago
+Sankalp porwal it's just kinect.width for v1, sorry!
@jacobdavidcunningham1440 4 years ago
5:14 song starts playing: "My Girls" by Animal Collective
@end-quote 4 years ago
peacebone
@Mrsztt 7 years ago
When I run this it says I'm running static and active modes with [stroke(255);]. Help!!
@katiemitchell7387 4 years ago
Did you solve this issue?
@rafaelccolmanetti 7 years ago
Hey! You are fucking amazing! Thank you very much for this awesome tutorial! I study graphic design in the Netherlands and I started learning Processing for one of my classes. I got in contact with the Kinect because I needed to do an installation, and then my teacher recommended your videos. I can't express in words how much this video helped me! I did the tutorial, plus I added 6 more layers side by side with different colours. Now I'm in the second stage of my installation and I would like to ask you one question: I want to use those different layers with the tint command, but it only works with images! Lastly, in the background I'm running music - do you think it's possible to control the speed of the music with my hands? For instance, if I move my right hand to the right the music would go faster, with the left hand the music would go slower, and if you draw a circle in the air you would make a loop. I was trying to show you the video I did, but I can't put videos here. I was so happy because after several weeks of trying to run those codes, I managed to do it with your videos!! Thank you very, very much for all these classes! It's because of the effort of people like you that more and more people can get access to top-level knowledge! Spread this magic again and again! Cheers!
@foofoighter 7 years ago
Does anybody have a step-by-step tutorial for getting the Kinect to work with Processing on Windows 10?
@wearemiddream 4 years ago
lol i like this dude
@furqanfirat3558 5 years ago
Which language are you using?
@TheCodingTrain 5 years ago
This video uses Processing (which is built on top of the Java programming language). For more info, visit processing.org and also this video might help kzbin.info/www/bejne/d57PcpyBqM6sZtE.
@Confuseddave 5 years ago
Thanks to xkcd, every time I see a Moiré pattern (such as at 0:49) I get Dean Martin stuck in my head.
@estefanomolina8071 4 years ago
That's so cool, man! Now I was wondering: how should the code look to send the frames to a Syphon server? Could you help me?
@Mirandorl 6 years ago
Why does it not just use a 2D array?
@cvabds 6 years ago
How could I only find this now?
@sankalpporwal3337 8 years ago
How do I remove the "isochronous transfer error"?
@TheCodingTrain 8 years ago
+Sankalp porwal I wish I knew! Do the examples work OK for you?
@sankalpporwal3337 8 years ago
+Daniel Shiffman It was because my USB hub was not working properly. By the way, I am wondering if there is a tutorial for hand tracking. I actually want to control LEDs through Kinect hand tracking and an Arduino.
@TheCodingTrain 8 years ago
+Sankalp porwal The next videos show a way of doing hand tracking. I would investigate github.com/ThomasLengeling/KinectPV2 also; I will be making more tutorials with this library soon.
@user-id1kx6cs5k 5 years ago
Sir, how can I use .png files of RGB and depth images and convert them to a point cloud?
@allanhagelstrom2399 8 years ago
I am talking about the point cloud sketch, sorry.
@bobsmall2870 7 years ago
Just a tip: next time you record a video, don't wear anything with green. It looks like there is a hole through you.
@subscripciones 7 years ago
How can I detect an object? Let's say I want to find a toy.