That's exactly what I meant about showing examples of the result before starting the tutorial. It helps a ton to understand what I'm going to learn before watching the whole video. Do you have any plans to create a tutorial on recommender systems? For example, YouTube recommending videos that I could watch. Keep up the good work! :)
@NicholasRenotte 3 years ago
You're the reason I did it!! Awesome suggestion 🙏 definitely, I don't have it planned soon but it's definitely on the cards!
@infiniteimprovement 3 years ago
I love his tutorials. I'm having a problem when trying to load the model. I keep getting an error message: "Didn't find op for builtin opcode 'RESIZE_BILINEAR' version '3'. Registration failed."
@NicholasRenotte 3 years ago
@@infiniteimprovement try using tensorflow 2.4.1
@infiniteimprovement 3 years ago
@@NicholasRenotte I tried that; for some reason it keeps restarting my kernel after I import TensorFlow. When I try to import tensorflow in a regular Python file I get the error "illegal hardware instruction"
@dhruvdatta1055 a month ago
@@NicholasRenotte damn, giving you a like just for this
@zhanezar 3 years ago
I wish all tutorials would use your format; it's such an obvious and user-friendly way to structure things
@satoshinakamoto5710 3 years ago
this is my 6th Renotte Tutorial. I hope to go thru all your amazing tutorials! :D
@TejrajParab 3 years ago
I love the fact that you upload so regularly.
@Mrdarkcloud88 2 years ago
Nicholas, thanks for the well-explained tutorial. You did mention that it is accurate at relatively close distance. To make the plotting of the coordinates more accurate, you can crop the frame to an aspect ratio of 1:1 before you send it to the interpreter. For example, I have a 640x480 webcam. I crop the image to 480x480. It seems to be more accurate, but you lose 80 px on each side of the frame. Code I used:
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print(width, height)
cropTopX = int((width - height) / 2)
cropBotX = int(width - cropTopX)
roi = [(cropTopX, 0), (cropBotX, height)]
cutout = frame[roi[0][1]:roi[1][1], roi[0][0]:roi[1][0], :]
Use cutout to send to the interpreter for processing
@bernienepper 2 years ago
nice fix
@ryanleung8202 2 years ago
@pirondinimarco You are correct; I used cv2.copyMakeBorder() to make the aspect ratio 1:1 with a black border, since I needed the bigger view for my project.
@mdshahbaz2128 3 years ago
I watch your videos regularly. I'm not following the code right now since I'm not working on building models in Python, but you are rocking and working so hard. Your videos are worth a lot. Still love to see your captions.
@NicholasRenotte 3 years ago
Thanks soooo much man 🙏🙏🙏
@birdropping 2 years ago
Your tutorials are incredible. Thank you for making this information so easily available!
@DrShaikAhmad 3 years ago
All of your tutorials are really awesome. I like the way you present and the flow.
@chenvinh9232 3 years ago
Every AI tutorial on your channel is well elaborated. I really appreciate that. Folks who watch your AI tutorials should obviously give you a LIKE and SUBSCRIBE. By the way, I'm wondering whether you will launch a full Data Science course on your channel along with the AI lessons on the horizon.
@raziehshahsavar9649 a year ago
It was amazing. I used this and changed my code with input frames for another target, and I want to give it input coordinates with 32 joints.
@KriGeta 3 years ago
Wow pose estimation, can we extract BVH out of it?
@NicholasRenotte 3 years ago
Still gotta look into that for you! Working on it atm :)
@tonoi17 3 years ago
I really like the format, with clear goals and objectives, well structured (like the German way 😂). Is MoveNet only implemented in Python? Is there a React Native TensorFlow version of it, or a tutorial series you plan to put out?
@tonoi17 3 years ago
Great to hear about the JS version. I tried to search for it but didn't really find it!
@matthewfritzie1861 2 years ago
I got the code working! I had to put the #draw keypoints and #draw edges code before the #make detections code to make it run
@priyambordoloi771 3 years ago
Amazing, just amazing. So what's the biggest difference you've found between MediaPipe and this one?
@Gabbosauro 3 years ago
I was about to post the same question haha.
@usamanaseer6464 3 years ago
Same Question
@dantealonso7174 3 years ago
+1
@NicholasRenotte 3 years ago
It's way faster and can run on just about any device! I've got a feeling they'll port this over to mediapipe soon!
@PUBUDUCG 3 years ago
Great stuff...! I cannot keep up with all the tutorials from you... :)
@meetvardoriya2550 3 years ago
Amazing as always 🔥
@NicholasRenotte 3 years ago
Thanks a bunch @Meet Vardoriya!!
@svh02 3 years ago
Great! Please remember your finance series ;)
@NicholasRenotte 3 years ago
Definitely, still on the list, wanted to do something a little diff this week :)
@moses5407 3 years ago
The BEST and right on the cutting edge!
@varunvijaywargi5497 3 years ago
Absolutely terrific as usual 👍
@NicholasRenotte 3 years ago
Thanks a bunch @Varun!
@sasquelle 2 years ago
Hey Nicholas, thanks for the in-depth video. I really learned a lot. My question is: how do I calculate the distance between two keypoints, e.g. the distance from the right shoulder to the left shoulder? Thanks a lot.
@obi666 2 years ago
It depends on whether you want the distance in pixels or the real-life distance.
@sasquelle 2 years ago
@@obi666 Real-life distance, so: what is the distance from the shoulder to the elbow as represented in real life?
@obi666 2 years ago
@@sasquelle I have seen a few videos on how to measure real-life face/hand distance from the camera, but I had never tried something like you're describing. You'd need to calibrate that distance with a measuring tape or something (take a few measurements, for example at 1 meter from the camera, 2 meters, 3, etc.), or you can research how to build a deep learning model for it. Check Murtaza's Workshop - Robotics and AI's Hand Distance Measurement with Normal Webcam video; maybe you can figure out how to modify his code to get the result you're looking for.
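For the pixel-distance half of this thread, here's a minimal sketch. It assumes the [1, 1, 17, 3] `keypoints_with_scores` array of normalized (y, x, score) values that the tutorial's TFLite interpreter returns, and MoveNet's COCO keypoint order (5 = left shoulder, 6 = right shoulder); converting to real-life units still needs a reference length or calibration, as @obi666 says.

```python
import numpy as np

LEFT_SHOULDER, RIGHT_SHOULDER = 5, 6  # MoveNet/COCO keypoint indices

def keypoint_distance_px(keypoints_with_scores, idx_a, idx_b, frame_w, frame_h):
    """Euclidean distance in pixels between two detected keypoints.

    keypoints_with_scores: array of shape [1, 1, 17, 3] holding
    (y, x, score) in normalized [0, 1] coordinates.
    """
    kps = np.squeeze(keypoints_with_scores)            # -> [17, 3]
    ya, xa = kps[idx_a][0] * frame_h, kps[idx_a][1] * frame_w
    yb, xb = kps[idx_b][0] * frame_h, kps[idx_b][1] * frame_w
    return float(np.hypot(xb - xa, yb - ya))

# Example with a dummy detection on a 640x480 frame
dummy = np.zeros((1, 1, 17, 3), dtype=np.float32)
dummy[0, 0, LEFT_SHOULDER] = [0.5, 0.25, 0.9]          # (y, x, score)
dummy[0, 0, RIGHT_SHOULDER] = [0.5, 0.75, 0.9]
print(keypoint_distance_px(dummy, LEFT_SHOULDER, RIGHT_SHOULDER, 640, 480))  # -> 320.0
```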
@rajv4509 3 years ago
Outstanding stuff!
@Panny0__0 3 years ago
Your videos are a very good influence on my studies, thank you!!!!
@sagara5982 3 years ago
Wooah, exactly what I wanted, thanks man
@NicholasRenotte 3 years ago
YESSS! So glad you're checking it out!
@yasamanrajaeefard2940 3 years ago
So connected with your tutorials and the way you write the code first and then explain it line by line. I have a question: did you work with the OpenPose library? If you did, what's the difference between the two?
@NicholasRenotte 3 years ago
Haven't done any work with it yet but it apparently is quite fast and does multi person tracking whereas this can only handle a single person!
@outofthebots3122 2 years ago
Thanks a lot that is super useful for running on the edge
@pradhansomu4100 3 years ago
Can you please suggest how to create a custom dataset for pose estimation, and which architecture is best for training a pose estimation model?
@khaldonevans4295 3 years ago
Since you have experience using both MoveNet and MediaPipe, could you advise me on what the best solution is in your opinion? Also, are either of these solutions scalable?
@NicholasRenotte 3 years ago
Both seem pretty good! Scalability would likely be dependent on underlying infrastructure and how it's deployed tbh!
@wouterfavoreel 2 years ago
Thx for the nice tutorial! I believe I found a small error in the drawing function. To transform from MoveNet's normalized coordinates back to image coordinates, you should take into account the padding of the image, which the current example doesn't do. As a result, the y-scale is not entirely correct.
@muhammadwaseem_ 2 years ago
Can you please explain how to do it?
@JFkingW 2 years ago
Exactly, this only works in his case because his video has an aspect ratio of 1. Otherwise the keypoints are inaccurate.
@schlingelgen 2 years ago
I got it to work, but the predictions don't get drawn accurately. I guess it has something to do with the aspect ratio (1280/720) and the padding when feeding it to the model, but I don't see you addressing it. The predictions for my eyes, for example, get drawn accurately when my head is in the vertical middle of the frame. But they rise above my eyes when I lower my head, and sink below my eyes when I raise my head. I can help myself by manipulating the ky value, but I'm struggling to find the source of this problem, which I'm really interested in. Thanks for the tutorial btw!
@malikeaboss 2 years ago
Hello mister, did you find out how to fix it?
@schlingelgen 2 years ago
@@malikeaboss I dropped the project. I tried a few things with skewing the points in different ways, but that wasn't too successful.
@atillaozdemir8297 5 months ago
When you stand up, the points that should be on your head point to different places. How can we fix this?
@hamednasr3078 2 years ago
Could you please zoom in?! The font size is very small and difficult to read!!
@blankensmithing a year ago
Hey, great tutorial! It looks like even though you install tensorflow-gpu, it's still running on the CPU since it's using TFLite. Things run fairly fast w/ Lightning, but I'm wondering if you know of any way to actually get this model running on the GPU to speed things up?
@francogiulianopertile279 3 years ago
Love your videos
@NicholasRenotte 3 years ago
Thanks sooo much! Means a ton!!
@aseemmangla6649 3 years ago
Hi, first of all, this was a really wonderful workshop, thanks!! Could you also send some references regarding the MoveNet model in TensorFlow.js?
@lakpatamang2866 3 years ago
Could you do another tutorial on pose detection with an LSTM and MoveNet as the feature extractor?
@NicholasRenotte 3 years ago
Ooooh, we should get back to that! Yup!
@sasquelle 2 years ago
@@NicholasRenotte That would be amazing! Looking forward to this tutorial
@hyunyoungkang4375 3 years ago
Hello, I have one question: is the difference between this video and your previous video "AI pose estimation with python and mediapipe" that this one uses the deep learning model MoveNet? Could you tell me about the difference, please?
@NicholasRenotte 3 years ago
The task, pose estimation, is the same between both models. This model, MoveNet Lightning, can make predictions a lot faster, which makes it great for applications that need to track movement quickly.
@leon3959 a year ago
Hi, this tutorial is very informative!! I just have a question: how would you make this detect certain poses? I just need it to detect between 4 different poses.
@hellothere_howdy a year ago
Yeah, I'm also working on a similar project. Did you make any progress? If yes, could you share it?
@leon3959 a year ago
@hellothere_howdy Hi, we ended up doing it with trackers instead.
@hellothere_howdy a year ago
@@leon3959 Would you be able to send your code? I want to refer to it.
@montofarouk452 2 years ago
I get the error [ Cannot set tensor: Got value of type FLOAT32 but expected type UINT8 for input 0, name: serving_default_input:0 ] on the line [ interpreter.set_tensor(input_details[0]['index'], np.array(input_image)) ]. Can you help me fix it?
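For what it's worth, this "expected type UINT8" error usually means a quantized (uint8) copy of the model was downloaded but is being fed float input. A hedged sketch of a dtype-agnostic fix: read the expected dtype from `input_details` and cast before `set_tensor` (the interpreter setup and `input_image` are assumed from the tutorial; `prepare_input` is a helper name I made up):

```python
import numpy as np

def prepare_input(input_image, input_details):
    """Cast the preprocessed frame to whatever dtype the model expects.

    input_image: the resized/padded frame tensor from the tutorial.
    input_details: the list returned by interpreter.get_input_details().
    """
    expected = input_details[0]['dtype']        # np.uint8 or np.float32
    arr = np.asarray(input_image)
    if expected == np.uint8:
        # Quantized model: pixel values 0-255 as unsigned 8-bit ints
        return arr.astype(np.uint8)
    return arr.astype(np.float32)

# Then feed the model with:
# interpreter.set_tensor(input_details[0]['index'],
#                        prepare_input(input_image, input_details))
```

Alternatively, re-downloading the float32 variant of the model from TF Hub avoids the cast entirely.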
@MustafaAkben 11 months ago
Good work!
@muditrustagi5775 3 years ago
Wow this one's smooth ❤️❤️
@NicholasRenotte 3 years ago
Ikr?!
@anselme9636 2 years ago
Hi, thank you for your tutorial. But with your exact code from your GitHub, a powerful computer with a GTX 1650, and an HD webcam at 30 fps, I get really bad estimation; it's not at all as reliable as yours. Tracking of points gets lost and many points flicker. I tried different threshold values, but nothing fixes it. I can't understand why! Could it be the background, which is not a green screen? Thanks
@sultanfahim985 3 years ago
This video was very helpful to me, but I do have a question: how can I determine angles (like in your previous video on pose estimation using MediaPipe)? Eagerly waiting for your reply, thanks!
@NicholasRenotte 3 years ago
Same process, just apply it to these keypoint structures!
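The three-point angle formula from the MediaPipe tutorial carries over directly; a minimal sketch, assuming a [17, 3] array of (y, x, score) keypoints and MoveNet's keypoint order (5 = left shoulder, 7 = left elbow, 9 = left wrist):

```python
import numpy as np

def joint_angle(kps, a, b, c):
    """Angle in degrees at keypoint b, formed by the segments b-a and b-c.

    kps: [17, 3] array of (y, x, score) keypoints.
    """
    ay, ax = kps[a][:2]
    by, bx = kps[b][:2]
    cy, cx = kps[c][:2]
    # Angle of each ray from the vertex b, then the difference between them
    radians = np.arctan2(cy - by, cx - bx) - np.arctan2(ay - by, ax - bx)
    angle = np.abs(np.degrees(radians))
    return 360.0 - angle if angle > 180.0 else angle

# A straight arm (shoulder, elbow, wrist in a vertical line) -> 180 degrees
kps = np.zeros((17, 3))
kps[5] = [0.2, 0.3, 0.9]   # left shoulder (y, x, score)
kps[7] = [0.4, 0.3, 0.9]   # left elbow
kps[9] = [0.6, 0.3, 0.9]   # left wrist
print(joint_angle(kps, 5, 7, 9))  # -> 180.0
```

Since normalized coordinates keep proportions only along each axis, for accurate angles on non-square frames it's safer to scale (y, x) by frame height/width first.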
@Hikarifps 6 months ago
Hello, I need some help. At a certain point in the video, while trying to get the keypoints from the camera, I get this error in my code; help from anyone who may know the answer would be appreciated. ValueError: Cannot set tensor: Got value of type FLOAT32 but expected type UINT8 for input 0, name: serving_default_input:0
@nicholasdinis339 2 years ago
Great tutorial man, big thanks! *linux mint* The only adjustment I had to make was using uint8 instead of float32; with that it runs just fine.
@nicholasdinis339 2 years ago
Update: the Thunder model works with float32 and is the heavier, more accurate version; Lightning uses uint8 and is the faster, lighter model.
@thunderstack5365 3 years ago
This is amazing
@NicholasRenotte 3 years ago
Yup, think we're sticking to YT!
@zaidahmed4069 11 months ago
Can you extract the x, y, z coordinates of a certain keypoint using this particular algorithm? Is it possible?
@Deshwal.mahesh 3 years ago
Can you train the model on your personal dataset?
@NicholasRenotte 3 years ago
For detection, sure can!
@ryanslive 3 years ago
Hey there! How do we detect keypoints and connections in 3D? Do we have to take the z coordinate too, and is it possible with one camera? And will it improve our detection accuracy if we do so? I want to detect whether the back is arched or straight. Is that possible?
@NicholasRenotte 3 years ago
I believe you get a z coordinate (I'm not too sure of its exact representation though). I haven't tried integrating it but it might improve accuracy!
@gaddesaishailesh2772 3 years ago
Hey! If we want to train a MoveNet model for real-time detections, can we use the same process as with the MediaPipe library and add an LSTM?
@NicholasRenotte 3 years ago
Sure can!
@hamzanaeem4838 3 years ago
What if we change MoveNet Lightning to MoveNet Thunder? When I change it, it gives a dimensions error.
@NicholasRenotte 3 years ago
Ooooh, I think it expects a different input image size. Might need to resize your input in order to feed it!
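That matches my understanding: Lightning expects a 192x192 input while Thunder expects 256x256, so the resize line needs to change. A hedged sketch that reads the size from the loaded model instead of hardcoding it (the interpreter setup is assumed from the tutorial; `preprocess_for_model` is a helper name I made up):

```python
import numpy as np
import tensorflow as tf

def preprocess_for_model(frame, input_details):
    """Resize-with-pad a frame to whatever HxW the loaded model expects.

    input_details: interpreter.get_input_details(); its shape entry is
    [1, H, W, 3], so Lightning yields 192x192 and Thunder yields 256x256.
    """
    _, height, width, _ = input_details[0]['shape']
    img = tf.image.resize_with_pad(np.expand_dims(frame, axis=0),
                                   int(height), int(width))
    return tf.cast(img, tf.float32)

# Fake a Thunder-style input spec to show the shapes line up
details = [{'shape': np.array([1, 256, 256, 3])}]
frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(preprocess_for_model(frame, details).shape)  # -> (1, 256, 256, 3)
```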
@raziehshahsavar9649 a year ago
Thank you for the tutorial. I have a problem, can you help me? I want MoveNet to plot the skeleton based on my own joint coordinates, and I want to modify the output tensor and the locations of the keypoints. Could you please help me?
@93hothead 2 years ago
How do you train a custom model and predict with your own ground truth?
@clayton6095 2 years ago
Did you use RuneScape music at the beginning?
@ds_rocks5108 8 months ago
Instead of a live cam, can we use a video file as input?
@afriquemodel2375 2 years ago
I get this issue: NameError: name 'draw_connections' is not defined
@microgamawave 2 years ago
Is it better than MediaPipe Pose??
@RaviKumar-fw8bv 7 months ago
Can you please do a video about action detection using a webcam?
@jackprot351 2 years ago
Is it possible to apply the angle calculation from your MediaPipe video to the MoveNet framework?
@MUHAMMADOMER-ti6wm 5 months ago
Possible bro
@bolzanoitaly8360 2 years ago
Hi bro, thanks for your wonderful video. I copied your code, but when I add the set_tensor command:
interpreter.set_tensor(input_details[0]['index'], np.array(input_image)) # this line creates trouble
print("Shape Image:", img.shape) # (1440, 2560, 3) at this stage
img = tf.image.resize_with_pad(np.expand_dims(img, axis=0), 192, 192)
the following error appears. Can you help me resolve it? Thanks.
ValueError: Cannot set tensor: Got value of type FLOAT32 but expected type UINT8 for input 0, name: serving_default_input:0
Please help me if you can, thanks
@bolzanoitaly8360 2 years ago
Yes, the same error in my case as well. If anyone can help in this regard, it would be highly appreciated. A quick response would be great, thanks.
@sergiofernandeztesta6433 2 years ago
Same
@lucasschimidt8338 3 years ago
I'm using Google Colab and the part "2 - Make detections" is not working for me; my webcam does not open.
@NicholasRenotte 3 years ago
First, try restarting your notebook; this should free up your webcam if it's inaccessible. Second, if it's still not working, double check you have the VideoCapture device number set correctly.
@amaltlili5639 3 years ago
Thanks again for your amazing tutorial. I was just wondering how to calculate velocity or angular velocity from MediaPipe. I followed your tutorial and calculated the angles, but I can't figure out how to do that.
@NicholasRenotte 3 years ago
Would need to calculate the change in arc length first and divide by the time frame you're looking for
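A rough sketch of that "change over time" idea: keep the angle from the previous frame and divide the difference by the elapsed time (a finite-difference approximation; `AngularVelocityTracker` is a helper I made up, and the angle itself would come from whatever angle function you used in the angles tutorial):

```python
import time

class AngularVelocityTracker:
    """Finite-difference angular velocity (degrees/second) between frames."""

    def __init__(self):
        self.prev_angle = None
        self.prev_time = None

    def update(self, angle_deg, timestamp=None):
        """Feed the current joint angle; returns degrees/second since last call."""
        t = time.monotonic() if timestamp is None else timestamp
        omega = 0.0
        if self.prev_angle is not None and t > self.prev_time:
            omega = (angle_deg - self.prev_angle) / (t - self.prev_time)
        self.prev_angle, self.prev_time = angle_deg, t
        return omega

# 30 degrees of elbow travel over 0.5 s -> 60 deg/s
trk = AngularVelocityTracker()
trk.update(90.0, timestamp=0.0)
print(trk.update(120.0, timestamp=0.5))  # -> 60.0
```

In a webcam loop you would simply call `trk.update(angle)` once per frame and let the monotonic clock supply the timestamps. Raw per-frame values are noisy, so smoothing (e.g. a moving average) is usually worth adding.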
@amaltlili5639 3 years ago
@@NicholasRenotte Thank you very much
@OpenYoureyes304 2 years ago
First of all, you have good tutorials and you really explain them well, but I need your help. At 23:37, when I tried to execute it, I got this error: ValueError: Cannot set tensor: Dimension mismatch. Got 192 but expected 256 for dimension 1 of input 0. I'm not sure if it's because of the TensorFlow model that I downloaded, but I just followed your video.
@93hothead 2 years ago
Change the conversion to 256
@AmanKumar-cz6ht 3 years ago
Can this be used for multiple person pose detection?
@NicholasRenotte 3 years ago
Nope, but this can: kzbin.info/www/bejne/gXSan32qd611p8k
@viji1660 3 years ago
Hey, I know it may be an annoying question, but there is a youtuber who speaks into his mic and an AI speaks for him. I think it's called speech-to-text and text-to-speech, and I only found 1 video on YouTube, but it's not live. What I want is to play games with my friends and speak with them, but through an AI, and I just don't know how it's done…
@NicholasRenotte 3 years ago
Oh, like a voice cloner?
@SnazalSingh a month ago
Will it work for multi-person detection?
@hamednasr3078 2 years ago
Dear Nicholas, thank you for your fantastic tutorials! If you look at 47:00 of this video, when the face is in the middle of the camera frame, the positions of the eyes and nose are correct. But when you move backward, at a distance, at 47:25, the eye and nose points are not correct; they are around your neck. When I bring my head below center, the detected points come over my head, and this also happens for the legs because they are below the center of the frame. Do you know why that happens? The code from TensorFlow does not have this issue, but it is very slow on my laptop; your code is very good, but this issue happens. Do you know why?
@muhammadwaseem_ 2 years ago
Same doubt
@malikeaboss 2 years ago
Hello mister, did you find out how to fix it?
@hamednasr3078 2 years ago
@@malikeaboss Unfortunately not, but I did not use the face points, only the shoulders and buttocks.
@sehulviras 3 years ago
Do you happen to have JavaScript links for this ... just like MediaPipe?
@NicholasRenotte 3 years ago
Same link, just hit the JS tab here: tfhub.dev/google/tfjs-model/movenet/singlepose/lightning/4
@liveitlikeyouwantit.9774 2 years ago
Hi, can you help me figure out how to draw points on shapes other than humans? Thank you, you've been very helpful.
@aaronwee5956 3 years ago
What about the speed of movements, is it possible?
@NicholasRenotte 3 years ago
Possible yes, I haven't done it yet though!
@aaronwee5956 3 years ago
@@NicholasRenotte I would really be interested to see a tutorial from you on it. Your explanations are amazing and it would be a tremendous help for me!
@dwang3142 3 years ago
Thanks again for your amazing tutorial!! Could you do one on adding face filters for the web, like the ones on Snapchat? Thanks!
@NicholasRenotte 3 years ago
In the pipeline!
@syun8475 3 years ago
Hello, can I use tensorflow-gpu 2.0? Thank you!
@NicholasRenotte 3 years ago
Haven't tested it with 2.0.0.
@syun8475 3 years ago
@@NicholasRenotte Thank you for your reply. I have used it and it is OK.
@ayarzuki 3 years ago
What if I run it on TensorFlow 2.5.0? Will there be any problem?
@NicholasRenotte 3 years ago
I think you should be fine!
@RaselAhmed-ix5ee 2 years ago
How can we estimate the action?
@piotrjastrzebski9779 3 years ago
How may I run this with a GPU?
@NicholasRenotte 3 years ago
Check this out: www.tensorflow.org/lite/performance/gpu_advanced
@유영재-c9c 3 years ago
Bro I love you~~~
@NicholasRenotte 3 years ago
Love you too bro!
@Gordan-s8q 3 years ago
Do you have this model's paper?
@NicholasRenotte 3 years ago
Doesn't look like they wrote a paper for it, best is the model card: storage.googleapis.com/movenet/MoveNet.SinglePose%20Model%20Card.pdf
@Gordan-s8q 3 years ago
Thanks, we just found the website
@mdasrafuzzamansakib9812 3 years ago
Can it detect multiple people? Please answer, sir.
@NicholasRenotte 3 years ago
Nope
@sarahjamaal2741 3 years ago
@@NicholasRenotte I'd like to thank you for this tutorial, it is really easy to understand. I have a question: how can I perform multi-person pose estimation?
@NicholasRenotte 3 years ago
@@sarahjamaal2741 here you go: kzbin.info/www/bejne/gXSan32qd611p8k
@네네-o5w 2 years ago
31:20
@dzhang1215 a year ago
Hello Nicholas, thank you so much for the great work! It saved me a lot of time setting up the program. However, maybe you are already aware of this issue: the keypoints do not align well with the image if the human is not perfectly in the middle of the frame. I think that since most frames (the resolution from the camera) do not have exactly the 1:1 aspect ratio that MoveNet's input image has, you cannot simply multiply the relative keypoints (keypoints_with_scores = interpreter.get_tensor(...)) by the pixel dimensions of the image frame. Because of tf.image.resize_with_pad, the pixel arrangement between the camera frame and the input image (192x192) does not match exactly. To get the right assignment, you have to apply an affine transform, computing the inverse of the affine transformation matrix that maps the frame image into the input image. Please refer to this site: stackoverflow.com/questions/73677854/movenet-pose-estimation-renders-inaccurate-keypoints.
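To make that concrete, here's a hedged sketch of undoing `tf.image.resize_with_pad` for a square model input: compute the letterbox scale and per-axis padding, then map the normalized (y, x) back to frame pixels. This mirrors the approach described in the linked Stack Overflow thread, not code from the video, and `keypoints_to_frame_coords` is a helper name I made up:

```python
import numpy as np

def keypoints_to_frame_coords(keypoints_with_scores, frame_h, frame_w, input_size=192):
    """Map MoveNet's normalized (y, x) keypoints back to frame pixels,
    undoing the letterbox padding added by tf.image.resize_with_pad.

    keypoints_with_scores: [1, 1, 17, 3] array of (y, x, score).
    Returns a [17, 3] array of (y_px, x_px, score).
    """
    kps = np.squeeze(keypoints_with_scores).copy()     # -> [17, 3]
    scale = input_size / max(frame_h, frame_w)         # letterbox scale factor
    pad_y = (input_size - frame_h * scale) / 2.0       # top padding in model px
    pad_x = (input_size - frame_w * scale) / 2.0       # left padding in model px
    kps[:, 0] = (kps[:, 0] * input_size - pad_y) / scale
    kps[:, 1] = (kps[:, 1] * input_size - pad_x) / scale
    return kps

# 640x480 frame: the width fills the 192px square, the height gets padded,
# so the model-space center (0.5, 0.5) maps back to the frame center.
dummy = np.zeros((1, 1, 17, 3), dtype=np.float32)
dummy[0, 0, 0] = [0.5, 0.5, 0.9]
out = keypoints_to_frame_coords(dummy, 480, 640)
print(out[0][:2])
```

With this, the per-frame multiplication by [frame_h, frame_w, 1] in the drawing functions would be replaced by a call to this mapping, which should remove the vertical offset several commenters describe.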
@kanall103 2 years ago
A tutorial on how to convert these keypoints to 3D animation files such as BVH, please