Thanks mate, helped a lot. I was stressed before deploying the product, you saved me 🤩
@robotmania8896 · 6 days ago
Hi Kenzhebek Taniev! Thanks for watching my video! It is my pleasure if this video has helped you!
@장성숙-l9z · 11 months ago
Hello! I'm jetsonmom, a 65-year-old living in Korea, and a copycat (I don't know much about Python, but I like to try things out and get help from acquaintances and ChatGPT-4 for what I don't know). I tried following the teacher's video using an Orin Nano provided by NVIDIA (I am also a Jetson Nano Ambassador). Thank you so, so much for sharing. My image is 6.0 DP, so my torch and torchvision versions differ from those in the video: I installed torch 2.2.0 and torchvision 0.17.0. I don't know if it's because of that difference, but when I ran the Python sample program the results came out well. However, detection is fleeting. That seemed strange, so I looked into it and found that CUDA was not being used. When installing cuDNN it seemed like I needed version 12, but it didn't work. Do I need to change to the same versions as in the teacher's video? kzbin.info/www/bejne/foepd4GBg52jeJo
@robotmania8896 · 11 months ago
Hello 장성숙! When you are using a Jetson Orin, you don't have to install CUDA or cuDNN; they are installed by default. If CUDA is available, PyTorch will use it automatically. So it is strange that CUDA was not being used. Have you installed a PyTorch version built for Jetson (aarch64 architecture)?
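To confirm whether PyTorch actually sees the GPU, a minimal check like the following can help (a sketch only; it prints a message instead of crashing when torch is missing):

```python
def cuda_report():
    """Report whether PyTorch can see a CUDA device (e.g. the Orin's GPU)."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed"
    if torch.cuda.is_available():
        # A Jetson-compatible (aarch64, CUDA-enabled) build will land here.
        return f"CUDA available: {torch.cuda.get_device_name(0)}"
    return "CUDA NOT available - likely a CPU-only PyTorch build"

print(cuda_report())
```

If this prints "CUDA NOT available", the installed wheel is almost certainly a CPU-only build, which matches the slow inference times reported below.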
@장성숙-l9z · 11 months ago
@@robotmania8896 Yes, I installed it. It's just that the execution speed is too slow and I wanted to eliminate the intermittent interruptions, so when running it I checked with a command whether CUDA was being used; it came out as not used, so I asked. Is there any way to make video processing faster? The result is:
0: 480x640 1 laptop, 1434.2ms
Speed: 5.9ms preprocess, 1434.2ms inference, 2.1ms postprocess per image at shape (1, 3, 480, 640)
0: 480x640 1 laptop, 1155.6ms
Speed: 4.0ms preprocess, 1155.6ms inference, 2.1ms postprocess per image at shape (1, 3, 480, 640)
0: 480x640 1 laptop, 1195.6ms
Speed: 1.8ms preprocess, 1195.6ms inference, 3.8ms postprocess per image at shape (1, 3, 480, 640)
0: 480x640 1 laptop, 1128.3ms
Speed: 3.8ms preprocess, 1128.3ms inference, 2.2ms postprocess per image at shape (1, 3, 480, 640)
0: 480x640 1 laptop, 1102.3ms
Speed: 2.2ms preprocess, 1102.3ms inference, 2.8ms postprocess per image at shape (1, 3, 480, 640)
0: 480x640 1 laptop, 1160.8ms
Speed: 3.4ms preprocess, 1160.8ms inference, 3.0ms postprocess per image at shape (1, 3, 480, 640)
0: 480x640 1 laptop, 1125.3ms
Speed: 4.8ms preprocess, 1125.3ms inference, 2.6ms postprocess per image at shape (1, 3, 480, 640)
@robotmania8896 · 11 months ago
Sorry for the late response. Which GPU are you using? Yes, it is possible to make inference faster with a smaller image or model size.
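As a sketch of those two knobs (smaller model, smaller input size), assuming the ultralytics package is installed; 'yolov8n.pt' is the standard nano weights file:

```python
def fast_detect(frame):
    """Run YOLOv8 with the two usual speed-ups for Jetson-class hardware:
    a smaller model (yolov8n instead of yolov8m) and a smaller inference
    resolution (imgsz). Assumes the ultralytics package is installed."""
    from ultralytics import YOLO
    model = YOLO("yolov8n.pt")      # nano weights: fastest stock model
    return model(frame, imgsz=320)  # shrink the input side for inference

# e.g. results = fast_detect(color_image); results[0].speed holds timings in ms
```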
@赵子铭-v1i · 1 year ago
This tuto saved my life!! GREAT video
@robotmania8896 · 1 year ago
Hi 赵子铭! Thanks for watching my video! It is my pleasure if this video has helped you!
@Music-t5i · 11 months ago
Great man, you are great. For three days we had been trying with the Jetson Orin Nano and our GPU would not work with YOLOv8, but your script is great and the guidance is very nice. Now our GPU works on the latest JetPack 6 OS on the Jetson Orin Nano. Appreciate your work 👌🏻
@robotmania8896 · 11 months ago
Hi Music! Thanks for watching my video! It is my pleasure if this video has helped you!
@Music-t5i · 11 months ago
@@robotmania8896 Keep it up! Bring out more videos like this, and ROS with YOLOv8 on the Jetson Orin Nano.
@thomasdunn1906 · 11 months ago
How did you get PyTorch to install? Everything has changed since this tutorial, and I cannot get PyTorch to install using the steps shown here.
@seanwhen · 9 months ago
This is great, brother! This installation video is the answer to my urgent need.
@robotmania8896 · 9 months ago
Hi SeanWhen! Thanks for watching my video! It is my pleasure if this video has helped you!
@aaronpena4102 · 3 months ago
Hi there, I'm on the step after building librealsense with the shell script. The build finished, but when I go into Files, into the /usr/local directory, to search for the OFF folder containing pybackend and the others, no folder named OFF appears at all. Am I able to keep LIB as the folder name in .bashrc? I'm not sure what to do.
@aaronpena4102 · 3 months ago
Also, is pyrealsense2 supposed to be built automatically when the librealsense shell script runs, or do we have to download it separately? I have no pybackend or pyrealsense2 files at all, whether in OFF folders or somewhere else, so I am not able to import pyrealsense2 after adjusting the code in the .bashrc file. I'm getting an error saying there is no module named pyrealsense2 when trying to import pyrealsense2 as rs. If you can give some advice, I'd appreciate it.
@robotmania8896 · 3 months ago
Hi Aaron Pena! Can you please try this command: pip3 install pyrealsense2. It will probably work.
@gianlucademusis8455 · 1 year ago
Hello... How can I recognize only one kind of object, like potholes, using this project? Thank you
@robotmania8896 · 1 year ago
Hi Gianluca De Musis! Thanks for watching my video! If you would like to recognize potholes, you have to use your own trained model (a .pt file). Change ‘yolov8m.pt’ (line 22 in “yolov8_rs.py”) to your model’s name.
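As a concrete sketch of that swap (the weights file name below is hypothetical; use whatever your own training run produced):

```python
import os

def load_pothole_model():
    """Load a custom-trained YOLOv8 model instead of the stock 'yolov8m.pt'.
    'pothole_best.pt' is a hypothetical file name standing in for your own
    trained weights. Assumes the ultralytics package is installed."""
    from ultralytics import YOLO
    weights = os.path.join(os.environ.get("HOME", "."), "pothole_best.pt")
    return YOLO(weights)  # detections then use only your model's classes
```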
@AlainPilon · 1 year ago
At line 39, shouldn't you call results = model(color_image) and provide the parameter `device=0` to use the GPU?
@robotmania8896 · 1 year ago
Hi bijan esphand! Thanks for watching my video! On the ultralytics GitHub page there is no reference to how to use the model with the CPU or GPU. I guess that if the GPU is available, YOLO automatically chooses it for inference.
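For reference, recent ultralytics releases do accept an explicit device argument at predict time, so the choice need not be left to auto-selection. A minimal sketch (assuming the ultralytics package is installed):

```python
def detect_on(frame, device=0):
    """Run YOLOv8 inference on an explicit device: 0 = first GPU, 'cpu' = CPU.
    Recent ultralytics versions accept a `device` argument per predict call.
    Assumes the ultralytics package is installed."""
    from ultralytics import YOLO
    model = YOLO("yolov8m.pt")
    return model(frame, device=device)

# e.g. results = detect_on(color_image, device=0)      # force GPU
#      results = detect_on(color_image, device="cpu")  # force CPU
```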
@TheR0met · 4 months ago
Hello! I'm facing an error saying: RuntimeError: Frame didn't arrive within 5000. Any solutions to this?
@robotmania8896 · 4 months ago
Hi Romet Arak! Thanks for watching my video! It is difficult to say from the information you have provided alone. Are you using the USB cable that came with the RealSense? A low-quality USB cable may cause problems.
@joacosolbes9283 · 1 month ago
Hello mate, do you have one of these but with the new Jetson Nano? The possibilities are incredible: IP cameras + Jetson Nano for security, just for starters. I would buy that course immediately.
@robotmania8896 · 1 month ago
Hi Joaco Solbes! Thanks for watching my video! In this tutorial I use the Jetson Orin Nano, which is the newest model as far as I know. Technically, the Jetson Orin Nano Super is the newest model, but since it was announced just a few days ago, I don’t have it.
@SyrupWizard · 13 days ago
@@robotmania8896 Thanks for the vid. Do you know if this method will work on jetpack 6.1?
@nurulnajwakhamis2680 · 8 months ago
Thank you for the tutorial, but I have a problem: I cannot find the OFF folder.
@robotmania8896 · 8 months ago
Hi nurulnajwa khamis! Thanks for watching my video! Is any other folder generated, or is the folder not generated at all?
@nurulnajwakhamis2680 · 8 months ago
@@robotmania8896 Hi, thank you for your reply. There is no folder generated at all, not even a different one. I even got an error saying the pyrealsense2 module was not found. But I tried changing to another version of librealsense, v2.54.2, and it's working!
@robotmania8896 · 8 months ago
@@nurulnajwakhamis2680 I am glad that you made it work!
@enesschebbaki1226 · 1 year ago
Hi, I would like to ask whether it would be possible to do the same with pose detection. Let me explain: I would like to take the keypoints from the color view and put them in real time into the depth view. Basically, the exact same thing you did, but using not only the bounding box but also the keypoints. I don't understand how to achieve this; I would be grateful if you could answer me. Thanks!
@robotmania8896 · 1 year ago
Hi Eness Chebbaki! Thanks for watching my video! Yes, it is possible. In the case of pose detection, as described in the page below, you have to extract the keypoints from the results just as I did for the bounding boxes in this tutorial. Then you will be able to plot the coordinates of the keypoints on the depth image. docs.ultralytics.com/modes/predict/#masks
@enesschebbaki1226 · 1 year ago
@@robotmania8896 Thank you for replying! I have read the documentation and am trying to do the same thing as you did with the boxes but I am not getting any results. I don't know if it's because of the format in which the tensor containing the coordinates of the keypoints is output. Could you help me out?
@robotmania8896 · 1 year ago
@@enesschebbaki1226 Here is the sample code to extract the coordinates of the keypoints.

from ultralytics import YOLO
import os

model_directory = os.environ['HOME'] + '/pose/yolov8m-pose.pt'
model = YOLO(model_directory)

source = "sample.jpeg"
results = model(source)

for r in results:
    keypoints = r.keypoints
    for keypoint in keypoints:
        b = keypoint.xy[0].to('cpu').detach().numpy().copy()
        print(f"b : {b}")
@enesschebbaki1226 · 1 year ago
@@robotmania8896 Fortunately, I was able to extract the keypoints. I used your same approach but thank you infinitely for your willingness and time! Your videos are always inspiring 💪🏼
@WangJin-ox4yf · 7 months ago
Hi, wonderful video! I am wondering why I keep encountering the error "network is unreachable" when I run "pip3 install ultralytics". I really appreciate your help!
@robotmania8896 · 7 months ago
Hi Wang Jin! Thanks for watching my video! It seems to be a network problem. Do you have an internet connection?
@thomasdunn1906 · 11 months ago
Are there any updates? I cannot get PyTorch to install; the steps have changed since this video. I have had my Orin Nano for two weeks and still cannot get it to run inference on the GPU. I am starting to lose hope in the Orin Nano.
@robotmania8896 · 11 months ago
Hi Thomas Dunn! Thanks for watching my video! Where exactly are you experiencing a problem?
@thomasdunn1906 · 11 months ago
@@robotmania8896 Thank you so much for the response! The commands on the Installing PyTorch for Jetson page have changed. I had to re-flash my SD card with the version you were using (5.1.2) and then pause your video as you were highlighting the commands and enter them manually. I was finally able to install PyTorch that way! I can now run YOLOv5 on my computer with GPU inference. I am having a problem getting YOLOv8 to work, however: I cannot get bounding boxes to show when I use v8 on my webcam.
@thomasdunn1906 · 11 months ago
@@robotmania8896 The Installing PyTorch for Jetson Platform page has changed; I believe it is to support JetPack 6. I had JetPack 6 installed and it would not work. I flashed JetPack 5.1.2 and it still did not work. I ended up watching your video, pausing as you highlighted the commands, and entering them manually, and it worked! Thank you so much for your help!!
@robotmania8896 · 10 months ago
@thomasdunn1906 I am glad that my video has helped you! Were you able to run yolov8?
@thomasdunn1906 · 10 months ago
@@robotmania8896 Yes! Thank you so much. I could not have done it without your video.
@TheRecep27 · 6 months ago
Hello, is it possible to run YOLOv7 with these settings?
@robotmania8896 · 6 months ago
Yes, I think you will be able to execute yolov7 with libraries that have been installed in this tutorial.
@TheRecep27 · 6 months ago
@@robotmania8896 thank you
@xp-4yt · 1 year ago
First of all I want to thank you for another great and very detailed tutorial with a nice explanation. I am pretty new to robotics, and your videos are crucial for avoiding a lot of the troubles newbies hit in this field. I also have a small question about the video: as I understand it, the way object detection should be used with the Jetson is a DeepStream engine implementation. The engine seems to be much faster than a .pt model. Am I wrong? By the way, can you give me a clue how to work with the navigation stack in ROS 2 to add conditions and maybe some stopping criteria? I want to use object detection to create an autopilot which takes road signs, traffic lights, etc. into account. P.S. I am sorry for my English. P.P.S. Thanks for the new video once more!
@robotmania8896 · 1 year ago
Hi xp-4yt! Thanks for watching my video! Yes, if you need to push your inference time to a limit, you should use DeepStream Engine. But I think describing several topics in one tutorial could be confusing, so I will make another tutorial for a DeepStream. As for navigation stack, some of the features you have mentioned could probably be achieved using Waypoint Task Executors. navigation.ros.org/plugins/index.html#waypoint-task-executors Also, in this tutorial I explained how to navigate to detected objects. It may also help you. kzbin.info/www/bejne/hZObnXqFfaeln8k
@xp-4yt · 1 year ago
@@robotmania8896 Thanks a lot! This information is extremely helpful! ☺️
@KobeNein · 1 year ago
@@robotmania8896 I can't wait to see DeepStream tutorial
@robotmania8896 · 1 year ago
@@KobeNein I am planning to make a tutorial about DeepStream in a few weeks.
@xp-4yt · 1 year ago
@@robotmania8896 I am also very impatient about it. I am dealing with DeepStream right now and have no idea how to use an engine with topic data. It seems that topic information should be converted into a video stream, but I have synchronization problems... Is it possible to use a DeepStream engine with ROS 2 in real time at all? Forums say no.
@kawthertrabelsi4996 · 5 months ago
Hello :) Is it possible to use a Jetson Nano 2 GB with two USB webcams and YOLOv8?
@robotmania8896 · 5 months ago
Hi Kawther Trabelsi! Thanks for watching my video! With a small image size and a small model, you will probably be able to run YOLOv8 on the Jetson Nano. But you will not be able to run inference on both cameras simultaneously; you will need to run inference sequentially.
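A rough sketch of that sequential pattern (camera indices 0 and 1 and the nano model are assumptions; requires OpenCV and the ultralytics package):

```python
def detect_two_cameras(num_frames=10):
    """Alternate YOLOv8 inference between two webcams instead of running
    both at once, which a 2 GB Jetson Nano cannot sustain. Camera indices
    0 and 1 and the 'yolov8n.pt' nano model are assumptions."""
    import cv2
    from ultralytics import YOLO
    model = YOLO("yolov8n.pt")
    cams = [cv2.VideoCapture(0), cv2.VideoCapture(1)]
    try:
        for i in range(num_frames):
            cam = cams[i % 2]           # take turns: cam0, cam1, cam0, ...
            ok, frame = cam.read()
            if not ok:
                continue
            results = model(frame, imgsz=320)
            print(f"camera {i % 2}: {len(results[0].boxes)} objects")
    finally:
        for cam in cams:
            cam.release()
```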
@ahmetcaneskikale8423 · 6 months ago
Hello, the video is very nice, but I want to do this using a USB camera instead of a RealSense camera. Is it enough to skip the RealSense part?
@robotmania8896 · 6 months ago
Hi Ahmet Can Eskikale! Thanks for watching my video! Yes, it is enough to skip the RealSense part. To obtain camera frames, you just have to use the following code.

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
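Expanded into a minimal end-to-end loop (a sketch assuming OpenCV and the ultralytics package are installed; press q to quit):

```python
def run_webcam_yolo():
    """Minimal USB-webcam loop: grab frames with OpenCV, run YOLOv8,
    and show the annotated image. Replaces the RealSense pipeline."""
    import cv2
    from ultralytics import YOLO
    model = YOLO("yolov8m.pt")
    cap = cv2.VideoCapture(0)          # first USB camera
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        results = model(frame)
        annotated = results[0].plot()  # draw boxes and labels on the frame
        cv2.imshow("yolov8", annotated)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```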
@deepaknr7616 · 6 months ago
Hey, great video... thanks! I had one question: where did you specify the input of the stream, like RTSP or a webcam?
@robotmania8896 · 6 months ago
Hi Deepak NR! Thanks for watching my video! The input of the stream (obtaining the frames) is at line 27 (frames = pipeline.wait_for_frames()) in the “yolov8_rs.py” script.
@sricharankaipa343 · 5 months ago
Which version of the Intel RealSense camera are you using? Can I use a RealSense D455 with the same procedure as shown in the video?
@robotmania8896 · 5 months ago
Hi Sri Charan Kaipa! Thanks for watching my video! In this tutorial I used RealSense D435 camera. I think you can use D455 with the same procedure.
@autumnsoybean5952 · 1 year ago
Awesome! Great tutorial!
@robotmania8896 · 1 year ago
Hi autumn soybean! Thanks for watching my video! It is my pleasure if this video has helped you!
@OrtacbAilesi · 7 months ago
Can you please explain how to do this on a Windows computer?
@robotmania8896 · 7 months ago
Hi MrOrtach! Thanks for watching my video! I am currently not planning to make a video about RealSense on Windows. But here you can find a detailed explanation of how to build librealsense on Windows. dev.intelrealsense.com/docs/compiling-librealsense-for-windows-guide
@충현이-p1r · 1 year ago
Thanks for the video, which was greatly useful!!! But I couldn't find a way to download the yolov8_rs file that you said to download from Google Drive. How and where can I get this file?
@robotmania8896 · 1 year ago
Hi 충현이! As mentioned near the end of the video, the Google Drive link is in the description. Please open the description and you will find the link.
@충현이-p1r · 1 year ago
@@robotmania8896 Thanks for your comment. Sorry to bother you, but can you tell me which version of OpenCV you are using?
@robotmania8896 · 1 year ago
@@충현이-p1r I don’t have the Jetson Orin by my side right now, so I cannot check. But I didn’t do anything special while installing OpenCV. If you install the version specified in the “requirements.txt” file, the program should work. Are you having any trouble with OpenCV?
@WangJin-ox4yf · 8 months ago
Thanks a lot for making this video!!! I am just wondering if this tutorial is also suitable for Jetson Nano?
@robotmania8896 · 8 months ago
Hi Wang Jin! Thanks for watching my video! If you would like to use Yolov8 with Jetson Nano, this video will help you. kzbin.info/www/bejne/oKCki3iLl7-Nr5o
@guillermovc · 1 year ago
Hi, Do you plan to make videos related to the intel neural stick? Thanks for your tutorials!
@robotmania8896 · 1 year ago
Hi Guillermo Velazquez! Thanks for watching my video! For now, I am not considering making a tutorial for the intel neural stick.
@truong-7490 · 1 year ago
Please!! What Ubuntu and Python versions are you using? I'm a newbie to YOLO and robotics! Respect for your help.
@robotmania8896 · 1 year ago
Hi Trường -! Thanks for watching my video! It is Ubuntu 20.04. The Python version is 3.8.
@csrasel6133 · 9 months ago
love this
@robotmania8896 · 9 months ago
Hi CS RASEL! Thanks for watching my video! It is my pleasure if this video has helped you!
@준호김-k6y · 1 year ago
Great video
@robotmania8896 · 1 year ago
Hi 준호 김! Thanks for watching my video! It is my pleasure if this video has helped you!
@prescriptionoatmeal7970 · 4 months ago
Quick and easy!
@robotmania8896 · 4 months ago
Hi Prescription Oatmeal! Thanks for watching my video! It is my pleasure if this video has helped you!
@soasuitegc · 11 months ago
Thanks for sharing!! What FPS did you manage to get? I can't get more than 5 FPS :(
@robotmania8896 · 11 months ago
Hi soasuitegc! Thanks for watching my video! The FPS largely depends on the YOLO model size and the Orin Nano model. Are you using exactly the same YOLO model and Orin Nano model as in my tutorial? It seems to me that the Orin Nano can do much better than 5 FPS.
@robotmania8896 · 11 months ago
@@장성숙-l9z Thank you!
@nhatpham5797 · 8 months ago
Hello, What python version are you using?
@robotmania8896 · 8 months ago
Hi Nhật Phạm! Thanks for watching my video. I use python 3.10 in this tutorial.
@nhatpham5797 · 8 months ago
@@robotmania8896 Can I use Python 3.11 to install ultralytics on the Jetson Nano? I get an error when I run "pip3 install ultralytics".
@robotmania8896 · 8 months ago
Yes, you should be able to install ultralytics on python 3.11. Please refer to this page. docs.ultralytics.com/quickstart/
@nhatpham5797 · 7 months ago
@@robotmania8896 Hey, how can I check my JetPack version?
@nhatpham5797 · 7 months ago
I use a Jetson Nano, not a Jetson Orin Nano. Is setting these up any different? I can set up the pyrealsense2 lib.