Which version of the Intel RealSense camera are you using? Can I go with the RealSense D455 following the same procedure as shown in the video?
@robotmania8896 2 months ago
Hi Sri Charan Kaipa! Thanks for watching my video! In this tutorial I used a RealSense D435 camera. I think you can use the D455 with the same procedure.
@장성숙-l9z 8 months ago
Hello! I'm jetsonmom, a 65-year-old living in Korea and a copycat (I don't know much about Python, but I like to try things out and get help from acquaintances and ChatGPT-4 for things I don't know). I tried following the teacher's video using an Orin Nano provided by NVIDIA (I am also a Jetson Nano Ambassador). Thank you so much for sharing. My image is JetPack 6.0 DP, so my torch and torchvision versions differ from the ones in the video: I installed torch 2.2.0 and torchvision 0.17.0. I don't know if it is because of that difference, but when I ran the Python sample program, the results came out well. However, recognition is intermittent. That seemed strange, so I looked into it and found that CUDA was not being used. When installing cuDNN it seemed like I needed version 12, but it didn't work. Do I need to change to the same versions as the teacher? kzbin.info/www/bejne/foepd4GBg52jeJo
@robotmania8896 8 months ago
Hello 장성숙! When you are using the Jetson Orin, you don't have to install CUDA or cuDNN; they are installed by default. If CUDA is available, PyTorch will use it automatically. So it is strange that CUDA was not used. Have you installed a PyTorch version suitable for Jetson (aarch64 architecture)?
@장성숙-l9z 8 months ago
@@robotmania8896 Yes, I installed it. It's just that the execution speed is too slow, and I wanted to eliminate the intermittent interruptions, so I checked with a command whether CUDA is used when running, and it came out as not used, so I asked. Is there any way to make video processing faster? The result is:
0: 480x640 1 laptop, 1434.2ms
Speed: 5.9ms preprocess, 1434.2ms inference, 2.1ms postprocess per image at shape (1, 3, 480, 640)
0: 480x640 1 laptop, 1155.6ms
Speed: 4.0ms preprocess, 1155.6ms inference, 2.1ms postprocess per image at shape (1, 3, 480, 640)
0: 480x640 1 laptop, 1195.6ms
Speed: 1.8ms preprocess, 1195.6ms inference, 3.8ms postprocess per image at shape (1, 3, 480, 640)
0: 480x640 1 laptop, 1128.3ms
Speed: 3.8ms preprocess, 1128.3ms inference, 2.2ms postprocess per image at shape (1, 3, 480, 640)
0: 480x640 1 laptop, 1102.3ms
Speed: 2.2ms preprocess, 1102.3ms inference, 2.8ms postprocess per image at shape (1, 3, 480, 640)
0: 480x640 1 laptop, 1160.8ms
Speed: 3.4ms preprocess, 1160.8ms inference, 3.0ms postprocess per image at shape (1, 3, 480, 640)
0: 480x640 1 laptop, 1125.3ms
Speed: 4.8ms preprocess, 1125.3ms inference, 2.6ms postprocess per image at shape (1, 3, 480, 640)
@robotmania8896 8 months ago
Sorry for the late response. What GPU are you using? Yes, it is possible to make inference faster with a smaller image or model size.
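For reference, the inference times posted in the question above can be turned into a frame rate directly. A small sketch in plain Python (the numbers are copied from the posted log):

```python
# Average the per-frame inference times from the log above and convert to FPS.
# Values are milliseconds per frame, copied from the posted output.
inference_ms = [1434.2, 1155.6, 1195.6, 1128.3, 1102.3, 1160.8, 1125.3]

mean_ms = sum(inference_ms) / len(inference_ms)  # mean inference time per frame
fps = 1000.0 / mean_ms                           # effective frames per second

print(f"mean inference: {mean_ms:.1f} ms -> about {fps:.2f} FPS")
```

That works out to roughly 1186 ms per frame, i.e. under 1 FPS, which matches the "intermittent" behavior described and is consistent with inference running on the CPU rather than the GPU.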
@TheR0met a month ago
Hello! I'm facing an error saying: RuntimeError: Frame didn't arrive within 5000. Any solutions to this?
@robotmania8896 a month ago
Hi Romet Arak! Thanks for watching my video! It is difficult to say from only the information you have provided. Are you using the USB cable that came with the RealSense? A low-quality USB cable may cause problems.
@deepaknr7616 3 months ago
Hey, great video, thanks! I had one question: where do you specify the input of the stream? Like RTSP or a webcam?
@robotmania8896 3 months ago
Hi Deepak NR! Thanks for watching my video! The input of the stream (obtaining the frames) is at line 27 (frames = pipeline.wait_for_frames()) in the “yolov8_rs.py” script.
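For readers following along, here is a minimal sketch of that frame-grab step. The helper name frameset_to_bgr is illustrative, not from the script; the commented usage assumes pyrealsense2 is installed and a RealSense camera is connected.

```python
import numpy as np

def frameset_to_bgr(frames):
    """Convert the color frame of a pyrealsense2 frameset to a NumPy BGR image."""
    color_frame = frames.get_color_frame()
    if not color_frame:
        return None  # the frameset arrived without a color frame
    return np.asanyarray(color_frame.get_data())

# Usage sketch (requires a connected RealSense camera):
# import pyrealsense2 as rs
# pipeline = rs.pipeline()
# pipeline.start()
# frames = pipeline.wait_for_frames()   # the call at line 27 of yolov8_rs.py
# color_image = frameset_to_bgr(frames)
```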
@ahmetcaneskikale8423 3 months ago
Hello, the video is very nice, but I want to do it using a USB camera instead of a RealSense camera. Is it enough to skip the RealSense part?
@robotmania8896 3 months ago
Hi Ahmet Can Eskikale! Thanks for watching my video! Yes, it is enough to skip the RealSense part. To obtain camera frames, you just have to use the following code:
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
@AlainPilon 10 months ago
At line 39, shouldn't you call results = model(color_image) and provide the param `device=0` to use the GPU?
@robotmania8896 10 months ago
Hi bijan esphand! Thanks for watching my video! On the Ultralytics GitHub page there is no reference on how to use the model with the CPU or GPU. I guess that if a GPU is available, YOLO automatically chooses the GPU for inference.
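For what it's worth, Ultralytics' predict call does accept a device argument, so the device can also be forced explicitly rather than relying on auto-selection. A minimal sketch (the helper pick_device is illustrative; the commented usage assumes ultralytics and a CUDA-enabled PyTorch build):

```python
def pick_device(cuda_available: bool) -> str:
    """Map a CUDA-availability flag to the device string YOLO expects."""
    return "cuda:0" if cuda_available else "cpu"

# Usage sketch:
# import torch
# from ultralytics import YOLO
# model = YOLO("yolov8m.pt")
# device = pick_device(torch.cuda.is_available())
# results = model(color_image, device=device)  # device=0 also selects the first GPU
```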
@WangJin-ox4yf 4 months ago
Hi, wonderful video! I am wondering why I keep encountering the error "network is unreachable" when I run "pip3 install ultralytics"? I really appreciate your help!
@robotmania8896 4 months ago
Hi Wang Jin! Thanks for watching my video! It seems to be a network problem. Do you have an internet connection?
@Music-t5i 8 months ago
Great man, you are great. For three days we had been trying with the Jetson Orin Nano and our GPU would not work with YOLOv8, but your script and guidance are great; now our GPU works on the latest JetPack 6 OS on the Jetson Orin Nano. Appreciate your work 👌🏻
@robotmania8896 8 months ago
Hi Music! Thanks for watching my video! It is my pleasure if this video has helped you!
@Music-t5i 8 months ago
@@robotmania8896 Keep it up! Bring new videos like this, and ROS with YOLOv8 on the Jetson Orin Nano.
@thomasdunn1906 7 months ago
How did you get PyTorch to install? Everything has changed since this tutorial, and I cannot get PyTorch to install using the steps shown here.
@soasuitegc 8 months ago
Thanks for sharing!! What FPS did you manage to get? I can't get more than 5 FPS :(
@robotmania8896 8 months ago
Hi soasuitegc! Thanks for watching my video! The FPS largely depends on the YOLO model size and the Orin Nano model. Are you using exactly the same YOLO model and Orin Nano model as in my tutorial? It seems to me that the Orin Nano can do much better than 5 FPS.
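One way to check where you stand is to time the inference call directly. A rough sketch (the function name measure_fps is illustrative; the commented usage assumes a loaded YOLO model and a captured frame):

```python
import time

def measure_fps(infer, frame, warmup=2, runs=10):
    """Rough FPS estimate: call `infer(frame)` repeatedly and average wall-clock time."""
    for _ in range(warmup):      # warm-up iterations (the first calls are often slower)
        infer(frame)
    start = time.perf_counter()
    for _ in range(runs):
        infer(frame)
    elapsed = time.perf_counter() - start
    return runs / elapsed

# Usage sketch:
# fps = measure_fps(lambda f: model(f, verbose=False), color_image)
# print(f"~{fps:.1f} FPS")
```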
@robotmania8896 8 months ago
@@장성숙-l9z Thank you!
@kawthertrabelsi4996 2 months ago
Hello :) Is it possible to use a Jetson Nano 2 GB with 2 USB webcams and YOLOv8?
@robotmania8896 2 months ago
Hi Kawther Trabelsi! Thanks for watching my video! With a small image size and a small model, you will probably be able to run YOLOv8 on the Jetson Nano. But you will not be able to run inference on both cameras simultaneously; you will need to run inference sequentially.
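The sequential scheme described in this reply can be as simple as alternating camera indices. A sketch (the helper name camera_schedule is illustrative; the commented camera handling assumes cv2 and a loaded YOLO model):

```python
from itertools import cycle

def camera_schedule(n_cameras: int):
    """Yield camera indices 0..n-1 forever, one per loop iteration."""
    return cycle(range(n_cameras))

# Usage sketch with two USB webcams:
# caps = [cv2.VideoCapture(0), cv2.VideoCapture(1)]
# for cam_id in camera_schedule(len(caps)):
#     ok, frame = caps[cam_id].read()
#     if ok:
#         results = model(frame)  # one inference at a time; the cameras take turns
```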
@WangJin-ox4yf 5 months ago
Thanks a lot for making this video!!! I am just wondering whether this tutorial is also suitable for the Jetson Nano?
@robotmania8896 5 months ago
Hi Wang Jin! Thanks for watching my video! If you would like to use Yolov8 with Jetson Nano, this video will help you. kzbin.info/www/bejne/oKCki3iLl7-Nr5o
@gianlucademusis8455 9 months ago
Hello... How can I recognize only one kind of object, like potholes, using this project? Thank you
@robotmania8896 9 months ago
Hi Gianluca De Musis! Thanks for watching my video! If you would like to recognize potholes, you have to use your own trained model (.pt file). Change ‘yolov8m.pt’ (line 22 in “yolov8_rs.py”) to your model's name.
@thomasdunn1906 7 months ago
Are there any updates? I cannot get PyTorch to install. The steps have changed since this video. I have had my Orin Nano for two weeks and still cannot get it to run inference on the GPU. I am starting to lose hope in the Orin Nano.
@robotmania8896 7 months ago
Hi Thomas Dunn! Thanks for watching my video! Where exactly are you experiencing a problem?
@thomasdunn1906 7 months ago
@@robotmania8896 Thank you so much for the response! The commands on the Installing PyTorch for Jetson page have changed. I had to re-flash my SD card with the version you were using (5.1.2) and then pause your video as you were highlighting commands and enter them manually. I was finally able to install PyTorch that way! I can now run YOLOv5 on my computer with GPU inference. I am having a problem getting YOLOv8 to work, however; I cannot get bounding boxes to show when I use v8 on my webcam.
@thomasdunn1906 7 months ago
@@robotmania8896 The Installing PyTorch for Jetson Platform page has changed; I believe it is to support JetPack 6. I had JetPack 6 installed and it would not work. I flashed JetPack 5.1.2 and it still did not work. I ended up watching your video, pausing as you highlighted the commands, and entering them manually, and it worked! Thank you so much for your help!!
@robotmania8896 7 months ago
@thomasdunn1906 I am glad that my video has helped you! Were you able to run YOLOv8?
@thomasdunn1906 7 months ago
@@robotmania8896 Yes! Thank you so much. I could not have done it without your video.
@충현이-p1r 9 months ago
Thanks for the video, which was greatly useful!!! But I couldn't find a way to download the yolov8_rs file that you said to download from Google Drive. How and where can I get this file?
@robotmania8896 9 months ago
Hi 충현이! As mentioned near the end of the video, the Google Drive link is in the description. Please open the description and you will find the link.
@충현이-p1r 9 months ago
@@robotmania8896 Thanks for your comment. Sorry for bothering you, but can you tell me the version of OpenCV you are using?
@robotmania8896 9 months ago
@@충현이-p1r I haven't got the Jetson Orin at hand right now, so I cannot check. But I didn't do anything special while installing OpenCV. If you install the version specified in the “requirements.txt” file, the program should work. Are you having any trouble with OpenCV?
@truong-7490 9 months ago
Please, what Ubuntu and Python versions are you using? I'm a newbie to YOLO and robotics! Respect, and thanks for your help.
@robotmania8896 9 months ago
Hi Trường! Thanks for watching my video! It is Ubuntu 20.04; the Python version is 3.8.
@enesschebbaki1226 10 months ago
Hi, I would like to ask whether it would be possible to do the same with pose detection? Let me explain: I would like to take the keypoints on the color view and put them in real time onto the depth view. Basically, to do exactly what you did, but using not only the bounding box but also the keypoints. I don't understand how to achieve this; I would be grateful if you could answer me. Thanks!
@robotmania8896 10 months ago
Hi Eness Chebbaki! Thanks for watching my video! Yes, it is possible. In the case of pose detection, as described on the page below, you have to extract the keypoints from the results just as I did for the bounding boxes in this tutorial. Then you will be able to plot the coordinates of the keypoints on the depth image. docs.ultralytics.com/modes/predict/#masks
@enesschebbaki1226 10 months ago
@@robotmania8896 Thank you for replying! I have read the documentation and am trying to do the same thing you did with the boxes, but I am not getting any results. I don't know if it's because of the format in which the tensor containing the coordinates of the keypoints is output. Could you help me out?
@robotmania8896 10 months ago
@@enesschebbaki1226 Here is the sample code to extract the coordinates of the keypoints.
from ultralytics import YOLO
import os

model_directory = os.environ['HOME'] + '/pose/yolov8m-pose.pt'
model = YOLO(model_directory)
source = "sample.jpeg"
results = model(source)
for r in results:
    keypoints = r.keypoints
    for keypoint in keypoints:
        b = keypoint.xy[0].to('cpu').detach().numpy().copy()
        print(f"b : {b}")
@enesschebbaki1226 10 months ago
@@robotmania8896 Fortunately, I was able to extract the keypoints. I used the same approach as yours. Thank you infinitely for your willingness and time! Your videos are always inspiring 💪🏼
@赵子铭-v1i 8 months ago
This tutorial saved my life!! GREAT video
@robotmania8896 8 months ago
Hi 赵子铭! Thanks for watching my video! It is my pleasure if this video has helped you!
@TheRecep27 3 months ago
Hello, is it possible to run YOLOv7 with these settings?
@robotmania8896 3 months ago
Yes, I think you will be able to run YOLOv7 with the libraries that have been installed in this tutorial.
@TheRecep27 3 months ago
@@robotmania8896 Thank you
@seanwhen 6 months ago
This is great, brother! This installation video is the answer to my urgent need.
@robotmania8896 6 months ago
Hi SeanWhen! Thanks for watching my video! It is my pleasure if this video has helped you!
@guillermovc a year ago
Hi, do you plan to make videos related to the Intel Neural Compute Stick? Thanks for your tutorials!
@robotmania8896 a year ago
Hi Guillermo Velazquez! Thanks for watching my video! For now, I am not considering making a tutorial for the Intel Neural Compute Stick.
@nhatpham5797 4 months ago
Hello, what Python version are you using?
@robotmania8896 4 months ago
Hi Nhật Phạm! Thanks for watching my video. I use Python 3.10 in this tutorial.
@nhatpham5797 4 months ago
@@robotmania8896 Can I use Python 3.11 to install Ultralytics on the Jetson Nano? I get an error when I run "pip3 install ultralytics".
@robotmania8896 4 months ago
Yes, you should be able to install Ultralytics on Python 3.11. Please refer to this page. docs.ultralytics.com/quickstart/
@nhatpham5797 4 months ago
@@robotmania8896 Hey, how can I check my JetPack version?
@nhatpham5797 4 months ago
I use the Jetson Nano, not the Jetson Orin Nano. Is setting these up any different? I can set up the pyrealsense2 lib.
@xp-4yt a year ago
First of all, I want to thank you for another great and very detailed tutorial with a nice explanation. I am pretty new to robotics, and your videos are crucial for avoiding a lot of the troubles newbies run into in this field. I also have a small question about the video: as I understand it, the way object detection should be used with the Jetson is a DeepStream engine implementation. The engine seems to be much faster than a .pt model. Am I wrong? By the way, can you give me a clue how to work with the navigation stack in ROS 2 to add conditions and maybe some stopping criteria? I want to use object detection to create an autopilot which takes road signs, traffic lights, etc. into account. P.S. I am sorry for my English. P.P.S. Thanks for the new video once more!
@robotmania8896 a year ago
Hi xp-4yt! Thanks for watching my video! Yes, if you need to push your inference time to the limit, you should use the DeepStream engine. But I think describing several topics in one tutorial could be confusing, so I will make another tutorial for DeepStream. As for the navigation stack, some of the features you have mentioned could probably be achieved using Waypoint Task Executors. navigation.ros.org/plugins/index.html#waypoint-task-executors Also, in this tutorial I explained how to navigate to detected objects. It may also help you. kzbin.info/www/bejne/hZObnXqFfaeln8k
@xp-4yt a year ago
@@robotmania8896 Thanks a lot! This information is extremely helpful! ☺️
@KobeNein 11 months ago
@@robotmania8896 I can't wait to see the DeepStream tutorial
@robotmania8896 11 months ago
@@KobeNein I am planning to make a tutorial about DeepStream in a few weeks.
@xp-4yt 11 months ago
@@robotmania8896 I am also very impatient about it. I am dealing with DeepStream right now, and I have no idea how to use an engine with topic data. It seems that topic information should be converted into a video stream, but I have synchronization problems... Is it possible to use a DeepStream engine with ROS 2 in real time at all? Forums say no.
@mrortach 4 months ago
Can you please explain how to do this on a Windows computer?
@robotmania8896 4 months ago
Hi MrOrtach! Thanks for watching my video! I am currently not planning to make a video regarding RealSense and Windows. But here you can find a detailed explanation of how to build librealsense on Windows. dev.intelrealsense.com/docs/compiling-librealsense-for-windows-guide
@prescriptionoatmeal7970 a month ago
Quick and easy!
@robotmania8896 a month ago
Hi Prescription Oatmeal! Thanks for watching my video! It is my pleasure if this video has helped you!
@nurulnajwakhamis2680 4 months ago
Thank you for the tutorial, but I've got a problem: I cannot find the OFF folder.
@robotmania8896 4 months ago
Hi nurulnajwa khamis! Thanks for watching my video! Is any other folder generated, or is the folder not generated at all?
@nurulnajwakhamis2680 4 months ago
@@robotmania8896 Hi, thank you for your reply. There is no folder, and no other folder is generated. I even got an error saying the pyrealsense2 module was not found. But I tried changing to another version of librealsense, v2.54.2, and it's working!
@robotmania8896 4 months ago
@@nurulnajwakhamis2680 I am glad that you made it work!
@autumnsoybean5952 11 months ago
Awesome! Great tutorial!
@robotmania8896 11 months ago
Hi autumn soybean! Thanks for watching my video! It is my pleasure if this video has helped you!
@csrasel6133 6 months ago
Love this!
@robotmania8896 6 months ago
Hi CS RASEL! Thanks for watching my video! It is my pleasure if this video has helped you!
@mr.9489 2 months ago
Thank you for saving me ㅠ.ㅠ
@robotmania8896 2 months ago
My pleasure!
@준호김-k6y 9 months ago
Great video
@robotmania8896 9 months ago
Hi 준호 김! Thanks for watching my video! It is my pleasure if this video has helped you!