Hey everyone! Two things. First, the written guide now has instructions on how to BOTH decrease the resolution AND convert to NCNN to get greatly improved FPS (thank you very much Philipcodes from the forums). Second, talk about rough timing: YOLO11 launched the day after this video, but it will work perfectly fine with this guide. In our guide we have the line: model = YOLO("yolov8n.pt") You will just need to change it to: model = YOLO("yolo11n.pt") to start using YOLO11.
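For anyone who wants both tweaks in one place, here is a minimal sketch using the Ultralytics API. The helper names and the 320 px resolution are just examples, and NCNN export writes a model folder next to the .pt file:

```python
def pick_weights(version="v8"):
    """Map a YOLO version tag to the matching nano weight file name."""
    names = {"v8": "yolov8n.pt", "11": "yolo11n.pt"}
    return names[version]

def run_detector(weights, imgsz=320, export_ncnn=False):
    """Load a model, optionally convert it to NCNN, and run at a lower
    resolution. Needs the ultralytics package and a connected camera."""
    from ultralytics import YOLO  # imported here so the helpers above work anywhere
    model = YOLO(weights)
    if export_ncnn:
        model.export(format="ncnn")  # writes e.g. the yolov8n_ncnn_model/ folder
        model = YOLO(weights.replace(".pt", "_ncnn_model"))
    # lower imgsz = fewer pixels per inference = higher FPS, lower accuracy
    return model.predict(source=0, imgsz=imgsz, show=True)
```

So switching to YOLO11 with NCNN would be `run_detector(pick_weights("11"), export_ncnn=True)`.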
@MichaelSchultzSF 3 months ago
Love this! Just picked up a pi5 and a camera, going to start here for sure. Your vids are always so easy to follow and super helpful. Keep it up!
@kyuya5738 3 months ago
Thank you! This is the best beginner tutorial I've come across. Can you please do a video on implementing the AI Kit to boost FPS as well?
@Core-Electronics 3 months ago
The AI Kit is still quite fresh software-wise. Right now they have a fantastic set of instructions for getting it going, but it doesn't run out of a Thonny script like this one does: www.raspberrypi.com/documentation/accessories/ai-kit.html#getting-started
@prodcalls 2 months ago
Amazing video. Thank you so much sir, you deserve more views!
@billycartdemons 3 months ago
Great info and video! Will definitely use some of this.
@tunglee4349 3 months ago
This is a very helpful tutorial!!!! Nice work ❤
@quinxx12 7 days ago
Thank you! Really appreciate you sharing this useful package! I have a slightly different use case in that I want to identify things which aren't covered by the COCO training data. Does it make sense to collect my own data and just create a training dataset in the COCO format? Is it as straightforward as that?
@Core-Electronics 6 days ago
You will need to train your own YOLO model, which is a lot easier nowadays but still a little involved. A first try might be to search for any community-trained models available on Hugging Face; someone might have already trained one for you! huggingface.co/models?other=yolo You can find the model called "best.pt" under Files and versions (after training a custom model it will be called "best.pt" by default). Simply download and copy that into the same folder as your scripts and other models, and then modify your Python code to tell it to use that model. If not, you may need to look into training your own, which is quite processing-intensive (on our RTX 4080 it took nearly 3 hours). If you don't have the hardware to do so, you might wanna check out a service like Roboflow, where you may even be able to do it for free! Best of luck!
@hehehehagrrrr1319 2 months ago
How can I train with my own dataset?
@Core-Electronics 2 months ago
Training with your own data is a little more involved. Ultralytics has some great documentation on it, but be warned, you will need some decent hardware. On a 4080 it usually takes 2 or so hours, without a GPU it may take days or a week, and on a Raspberry Pi it may take months. docs.ultralytics.com/yolov5/tutorials/train_custom_data/#23-organize-directories
@viniifsc 3 months ago
Nice vid! Can you make a tutorial working with the AI Kit or the Coral Edge TPU? I'm interested to see the performance gain on those.
@Core-Electronics 3 months ago
It's not a simple task to run this code on a dedicated AI chip; for the AI Kit you need to jump through a few hoops to convert the model to the specific format it needs. The AI Kit library does come with YOLOv8n ready to go, and we have seen reports of people getting FPS in the 50-60 range, which is incredible! Right now it is a little difficult to actually use the AI Kit in a project (it feels a little more like a tech demo), but software support for it is developing rapidly, so that shouldn't be a problem for too long. When the software support is mature enough you will definitely find a video here!
@germancruzram a month ago
Do you know of any alternative to connect the Raspberry Pi camera via USB instead of the flex cable (which is very short)?
@honchinleng9283 a month ago
I saw your videos using OpenCV and now this one with YOLO. Which one should I start with as a beginner? I'd appreciate your advice.
@Core-Electronics a month ago
These projects have progressed a lot since we made the old video; this one is easier, quicker to get going, and runs more than 10x faster! It actually uses OpenCV under the hood as well!
@mauchmaxamadeus 2 months ago
Can it also recognize small flying animals such as wasps, flies or even mosquitoes?
@Core-Electronics 2 months ago
I think you may have a hard time with that; they may be too small to be seen by the camera, and too fast and blurry! On top of this, I don't think the model will be able to identify them, sorry.
@weihong8337 a month ago
Thank you!!! I made it. I used VNC, not HDMI. FPS: 1.7
@weihong8337 a month ago
I tried the NCNN conversion, and the new FPS: 6
@puneethff4927 a month ago
@@weihong8337 Brother, what do you mean? What is NCNN? How do I use it?
@sergeivoronov5161 2 months ago
Thanks for the video. What's the approximate maximum distance at which detection will work? Or how large on screen does an object need to be to be detected? And are these parameters affected by video resolution and model size?
@Core-Electronics 2 months ago
A lower resolution will lower the distance at which it can detect, and a smaller model will also lower it. We found that the medium model, when converted to NCNN (so at the standard 640x640), could recognise a cup at about 8-10 metres away.
@mehulkini9384 a month ago
@Core-Electronics I need your help. Basically, I am using this for my quadcopter, so I want to run YOLOv5 on my Pi 5. Can you tell me which camera would be good? The objects I have to find are plastic and styrofoam. How do I train YOLOv5 to do that?
@Core-Electronics a month ago
Really any camera will do, the Pi camera module v2 and v3 might be a good pick (you can also use a webcam and we have some code in the written guide linked below the video). Your issue would be in getting the model to detect Styrofoam and plastic. Training a model is quite involved and without a GPU can take several days. There are some pre-trained models that you can find here that might fit your needs, but if not you may need to deep-dive into training your own model, which we unfortunately don't cover 😭. huggingface.co/models?other=yolo
@lumostoyourday1976 10 days ago
Would I be able to use this for a drone that detects certain objects and navigates towards them? Also would I still be able to use the Raspberry Pi 4 Model B 2019 Quad Core?
@Core-Electronics 6 days ago
The results data does contain information about the position of the object on screen; how you would use this to guide the drone may be a little more difficult. It can run on the Pi 4, just a bit slower than on the Pi 5. We have seen people having trouble using YOLO World on the Pi 4, but the nano model should be able to run.
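To expand on the position data: each Ultralytics result carries bounding boxes (e.g. `results[0].boxes.xyxy` gives x1, y1, x2, y2 per detection), and a very rough steering rule can be built from the box centre. A sketch, where the 0.2 dead-zone and the helper name are purely illustrative:

```python
def steer_from_box(box, frame_width):
    """Turn a detection box (x1, y1, x2, y2) into a rough steer command
    based on how far the box centre sits from the frame centre."""
    x1, _, x2, _ = box
    centre = (x1 + x2) / 2
    offset = (centre - frame_width / 2) / (frame_width / 2)  # -1 .. 1
    if offset < -0.2:
        return "left"
    if offset > 0.2:
        return "right"
    return "forward"
```

Real drone control would need smoothing and a flight controller link on top, but this is the core of turning detections into guidance.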
@odko1137 a month ago
Hello, thanks for the video. I have a question: is it possible to spin a motor when an animal is detected? I don't know how to do it.
@Core-Electronics a month ago
You would first need to get YOLO to detect the animal. Here is a list of all the things in the COCO library that can be detected: tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/ Then you would need to connect up a motor driver and motor. We have a guide on that here to get you started! kzbin.info/www/bejne/m5KZpYampcyNorsfeature=shared And if you need a hand with it, we have a maker community forum where lots of makers can help out with your project! forum.core-electronics.com.au/
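The detection side of that can be sketched like this. The class names are the COCO animal labels; the gpiozero wiring and the GPIO pins 17/18 are placeholders only, not from the video:

```python
ANIMALS = {"bird", "cat", "dog", "horse", "sheep", "cow", "elephant",
           "bear", "zebra", "giraffe"}  # the animal classes in COCO

def animal_detected(labels, wanted=ANIMALS):
    """True if any detected class name is one of the animals we care about."""
    return any(label in wanted for label in labels)

def maybe_spin(labels):
    """Spin the motor when an animal is in frame. gpiozero's Motor class is
    assumed, driven through a motor driver; pins are example placeholders,
    and real code would create the Motor once, not per frame."""
    if animal_detected(labels):
        from gpiozero import Motor  # Pi-only dependency, imported lazily
        Motor(forward=17, backward=18).forward()
```

The `labels` list would come from the per-frame results, e.g. by mapping each box's class index through `model.names`.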
@rbbala3589 3 months ago
Nice. From India.
@Akashplays-v2i 24 days ago
Sir, can you show the Raspberry Pi 4B setup from the basics, please sir 😢
@ryandx5973 20 days ago
Good morning! I'm doing a technician degree and I have to do a big project now; it is similar to a bachelor's degree. I chose the topic AI object detection with Raspberry Pi. Am I allowed to use your script? I will cite the source from the guide. Also, is there a way I can modify the script to optimise it a bit? What topics would I need to research? Thank you in advance!
@ryandx5973 20 days ago
I have a cat :D! My idea is to put the camera in front of our garden door. When the object "cat" is detected for 5-10 seconds, it sends an email to me, so if I'm in the living room I know I have to open the door.
@Core-Electronics 20 days ago
Very nice project! We actually derived a lot of this code from the Ultralytics website itself; there is a lot of information over there that might help: www.ultralytics.com/ But you are more than welcome to use our code, or go back and use theirs directly. What are you looking to do optimisation-wise? In the written guide we have some better tips on improving FPS, but if you want to optimise anything else about the code, large language models like ChatGPT can help greatly. They can also be a good way to find out what you need to research. We also have a maker forum with lots of people who help out with this sort of stuff, so if you need a hand feel free to post over there: forum.core-electronics.com.au/ Good luck with your project!
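The "cat in frame for 5-10 seconds" idea above is a small state machine. A sketch of a dwell timer you could feed each frame's labels into, with the class and parameter names purely illustrative:

```python
class DwellTrigger:
    """Fire once a label has been continuously present for `hold` seconds,
    e.g. email yourself when 'cat' has been in frame for 5 s."""

    def __init__(self, label, hold=5.0):
        self.label, self.hold, self.since = label, hold, None

    def update(self, labels, now):
        """Call once per frame with the detected labels and current time."""
        if self.label in labels:
            if self.since is None:
                self.since = now  # label just appeared; start the clock
            return (now - self.since) >= self.hold
        self.since = None  # label gone; reset
        return False
```

In the detection loop you would call `trigger.update(labels, time.monotonic())` each frame and send the email (e.g. via smtplib) when it first returns True.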
@pamus6242 3 months ago
I can't believe how simple, uncomplicated and pragmatic this video is! However, is there any way to have a pass-through to outsource that compute to an x86 system or a couple of RPi 5 clusters? Also, this thing could run at full frame rate on that Odroid with that Rockchip monster and 16GB RAM. Will give it a try, but I need to get myself an RPi 5 first; I already have an RPi 4.
@Core-Electronics 3 months ago
The Ultralytics implementation of YOLO is very cross-platform, so if you can get it set up on an x86 system, you should be able to use nearly the same Python code we cover here! In terms of the Odroid, it may come down to an issue of optimisation; even when we convert to NCNN it still doesn't fully utilise all of the Pi's hardware, so we would need to test. And RAM isn't a big factor here; the biggest model barely uses 2GB of RAM, which is incredible! Best of luck when you give this a go!
@pamus6242 3 months ago
@@Core-Electronics Wow, OK! Chris from ExplainingComputers did a video yesterday on a new Radxa with an Intel N100 chip and a similar build to an RPi 5. That x86 thing could do it, just guessing. I have a tiny ThinkCentre lying around with an i7 6700. Now all I need is to be able to connect the camera to the PCIe interface, or to find some USB module that can connect to the cam. May need to research more...
@feather_jp8 2 months ago
I'm trying to automate something based on object recognition and I was wondering if you might be able to help me out. Specifically, I want it to play a sound whenever it detects certain objects; for example, when it sees a person, it would play a corresponding .wav file.
@Core-Electronics 2 months ago
You can easily achieve this with the Pygame library. We don't have a specific tutorial on it, but you can find a million others online demonstrating how to use it. The important lines should be something along the lines of:
import pygame
pygame.mixer.init()
pygame.mixer.music.load("myFile.wav")
pygame.mixer.music.play()
You'll just need to whack the .wav file in the same folder as the object detection script. If you get stuck or need a hand, feel free to chuck a post on our community forums!
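Tying that to the detections, one small sketch. The SOUNDS mapping and file names are examples only, and pygame is imported lazily so the lookup logic works anywhere:

```python
SOUNDS = {"person": "person.wav", "dog": "dog.wav"}  # example label -> file map

def sound_for(label):
    """Return the .wav file for a detected class, or None if we have none."""
    return SOUNDS.get(label)

def play(label):
    """Play the matching sound. Assumes pygame is installed and the .wav
    files sit alongside the detection script."""
    path = sound_for(label)
    if path:
        import pygame  # deferred: only needed when actually playing audio
        pygame.mixer.init()
        pygame.mixer.music.load(path)
        pygame.mixer.music.play()
```

In the detection loop you would call `play(label)` for each new detection, perhaps with a cooldown so it doesn't retrigger every frame.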
@armanddewet9700 a month ago
Are you able to use any USB camera for this type of integration?
@Core-Electronics a month ago
We have some code in the written guide that uses a webcam instead. There can be some issues with the colour profile used by the camera, and we talk a little about it in there. core-electronics.com.au/guides/raspberry-pi/getting-started-with-yolo-object-and-animal-recognition-on-the-raspberry-pi/#appendix-using-a-webcam
@Username-dr6ru 2 months ago
Can you use yolo world to control hardware as well, or does that only work with the base models?
@Core-Electronics 2 months ago
The hardware control script can definitely be modified to use YOLO World. You should only need to change the line where we choose the model to use, and add in the line where we prompt it what to look for!
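A sketch of that change, with the model file name and `set_classes` usage per the Ultralytics YOLO World docs, and the `as_prompts` helper just an illustrative convenience:

```python
def as_prompts(names):
    """Normalise a comma-separated string or a list into the prompt list
    that set_classes expects."""
    if isinstance(names, str):
        names = names.split(",")
    return [n.strip() for n in names if n.strip()]

def load_world(prompts):
    """Swap in YOLO World and tell it what to look for; requires the
    ultralytics package and the yolov8s-world.pt weights."""
    from ultralytics import YOLO  # deferred so as_prompts works anywhere
    model = YOLO("yolov8s-world.pt")
    model.set_classes(as_prompts(prompts))
    return model
```

The rest of the hardware control script (reading results, driving GPIO) stays the same.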
@karzokalori89 2 months ago
Mate, really great, educational, and interesting video. Could you show this with the new AI HAT+ 26 TOPS from Raspberry Pi? It would be very interesting to learn how to extract the output in the form of a CSV file or something similar, to use the information to find out how many people pass by the camera and at what time, or how many cyclists, etc. Maybe even make graphs from this?
@Core-Electronics 2 months ago
We definitely have some AI HAT videos in the pipeline (but the setup and usage are very different). I don't know about data logging though. Large language models like ChatGPT and Claude would be more than capable of helping you write the code you're looking for!
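As a starting point for the logging side, a minimal sketch using only the standard library. The file name and columns are arbitrary; counting people or cyclists per hour can then be done in a spreadsheet or with pandas:

```python
import csv
from datetime import datetime

def log_detections(labels, path="detections.csv"):
    """Append one row per detected object with an ISO timestamp, so the CSV
    can later be grouped by hour to count people, cyclists, etc."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for label in labels:
            writer.writerow([datetime.now().isoformat(), label])
```

Called once per frame (ideally only for newly-appeared objects, so one passer-by isn't counted 30 times a second), this gives exactly the kind of data graphs can be built from.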
@murraystaff568 3 months ago
Nice video! I just bought an AI kit from you guys (today!) hoping this will boost fps significantly?
@Core-Electronics 3 months ago
There are a few steps between running the models that come with the AI kit, and getting YOLO to run on it. (But we may be working on an AI HAT guide as we speak 😏)
@mohammedshoaib2752 3 months ago
Can we implement this on an RPi 4B with 4GB RAM (using an external camera)?
@Core-Electronics 3 months ago
We haven't tested it, but it will most likely work on a Pi 4. Just be prepared as it may be very slow, the Pi 5 is about 2-3x faster than the Pi 4.
@RohanKumar-lm8ko a month ago
Can you help me with this? Running:
pip install ultralytics[export]
fails with "These packages do not match the hashes from the requirements file."
@Core-Electronics a month ago
I previously had this issue and it was caused by not running the first set of commands properly:
sudo apt update
sudo apt install python3-pip -y
pip install -U pip
If that doesn't work, a fresh installation of Bookworm OS might help. If all that fails, feel free to post on our community forum topic for this video; we have lots of makers over there that can help! forum.core-electronics.com.au/t/getting-started-with-yolo-object-and-animal-recognition-on-the-raspberry-pi/20923
@rbbala3589 3 months ago
Can you create a that take things using object detection pls😁
@GenreFluid 3 months ago
Can you use this for Wildlife live streaming?
@Core-Electronics 3 months ago
You most definitely could! The troubles may be in supplying power to it, and getting it an internet connection to send data back. You would also need to experiment to see which types of wildlife it will pick up. It may recognise everything 4-legged as a dog!
@joshuamiguelroa2962 2 months ago
Can I use the Raspberry Pi 4B and a Raspberry Pi camera?
@joshuamiguelroa2962 2 months ago
I'm working on a project that works with IoT and is connected to an ESP32.
@Core-Electronics 2 months ago
We haven't tested it, but it will most likely work on a Pi 4; Ultralytics says it has support. Just be prepared, as it may be very slow; the Pi 5 is about 2-3x faster than the Pi 4.
@HarshitGautam-bj3lc 3 months ago
Hey, loved your content! I am an intern at ISRO (Indian Space Research Organisation) and I am working on deploying a YOLOv8 model on a Raspberry Pi. Can you help me deploy it with the Raspberry Pi AI Kit and improve the model for real-time inference? What format would be best to deploy? I have seen a few videos that say to convert the model into ONNX, then convert it into the Hailo HEF format using the Hailo Dataflow Compiler or Model Zoo, then copy it over and run the code. Am I going right? Your help is highly appreciated.
@Core-Electronics 3 months ago
That sounds exactly right! The AI Kit only works with the Hailo .HEF model format, and the easiest way is to first convert the model to ONNX, then to HEF. Just be aware that when you convert it to ONNX you will often "bake in" a lot of configuration. When it's in PyTorch format we can change the resolution, and for things like YOLO World we can change the prompts for it to look for, but when we convert it to ONNX it locks these in and we can't change them. So get the settings right, convert to ONNX, then to HEF, and run it on the HAT. The usage is different from our script here though; we are using a nice library which lets us run it with high-level Python code, and it's not as easy yet to do this with the kit. Best of luck mate!
@Core-Electronics 3 months ago
Another thing! If you run into issues with the AI Kit, check out the AI Camera that just launched; it uses the Sony IMX500. We have had a lot more ease in using it and writing custom scripts with it. It may not be as powerful, but it still runs well.
@HarshitGautam-bj3lc 2 months ago
@@Core-Electronics Thanks a lot.
@phafoubest8268 2 months ago
I keep getting a dependency error when installing ultralytics[export]. Has anyone encountered this before, and how can it be fixed?
@Core-Electronics 2 months ago
Have you tried running the line multiple times? It installs quite a lot with that line and you may need to run it a few times to let it do its thing. If that doesn't fix it, feel free to post your issue on our dedicated community forum topic for this video. Try and include some information about the specific dependency issue. We have a lot of makers over there that are happy to help! forum.core-electronics.com.au/t/getting-started-with-yolo-object-and-animal-recognition-on-the-raspberry-pi/20923/6
@sams9089 2 months ago
The NCNN portion of the code doesn't work for me! I get the error "ModuleNotFoundError: No module named 'ncnn'". I have the exact lines of code running, and the main code works as well, so I'm unsure how to fix this.
@Core-Electronics 2 months ago
Is this when running the conversion script or trying to run the object detection code after converting it? Make sure that your script is saved and is in the same folder as all your other code and models. If this still doesn't work, feel free to chuck a post on our community forum topic for this video, we have lots of makers over there that can help. forum.core-electronics.com.au/t/getting-started-with-yolo-object-and-animal-recognition-on-the-raspberry-pi/20923 We are also in the process of updating the NCNN conversion section as we have found a better way so that should be up sometime today if you want to give it a try!
@sams9089 2 months ago
@@Core-Electronics This is when running the conversion script. It tries to run update but spits out: AutoUpdate skipped (Offline) I’ll post on the forum but thanks!
@Lp-ze1tg 2 months ago
How slow will it be on pi 4?
@Core-Electronics 2 months ago
Probably about 2-3 times slower 😞
@Thebackbencher17 3 months ago
Can we do it on an RPi 4?
@Core-Electronics 3 months ago
We didn't test it on an RPi4, but it should work pretty much the same, Ultralytics says that it is supported. Just be ready for it to run about 2x slower :(
@Arctics04 24 days ago
Yes, but I can't get above 2 FPS; it's more like 1 FPS.
@tumultuouscornucopia a month ago
Half of this is missing. (1) You don't say you need a sudo apt upgrade after the sudo apt update. (2) As far as I can tell, the ultralytics install does not install PyTorch so that is another step. (3) There seems to be a load of settings needed to make the camera work - although these may be out of date, I can't tell because I cannot make the install work. Given that you show setup from a new set of components all that stuff is necessary. All I get running your tutorial is a load of errors about torch>=1.7.0 (no - re-running does not magically fix the issue).
@Core-Electronics a month ago
Sorry to hear you are having issues. This installation process was taken directly from Ultralytics who have made most of the modern YOLO models. Running apt upgrade won't hurt but it's not entirely needed here as we are mainly focused on ensuring that Python and pip are up to date. You may have encountered an issue in your installation process as it will most definitely install Pytorch. That or you may have an issue with your virtual environments. The camera settings can vary depending on the Pi and could be many things. Feel free to post your issue on our community forum post for this guide with a little bit of information about your setup and where the issue is, we have lots of makers over there that can help!
@kavingnanamurali4097 20 days ago
Anyone know why my colour saturation is off? People are appearing blue.
@Core-Electronics 17 days ago
That sounds like a colour space issue. We had these issues when using a USB webcam, as the red and blue channels were being swapped (and your mostly red-ish face becomes mostly blue-ish ahahaha). At the end of the written guide we have a script for USB webcams that fixes the colour space. Have a dig around with it, as it's likely to fix your issue!
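For the curious, that fix boils down to swapping the first and last channels, which is one line with NumPy (the same idea as OpenCV's COLOR_BGR2RGB conversion; the function name here is just illustrative):

```python
import numpy as np

def bgr_to_rgb(frame):
    """Swap the red and blue channels of an H x W x 3 frame. This fixes the
    'blue people' look when a camera hands you BGR but the display or model
    expects RGB (or vice versa)."""
    return frame[:, :, ::-1].copy()  # reverse the channel axis
```

It works in both directions, since swapping twice gets you back where you started.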
@nikpatel2605 2 months ago
If anyone has any ideas on how to use this with a night vision camera that turns lights on when a fox is detected, please let me know!