Great! This is a really new way of creating detection models, instead of fighting with those messy TensorFlow and conda packages.
@Ultralytics9 ай бұрын
Glad you like it!
@wsdea Жыл бұрын
Coordinates are in (xc, yc, w, h) format, not (x1, y1, x2, y2) format as said at 2:34. This got me confused for a minute
@Ultralytics Жыл бұрын
Yes that's right, box labels are in xywh normalized coordinates :)
@jeffchoate8712 Жыл бұрын
Thanks, I heard that and then spent 10 minutes searching to try and find where it's actually documented; I only really found something on Roboflow though.
@Ultralytics5 ай бұрын
Glad you figured it out! For future reference, you can find detailed information on bounding box format conversions in our Simple Utilities documentation docs.ultralytics.com/usage/simple-utilities/. It covers various formats and how to convert between them. 😊
@poor00404 ай бұрын
Saved me some time thanks 👍
@Ultralytics4 ай бұрын
You're welcome! Happy to help. If you have any more questions, feel free to ask. 👍😊
@codeinrust9 ай бұрын
YOLOv8 is so much easier to use than other models that require you to write tons of complex code, just to load the model and run inference. Thank you for making object detection (and related computer vision tasks) accessible to everyone!
@Ultralytics9 ай бұрын
Thank you very much for your kind words! We eagerly anticipate receiving more of your positive feedback in the future.
@amarboldbatzorig73137 ай бұрын
Super straightforward and good explanations. Thank you!
@Ultralytics6 ай бұрын
We are glad to hear your feedback. Thank you :)
@equaltopeace5161 Жыл бұрын
Can you show the later part of this code? I can't see it in its entirety in the video (!yolo task=detect mode=predict...). It is the penultimate code that appears in the video.
@Ultralytics Жыл бұрын
Could you kindly point out the specific second in the video where you encounter difficulty in viewing the code? The general command for detection is:
yolo task=detect mode=predict source="path/to/video.mp4" show=True
@equaltopeace5161 Жыл бұрын
OK, at 5:15: "!yolo task=detect mode=predict model=?content/runs/detect/train/weight/best.pt conf=0.5 source={dataset.location}/t..." I want to know what the next part of the code is 😥 @@Ultralytics
Thank you for your patient reply, I'll start my object detection assignment. @@Ultralytics 🥰
@karinafernandaperez86827 ай бұрын
@@equaltopeace5161 thank you for that question! I was missing that part, too!
@somerset0069 ай бұрын
@ultralytics Thank you so much for your response to my previous question. Another one I had was this: in my project, it would be helpful to use data augmentation techniques, such as fuzzifying the object being identified/tracked. Is there a way to do it at runtime, without saving the augmented images onto disk first? Thank you!
@Ultralytics9 ай бұрын
Yes, you can perform real-time data augmentation using libraries like OpenCV directly within your codebase. This allows you to dynamically augment images during runtime without the need to save them onto disk first.
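For example, here is a minimal sketch of on-the-fly blurring with OpenCV; the file path and blur parameters are placeholders, and in practice you would call something like this inside your own data-loading loop:
```python
import random
import cv2

def augment(image):
    """Randomly blur an image in memory; nothing is written to disk."""
    if random.random() < 0.5:
        k = random.choice([3, 5, 7])           # kernel size controls blur strength
        image = cv2.GaussianBlur(image, (k, k), 0)
    return image

frame = cv2.imread("path/to/image.jpg")        # placeholder path
frame = augment(frame)                         # augmented copy exists only in RAM
```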
@amirsavadkouhi250 Жыл бұрын
Hello, thanks for your tutorial. I want to run a test with custom data using Faster R-CNN, so do you have a video about that?
@Ultralytics Жыл бұрын
Thanks! A Faster R-CNN video is not available at the moment but may be available in the future. Thanks, Ultralytics Team!
@anhnguyenduongquoc3821 Жыл бұрын
Dear sir, I have a question: when I tried to install YOLOv8 with the code at 1:14, I got this error: "ValueError: Invalid 'mode='. Valid modes are ('train', 'val', 'predict', 'export', 'track', 'benchmark')." It doesn't match the result you got in this video. How can I fix it? Thanks a bunch!
@Ultralytics Жыл бұрын
You can omit the `!yolo mode=checks` command, as it has been deprecated in the latest PyPI packages. Thanks, Ultralytics Team
@anhnguyenduongquoc3821 Жыл бұрын
Thanks a lot, I got a great assessment from my supervisor.
@Ultralytics5 ай бұрын
You're welcome! Glad to hear it worked out. If you have any more questions, feel free to ask. 😊🚀
@bandaradasanayaka4743 Жыл бұрын
I need your help. I am currently working on a project in which I collected a dataset and labeled the females and males in the images separately. I need to get the male count and female count from a labeled image. I would like to know if it can be done. 🙏🙏
@Ultralytics Жыл бұрын
You can train Ultralytics YOLOv8 on your annotated dataset. Afterward, you can perform object detection on individual frames and apply filtering based on class names, such as "male" and "female."
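For example, here is a minimal sketch of counting detections per class name; the weights path is a placeholder, and the class names are whatever you used in your data.yaml:
```python
from collections import Counter
from ultralytics import YOLO

model = YOLO("path/to/best.pt")        # your trained weights (placeholder path)
results = model("path/to/image.jpg")   # run detection on one image

# Map each detected class index to its name and count the occurrences
names = results[0].names
counts = Counter(names[int(c)] for c in results[0].boxes.cls)
print(counts)                          # e.g. Counter({'female': 3, 'male': 2})
```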
@bandaradasanayaka4743 Жыл бұрын
@@Ultralytics I did the annotation and trained the dataset with YOLOv8. Can you tell me how I can do object detection on individual frames and filter based on class names? I am confused about that, please... 🙏🙏😥
@Ultralytics Жыл бұрын
You can ask your technical queries on Ultralytics GitHub or on the Discord server. Ultralytics GitHub: github.com/ultralytics/ultralytics/ Ultralytics Discord: ultralytics.com/discord
@jeremiahmarimon163 Жыл бұрын
Are all of these steps free to use, with no charge?
@Ultralytics Жыл бұрын
Yes, the steps are free to use. It's worth mentioning that Colab will have some limits in place for GPU usage on a free account.
@omerkaya5669 Жыл бұрын
I have too many images. Once training starts, it takes too long to scan the images. What should I do about this?
@Ultralytics Жыл бұрын
Rather than generating a cache on the disk, you can store data in RAM, which will yield speed and efficiency benefits. However, this approach will necessitate a larger amount of RAM.
@omerkaya5669 Жыл бұрын
I use Colab Pro+. Is there any command to cache data in RAM? @@Ultralytics
@Ultralytics Жыл бұрын
Certainly, you can utilize `cache=ram` when executing the training command. For example:
yolo train data="path/to/data.yaml" cache="ram"
For more information, you can check the Ultralytics YOLOv8 training arguments: docs.ultralytics.com/modes/train/#arguments
@omerkaya5669 Жыл бұрын
Thank you very much. Another question I have is that training is very slow in YOLOv8: 20 thousand images, batch size 32, and 100 epochs on a V100 GPU. @@Ultralytics
@Ultralytics Жыл бұрын
The only remedy for this situation is to use a more powerful GPU.
@TheRomanFour11 ай бұрын
Does this training method override the existing classes from before and only detect the ones it was trained for? I want to train it but also keep the pretrained classes in the model!
@Ultralytics11 ай бұрын
If you fine-tune the model on custom data, it will exclusively detect the classes you trained it on. For additional details, you can refer to our documentation: docs.ultralytics.com/modes/train/ Thanks
@TheRomanFour11 ай бұрын
@@Ultralytics I am wondering, is there a method to keep the old classes? For example, retrain it on boats but keep the old class of people?
@Ultralytics11 ай бұрын
@@TheRomanFour If you fine-tune the model on a custom dataset, the old classes will be overridden!
@samuelmiklos8313 Жыл бұрын
Hey! I am encountering an error while training a YOLOv8 model on Google Colab Pro+ with my custom dataset. During the initial epoch of training, I receive multiple instances of an Assertion '-sizes[i]
@Ultralytics Жыл бұрын
What batch size are you utilizing during the training process? If it's 64, try 32 or 16 to avoid CUDA assertion errors. Thanks, Ultralytics Team!
@yaminadjoudi4357 Жыл бұрын
Thank you for the video. Please, can we use YOLO for healthcare image classification?
@Ultralytics Жыл бұрын
Yes, you can use Ultralytics YOLOv8 for healthcare image classification. For more information, you can check our image classification docs: docs.ultralytics.com/tasks/classify/
@wilsonernst55539 ай бұрын
Love the video, you said the notebook was in the description but I can't find it anywhere
@Ultralytics9 ай бұрын
Thanks for the feedback! The Colab notebook is now included in the description. Alternatively, you can access it directly via the following link: colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb
@helengao-dy7kd8 ай бұрын
@@Ultralytics Maybe something is wrong. This notebook is not the one in the video.
@Ultralytics8 ай бұрын
@@helengao-dy7kd We regularly update the notebook for better support!
@Huds-ux1xb Жыл бұрын
What if we want to see recall and precision?
@Ultralytics Жыл бұрын
As you initiate the model training, the Precision and Recall for each epoch will be displayed in the CLI or Google Colab. Thanks Ultralytics Team!
@Huds-ux1xb Жыл бұрын
@@Ultralytics How about the F1 score?
@Huds-ux1xb Жыл бұрын
@@Ultralytics Does YOLOv5 work the same way as YOLOv8?
@Ultralytics Жыл бұрын
Yes, the Precision and Recall display is the same in YOLOv5 and YOLOv8.
@Huds-ux1xb Жыл бұрын
@@Ultralytics How about the F1 score? I need it. Is YOLOv5 still working in 2023? Any bugs on Google Colab?
@DIVAKAR-b5k Жыл бұрын
Can we implement this in a Jupyter Notebook for real-time video with YOLOv5?
@Ultralytics Жыл бұрын
Certainly, you can carry out this implementation within a Jupyter Notebook. However, it's essential to note that a GPU is necessary for efficient execution, as training on a CPU would result in a considerably slower training process. Additionally, YOLOv5 is well-suited for training models on custom data.
@DIVAKAR-b5k Жыл бұрын
@@Ultralytics I am running it on CPU but getting many errors.
@DIVAKAR-b5k Жыл бұрын
@@Ultralytics Can you send me the Jupyter Notebook code for real-time video detection?
@Ultralytics5 ай бұрын
Running YOLOv5 on a CPU can be challenging due to performance constraints. For real-time video detection, using a GPU is highly recommended. Unfortunately, I can't provide full Jupyter Notebook code here, but you can find detailed guides and examples in our documentation. For real-time video detection, you can start with the following steps:
1. Install Ultralytics:
```bash
pip install ultralytics
```
2. Load the model and perform inference:
```python
from ultralytics import YOLO

model = YOLO("yolov5s.pt")            # Load a pre-trained model
results = model("path/to/video.mp4")  # Perform inference on a video
```
For more detailed instructions, please refer to our FAQ docs.ultralytics.com/help/FAQ/. If you encounter specific errors, feel free to share them, and I'll be happy to help!
@deepaknr7616 Жыл бұрын
Can you please explain again what the imgsz argument is about?
@Ultralytics Жыл бұрын
The term "imgsz" stands for "image size," representing the dimensions at which the model processes input images. In simpler terms, it defines the size to which all images are resized before being presented to the model during the training process. For instance, if you set imgsz=416, all images will be transformed to a 416 x 416 size before undergoing model training.
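For example, a minimal training sketch (the data.yaml path is a placeholder):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Every training image will be resized to 416 x 416 before it reaches the model
model.train(data="path/to/data.yaml", epochs=50, imgsz=416)
```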
@deepaknr7616 Жыл бұрын
Thanks so much for your explanation
@Ultralytics5 ай бұрын
You're welcome! 😊 If you have any more questions, feel free to ask. Happy training! 🚀
@jakubkahoun83838 ай бұрын
Can you put a link to the whole Colab file? For example, at 0:53 I don't see the code, etc.
@Ultralytics8 ай бұрын
Yes, you can access the Colab notebook at the link: github.com/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb
It looks like you're trying to run a YOLOv8 prediction task in Google Colab. Your command looks good! If you encounter any issues, make sure your paths are correct and that you have the latest versions of `torch` and `ultralytics` installed. For more detailed guidance, you can refer to our documentation: docs.ultralytics.com/integrations/google-colab/ Happy coding! 🚀
@devkumaracharyaiitbombay53412 ай бұрын
Thanks, you saved my day. Google Colab suggested you when I was facing errors.
@Ultralytics2 ай бұрын
You're welcome! 😊 If you need more help with Google Colab, check out our guide here: docs.ultralytics.com/integrations/google-colab/. Happy coding!
@快美思11 ай бұрын
Thanks for your video! But is there any way to train on a 3D custom dataset?
@Ultralytics11 ай бұрын
Training with 3D datasets is currently not supported, but there's a possibility it will be available in the future. :) Thanks, Ultralytics Team!
@jesusmtz29 Жыл бұрын
Awesome video. Is there documentation on where to train for segmentation?
@Ultralytics Жыл бұрын
You can fine-tune YOLOv8 Object Segmentation on custom data by following the steps mentioned in the Ultralytics Docs: docs.ultralytics.com/tasks/segment/#train.
@bilalshahid7494 Жыл бұрын
Hi, can you please tell me how I can determine accuracy, precision, and F1 score on my testing data?
@Ultralytics Жыл бұрын
To determine accuracy, precision, and F1 score for object detection on your testing data:
1. Accuracy: Divide the number of correctly detected objects by the total objects in your testing data.
2. Precision: Divide the number of true positive detections by the total number of positive detections (true positives + false positives).
3. F1 Score: Use the formula 2 * (Precision * Recall) / (Precision + Recall), where Recall is the number of true positives divided by the total actual positives.
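As a quick worked example of those formulas (the counts below are made up purely for illustration):
```python
def detection_metrics(tp, fp, fn):
    """Compute precision, recall and F1 from detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example: 80 true positives, 10 false positives, 20 false negatives
print(detection_metrics(80, 10, 20))  # approximately (0.889, 0.8, 0.842)
```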
@bilalshahid7494 Жыл бұрын
@@Ultralytics but where do I get the confusion matrix? The confusion matrix in the "train" folder is not for the testing data right?
@Ultralytics5 ай бұрын
You're right! The confusion matrix in the "train" folder is for the training data. To get the confusion matrix for your testing data, you need to run the validation mode on your test dataset using the `model.val()` function. This will generate the confusion matrix and other performance metrics for your testing data. For more details, check out our performance metrics guide docs.ultralytics.com/guides/yolo-performance-metrics/.
@fabiodagostino752910 ай бұрын
Thanks for the video. Can you explain to me in detail how the labeling is done for YOLOv8?
@Ultralytics10 ай бұрын
Sure, you can use our data auto-annotator: docs.ultralytics.com/reference/data/annotator/
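For example, a minimal sketch of the auto-annotator (the image path and model names are placeholders you would swap for your own):
```python
from ultralytics.data.annotator import auto_annotate

# Uses a detection model to find objects and SAM to generate masks,
# then writes YOLO-format label files to an output folder
auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model="sam_b.pt")
```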
@pushpendrakushwaha604 Жыл бұрын
Hey! If I want to retrain the model for 50 more epochs, do I have to specify the location of the dataset folder too? I mean data.yaml.
@Ultralytics Жыл бұрын
Yes, it is required. It lets the code access the dataset.
@pushpendrakushwaha604 Жыл бұрын
@@Ultralytics Thanks
@Ultralytics5 ай бұрын
You're welcome! If you have any more questions, feel free to ask. Happy training! 🚀
@binbinding6267 Жыл бұрын
Bro, can you make a video on adding regression to estimate an object's weight, if possible? Thanks.
@Ultralytics Жыл бұрын
Thank you for the suggestions! We will certainly look into it.
@gonzalourbanos9930 Жыл бұрын
Hello, thanks for the video. However, I get an error when running "import utils" in the first cell of the tutorial. I am using Jupyter Notebook and I already have utils installed. Do you know how to solve this issue? Thanks!!
@Ultralytics Жыл бұрын
You can uninstall "utils" since it's not essential. Utils is already included as a module in Ultralytics YOLOv8. Once you've done that, you can easily utilize the Ultralytics package.
@ShadowD2C Жыл бұрын
For the training epochs, why does it say 19 images (i.e., only the validation set)? Shouldn't it be running the training on your larger training set?
@Ultralytics Жыл бұрын
In this context, 19 images serve as a validation data sample. If you acquire additional validation data, the count will adjust accordingly.
@ShadowD2C Жыл бұрын
@@Ultralytics So after every epoch it validates on the validation set? I'm not fully understanding how this works; any extra information is appreciated.
@Ultralytics5 ай бұрын
Yes, after each epoch, the model validates using the validation set to assess its performance. This helps monitor metrics like accuracy and loss, ensuring the model isn't overfitting. For more details, check our documentation docs.ultralytics.com/. 😊
@Nico-si3rf Жыл бұрын
Hello, does anyone know what the last part of this code is: "!yolo task=detect mode=predict model=/content/runs/detect/train/weights/best.pt conf=0.5 source{dataset.location}/t"? It was shown at 5:28. For context, I want to test it on an image folder that contains 1600 JPG images.
@Ultralytics Жыл бұрын
The code line for detection is provided below:
```
!yolo task=detect mode=predict model="/content/runs/detect/train/weights/best.pt" conf=0.5 source="path/to/testimages/folder"
```
Regards.
@nikhild1946 Жыл бұрын
Hey, thanks for the info, I appreciate it. But I'm using the VS Code environment; could you help me run this Google Colab code in VS Code?
@Ultralytics Жыл бұрын
What challenges are you encountering? I believe there shouldn't be many issues, as the commands are likely to remain the same!
@hemanthreddy2485 Жыл бұрын
Hey, I have a doubt: I have a dataset of images with image IDs and a corresponding text file that has each ID and its class. Now, how do I train the YOLOv8 model? Can you make a video on that?
@Ultralytics Жыл бұрын
If you have an annotated dataset, you can proceed by following the instructions outlined in our documentation to train the Ultralytics YOLOv8 model with your custom dataset. To access further details, please refer to our training documentation available at: docs.ultralytics.com/modes/train/
@LeonZZ Жыл бұрын
Hi! Is the YOLOv8 label format the same as YOLOv5's? Can I use "Make Sense" to label my images?
@Ultralytics Жыл бұрын
@LeonZZ! The annotation format for Ultralytics YOLOv8 is the same as the annotation format of Ultralytics YOLOv5. Regards, Ultralytics Team!
@AuroraRusso-r9s Жыл бұрын
Hello, I have a problem. I have a PC with an AMD Radeon TM Graphics card. I occasionally have problems running the nvidia-smi command; I mean, sometimes I can run it and sometimes I get this error: /bin/bash: line 1: nvidia-smi: command not found. Can you tell me how I can solve this?
@Ultralytics Жыл бұрын
The `nvidia-smi` command is specifically designed to interact with NVIDIA's GPU hardware and drivers. It won't work with an AMD Radeon graphics card because they are fundamentally different architectures supported by different software stacks. Therefore, it's not supposed to be installed or operational on a machine with an AMD graphics card, unless you also have an NVIDIA card installed in the same system. The intermittent availability of `nvidia-smi` may indicate multiple things:
1. **Path Issue**: If you have both AMD and NVIDIA cards and you installed NVIDIA drivers at some point, then the command might not be in your system's PATH. Check the installation directories and add them to the PATH environment variable, if needed. To temporarily add the path to the current session:
```bash
export PATH=$PATH:/path/to/nvidia-smi/directory
```
To permanently add the path, add the above line to your `.bashrc` or `.zshrc` file.
2. **Environment Issue**: You might be using different shell sessions, some of which might have access to `nvidia-smi` if you sourced specific environmental settings.
3. **Driver Installation**: If you had an NVIDIA card before and then switched to AMD, it's possible that the NVIDIA drivers are only partially uninstalled.
4. **Virtual Environments**: If you are using virtual environments, make sure that the environment where `nvidia-smi` is accessible is activated.
To diagnose further, you can run the following:
- Find out if `nvidia-smi` is installed:
```bash
which nvidia-smi
```
If this returns a path, then `nvidia-smi` is installed on your system.
- Check your environment variables:
```bash
echo $PATH
```
Make sure the directory containing `nvidia-smi` is in there.
If you are using an AMD card and have no need for NVIDIA tools, I'd recommend that you ignore the `nvidia-smi` command. For AMD cards, there are other tools for monitoring and managing your GPU, such as `radeontop` for Linux. If you believe you should have `nvidia-smi` because you also have an NVIDIA card in your system, then you should check your NVIDIA driver installation.
@AuroraRusso-r9s Жыл бұрын
Thank you very much, I solved the problem. Except that when I deploy on Roboflow, it gives me this error: ModuleNotFoundError: No module named 'ultralytics.utils', even though I already installed ultralytics version 8.0.134, which it had previously requested. @@Ultralytics
@Ultralytics5 ай бұрын
Glad you solved the initial issue! For the `ModuleNotFoundError: No module named 'ultralytics.utils'` error, it seems like there might be an issue with the installation or environment setup. Here are a few steps to troubleshoot:
1. Ensure Correct Installation: Verify that Ultralytics is correctly installed. You can reinstall it using:
```bash
pip install ultralytics==8.0.134
```
2. Check Environment: Make sure you are in the correct Python environment where Ultralytics is installed. You can check installed packages with:
```bash
pip list
```
3. Verify Import: Try importing the module in a Python shell to see if it works:
```python
import ultralytics.utils
```
4. Roboflow Integration: If the issue persists specifically with Roboflow, ensure that your deployment script or environment is correctly set up. You might need to check Roboflow's documentation or support for any specific requirements.
If the problem continues, please provide more details about your deployment setup on Roboflow. This will help in giving more precise guidance.
@masyithahfarid4492 Жыл бұрын
Hello, thanks for the video. I would like to ask: the tutorial provides the mAP value for the validation set; how can I calculate the mAP value for the test set to evaluate the final performance of the model?
@Ultralytics Жыл бұрын
You can substitute the validation data with the test data in the data.yaml file and initiate the validation process. This will yield the Mean Average Precision (mAP) for the test data.
@farrugiamarc07 ай бұрын
If you use the model.val() method, it takes a parameter split="test" which can be used to test on a different split. In this example there is a test split directory defined in the yaml file. I think that it may also work in the CLI.
@Ultralytics5 ай бұрын
Absolutely! You can use the `split="test"` parameter to evaluate the model on the test set. Here's how you can do it:
Python:
```python
from ultralytics import YOLO

# Load the model
model = YOLO("yolov8n.pt")

# Validate on the test set
model.val(data="path/to/data.yaml", split="test")
```
CLI:
```bash
yolo val data=path/to/data.yaml split=test
```
For more details, check out the Ultralytics Modes Documentation docs.ultralytics.com/modes/.
@razaq_moch11 ай бұрын
Thanks for the video, but I think it's not a complete explanation of how and why we use that. I had a problem running the nvidia-smi GPU step and setting up the training data I collected from Roboflow, so the training did not work for me. Please let me know how to figure that out. Any chat is welcome for a more detailed explanation. Thanks 😊
@Ultralytics11 ай бұрын
We apologize for any inconvenience caused. Could you kindly provide the error logs you encountered? Additionally, could you specify the operating system you are using? These details will assist us in pinpointing the error more effectively. Thanks Ultralytics Team!
@masyithahfarid4492 Жыл бұрын
Hello, thanks for the informative video. Could you suggest how I can do hyperparameter tuning for a YOLOv8 model?
@Ultralytics Жыл бұрын
While training, you have the option to configure hyperparameters such as learning rate, weight decay, etc. You can find detailed information in our documentation: docs.ultralytics.com/guides/hyperparameter-tuning/#file-structure We trust that this information will be beneficial. Feel free to reach out if you have any further questions. Thanks, Ultralytics Team!
@masyithahfarid449211 ай бұрын
Great! thanks a lot@@Ultralytics
@Ultralytics5 ай бұрын
You're welcome! 😊 Happy training with YOLOv8! If you have any more questions, feel free to ask.
@markraymundo388910 ай бұрын
Hello! What does it mean when I get this error: FileNotFoundError: [Errno 2] No such file or directory: 'runs/detect/train/weights'?
@Ultralytics10 ай бұрын
This implies that the path you're mentioning does not contain a 'weights' folder.
@markraymundo388910 ай бұрын
@@Ultralytics I am getting this error on this line: !yolo task=detect mode=train model=yolov8l.pt data={dataset.location}/data.yaml epochs=20 imgsz=1920
@Ultralytics10 ай бұрын
Sure, please ensure that `dataset.location` points to the folder containing your `data.yaml`; otherwise, it will result in an error.
@spotnuru837 ай бұрын
Just amazing, guys. Thank you for the training; I hope I can do this from now on.
@Ultralytics7 ай бұрын
We are glad to hear your feedback. Thank you!
@spotnuru837 ай бұрын
@@Ultralytics Can you suggest any open-source tool for labeling that doesn't give much trouble when doing the labeling?
@Ultralytics7 ай бұрын
@@spotnuru83 You can use the labelImg tool: github.com/HumanSignal/labelImg
@spotnuru837 ай бұрын
@@Ultralytics I am having problems installing this. This is one of the major problems for people learning AI and ML: most of the time the installation itself does not work. Can you create some resolutions or steps to be followed?
@Ultralytics7 ай бұрын
Yes, we will create the tutorial on this soon :) Thanks Ultralytics Team!
@it_vaibhavchopade4187 Жыл бұрын
Can I get the Colab file used in this video? I am getting an error at the second-to-last stage.
@Ultralytics Жыл бұрын
Please see colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb
@caiohenriquemanganeli98067 ай бұрын
Thank you for the video. I just would like to ask: it is impossible to see the line of code where you run prediction (just before the "import glob..."). Could you kindly write down that line here?
@Ultralytics7 ай бұрын
Sure, the command is mentioned below :)
!yolo task=detect mode=predict model="path/to/best.pt" source="path/to/image.png"
Thanks, Ultralytics Team!
@Jolle_Gaming Жыл бұрын
Hello, thanks for the very informative tutorial. I was wondering, where are we using the annotated data/labels? It seems to me, as a beginner, that we are only training on the images.
@Ultralytics Жыл бұрын
Supervised learning is the approach YOLOv8 adopts, necessitating not only images but also labels (annotations) for effective training. Thanks!!!
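For reference, the conventional YOLO dataset layout places the label .txt files in a labels/ folder that mirrors images/ (the folder names below are illustrative); the trainer pairs each image with its label file automatically:
```
dataset/
├── images/
│   ├── train/      # .jpg / .png files
│   └── val/
└── labels/
    ├── train/      # one .txt per image, same file name
    └── val/
```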
@Reesha-05 Жыл бұрын
Sir, can you tell us how you got the weights file? Did you get it while exporting the dataset from Roboflow?
@Ultralytics Жыл бұрын
When training the model in Google Colab, you'll find the "Documents" section in the left sidebar. Within this section, a "runs" folder will automatically be generated, when training starts. This "runs" folder will contain all the training files, including model weights, the F1 Curve, and the Precision and Recall Curve.
@Reesha-056 ай бұрын
@@Ultralytics Thank you, sir!!!
@Ultralytics5 ай бұрын
You're welcome! 😊 If you have any more questions, feel free to ask. Happy training! 🚀
@DaTomy11 ай бұрын
Could you provide the full code used in the video?
@Ultralytics11 ай бұрын
Certainly, you can follow our prediction documentation section, where each code snippet is accompanied by comprehensive descriptions and details. Visit docs.ultralytics.com/modes/predict/ for more information. Best regards, Ultralytics Team
@chakib2378 Жыл бұрын
Is it required to format via Roboflow? I already have my data and labels; is it not possible to run YOLOv8 by doing the formatting myself?
@Ultralytics Жыл бұрын
Certainly, you have the option to structure the dataset independently and subsequently fine-tune the model using Ultralytics YOLOv8. Simply upload the dataset to Google Drive and then mount it in Google Colab. Thanks Ultralytics Team!
@VlogWithShan1 Жыл бұрын
Dear sir, can you provide the Google Colab notebook, if possible?
@Ultralytics Жыл бұрын
The Colab notebooks are currently not available; we have plans for these in early 2024! But you can get the code from our docs: docs.ultralytics.com/modes/predict/#key-features-of-predict-mode Thanks, Ultralytics Team!
@omerkaya5669 Жыл бұрын
How can I change hyperparameter values? For example, I want to change the learning rate or weight_decay value.
@Ultralytics Жыл бұрын
You have the flexibility to modify these parameters by including arguments within the training command. For instance:
Python:
"""
from ultralytics import YOLO
model = YOLO('yolov8s.pt')
results = model.train(data="coco.yaml", lr0=0.01, weight_decay=0.0005)
"""
CLI:
"""
yolo train model="yolov8s.pt" data="coco.yaml" lr0=0.01 weight_decay=0.0005
"""
This allows you to customize the training process by adjusting the values as needed.
@omerkaya5669 Жыл бұрын
Thank you very much for the information. How can we do it through Colab?@@Ultralytics
@Ultralytics5 ай бұрын
You're welcome! To change hyperparameters in Google Colab, you can use the same approach as in Python. Here's a quick example:
```python
from ultralytics import YOLO

# Load a pretrained model
model = YOLO('yolov8s.pt')

# Train the model with custom hyperparameters
results = model.train(data="coco.yaml", lr0=0.01, weight_decay=0.0005)
```
Just run this code in a Colab cell, and it will train your model with the specified learning rate and weight decay. For more details, check out our training documentation: docs.ultralytics.com/modes/train/ 🚀
@rahaf.r831811 ай бұрын
Could you please provide the Google Colab link that you used? I can't find it.
@Ultralytics11 ай бұрын
The release of Google Colab for this module is pending; however, we have plans for its implementation and will share it soon. Stay tuned for updates. Thanks, Ultralytics Team!
@vincegallardo1432 Жыл бұрын
Hello, good morning. I just wanted to ask how I can compile YOLOv8 to use it offline. Is it possible to compile it with TensorFlow? If yes, can I PM you? Thank you in advance.
@Ultralytics Жыл бұрын
Yes, you can utilize the YOLOv8 model on your local machine by following the commands below:
git clone github.com/ultralytics/ultralytics
cd ultralytics
python setup.py install
@RoyMathew-h1g11 ай бұрын
Can I get the Colab notebook shown in this video?
@Ultralytics11 ай бұрын
Certainly, you can access the notebooks by visiting the following link: github.com/ultralytics/ultralytics?tab=readme-ov-file#notebooks Thanks Ultralytics Team!
@pragnesh_kumar.p11 ай бұрын
I am trying to train the existing YOLO model on my dataset with this code:
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
print("done")

# Train the model
results = model.train(data='{path_to_dataset}/data.yaml', epochs=10, imgsz=640)
but I am unable to see it running for some reason. I have downloaded the dataset using Roboflow's API key.
@Ultralytics11 ай бұрын
Could you kindly provide the error logs you've come across? Sharing these logs will assist us in gaining a better understanding of the issue. Thanks, Ultralytics Team!
@pragnesh_kumar.p11 ай бұрын
There is no error. I have an RTX 3050 Ti in my laptop. I created a virtual environment and brought in all the required libraries. I selected the nano-sized YOLO for my project. I got my dataset ready from the Open Images dataset, and now I just ran this code after making sure the folder structure for the data is correct. Like I said before, once I run this, it just shows a * on the cell that's running, but no output is shown even though I waited for 20 minutes. Do you think it's the lack of a better GPU on my laptop? @@Ultralytics
@Ultralytics11 ай бұрын
@@pragnesh_kumar.p It seems like the issue is related to GPU drivers; can you please try updating your GPU drivers?
@robindietz6252Ай бұрын
Thanks for this video! It looks like the colab notebook was updated for YOLOv11 which has different commands. Is there a custom object detection notebook available for v11?
@UltralyticsАй бұрын
You're welcome! As of now, YOLOv11 documentation and resources are continually evolving. For the latest on training custom object detection models with YOLOv11, you can keep an eye on the Ultralytics GitHub repository github.com/ultralytics/ultralytics and our YOLO11 documentation docs.ultralytics.com/models/yolo11/. If there's a specific notebook update, it will likely be shared there. Stay tuned! 😊
@mahdis_rahmani Жыл бұрын
Hi. Is this process a "Transfer Learning" process? Are we fine-tuning the pretrained model or are we training it from scratch? How can I fine-tune the pretrained YOLOv8 model and add a few layers so that it will be able to detect a 3-class dataset?
@Ultralytics Жыл бұрын
Yes, this process is called transfer learning. You have the option to either fine-tune the model starting from pretrained weights or train it from scratch. When you're fine-tuning the model for a specific task involving 3 classes, there's no need to modify the model's internal layers. Simply run the training using a dataset that contains annotations for those 3 classes, and the model will automatically adapt its detection head and configuration to accommodate these classes.
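For example, a minimal fine-tuning sketch (the data.yaml path is a placeholder):
```python
from ultralytics import YOLO

# Start from COCO-pretrained weights (transfer learning) rather than from scratch
model = YOLO("yolov8n.pt")

# The head is rebuilt automatically for the 3 classes declared in your data.yaml
model.train(data="path/to/3class-data.yaml", epochs=100, imgsz=640)
```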
@r.vazamantazakka5908 Жыл бұрын
Hi, I am using YOLOv8 to run prediction on a video. However, the model returns an output video in .avi format. Can I have the output video in MP4? If the answer is yes, how can I do that?
@Ultralytics Жыл бұрын
The 'avi' codec is commonly used in Ultralytics YOLOv8. If you wish to make a change, you can perform inference and save the results using a custom OpenCV writer. In YOLOv8, the video writing module is located at the link: github.com/ultralytics/ultralytics/blob/main/ultralytics/engine/predictor.py#L208.
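If you prefer not to modify the predictor, a rough sketch of a custom writer could look like this (the paths, codec, and fps handling are assumptions you would adapt to your setup):
```python
import cv2
from ultralytics import YOLO

model = YOLO("path/to/best.pt")                      # your fine-tuned weights (placeholder path)
cap = cv2.VideoCapture("path/to/input.mp4")
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS) or 30
out = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame)           # run inference on the frame
    out.write(results[0].plot())     # write the annotated frame to the MP4 file

cap.release()
out.release()
```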
@mahtabniakan216411 ай бұрын
Hello. Thank you for your helpful video. I have a question about the dataset. I already have a dataset and trained my YOLOv5 model using that data perfectly. Can I use this dataset for training my YOLOv8 again?
@Ultralytics11 ай бұрын
Certainly, you can utilize the same dataset for YOLOv8. The annotation format remains consistent between YOLOv5 and YOLOv8. Best regards, Ultralytics Team
@ue4152 Жыл бұрын
Hello, how can I find the link for the colab files in these tutorials?
@Ultralytics Жыл бұрын
Google Colab notebooks are currently not supported; however, support for them will be available soon. Thanks Ultralytics Team
@WavaDev Жыл бұрын
Where is the Google Colab Notebook for this video?
@Ultralytics Жыл бұрын
We are currently developing Google Colab Notebooks, and they will be made accessible in the near future. Thank you for your patience.
@WavaDev Жыл бұрын
@@Ultralytics No problem, I am now using Ultralytics Hub.
Perfect! That's the right link to the Google Colab Notebook. Happy training! 🚀 If you need more details, check out our Google Colab integration guide docs.ultralytics.com/integrations/google-colab/.
@1TreukFlyyy9 ай бұрын
What is the use of the test folder? Why do some datasets not have any test folder? (I use Autodistill to generate a dataset and it makes train and val folders, but no test folder.)
@Ultralytics9 ай бұрын
While the test folder isn't mandatory, its inclusion can enhance the accuracy of results during model testing. Although we're not familiar with third-party tools like autodistill, we recommend utilizing models such as SAM for dataset creation. Thank you.
@1TreukFlyyy9 ай бұрын
@@Ultralytics Do images in the test folder need to be annotated? Or are they only used for testing predictions/detections and getting visual cues of how the model detects?
@Ultralytics5 ай бұрын
Images in the test folder typically don't need annotations. They're mainly used to see how well the model performs on unseen data and to get visual cues of its predictions. For more details, check out our Model Testing Guide docs.ultralytics.com/guides/model-testing/.
@Nummi31 Жыл бұрын
Hi, I have a question. What if I add negative pictures with no objects and empty txt files to this exact model in Google Colab? Will they be skipped or trained on as negative images? Thank you. This answer is really important for me.
@Ultralytics Жыл бұрын
When you add negative pictures with no objects and empty TXT label files for training an Ultralytics YOLOv5 or YOLOv8 model, these pictures are used as background images during training: they contribute no object labels, but they help the model reduce false positives. It's still crucial to provide a balanced dataset with plenty of positive (object-containing) examples for effective training. Thanks
@Nummi31 Жыл бұрын
Thank you so much for your quick answer!!@@Ultralytics
@Ultralytics5 ай бұрын
You're welcome! 😊 If you have any more questions, feel free to ask. Happy training with YOLOv8! 🚀
@masyithahfarid449211 ай бұрын
Hello Ultralytics team, I would like to ask: during the training process, is the mAP value shown for each epoch computed on the training set or the validation set? Thank you.
@Ultralytics11 ай бұрын
The mAP displayed during the training process pertains to the validation data. Regards, Ultralytics Team!
@romroc627 Жыл бұрын
Thanks for this interesting video. Can you make a video about model.tune() ?
@Ultralytics Жыл бұрын
This is a great idea! While we work on the video please see our Tune docs at docs.ultralytics.com/guides/hyperparameter-tuning/
@r.vazamantazakka5908 Жыл бұрын
Hi, is it possible to use the model for a tracking task on a video after being fine-tuned to our custom dataset? If yes, how do we accomplish that?
@Ultralytics Жыл бұрын
Certainly, the YOLOv8 object detection model that you've fine-tuned for your specific dataset can indeed be used for tracking purposes. Here's the workflow:
```
model = YOLO('path/to/fine-tuned-model.pt')  # Load your fine-tuned model
results = model.track(source="kzbin.info/www/bejne/gn_agHeAjcipqpY", show=True)  # Perform tracking with the default tracker
results[0].plot()
```
For more detailed information, you can refer to the Ultralytics YOLOv8 object tracking documentation at the mentioned link: docs.ultralytics.com/modes/track/
@r.vazamantazakka5908 Жыл бұрын
@@UltralyticsOh I see. Thank you for the answer!
@Ultralytics5 ай бұрын
You're welcome! 😊 If you have any more questions, feel free to ask. Happy tracking! 🚀
@WaiTheng7 ай бұрын
Nice video. However, may I ask: is it possible to create a model that can differentiate diseases? For example, from different patterns on a certain plant's leaves, the system could identify its disease.
@Ultralytics7 ай бұрын
Yes, it's possible. You will need to fine-tune the Ultralytics YOLO models on disease data for 100 or more epochs. Once training finishes you can easily detect different diseases based on their features.
@P1yushq6 ай бұрын
Is the YOLOv8 annotation format `cls top left bottom right` or `cls x_center y_center width height`? In this video, at 2:36, it's said to be the first, but I suspect it is the second one.
@Ultralytics6 ай бұрын
Hi there! 👋 Great question! YOLOv8 uses the format `cls x_center y_center width height` for annotations. If you have any doubts, you can always refer to the Ultralytics documentation docs.ultralytics.com for more details. Make sure you're using the latest versions of the relevant packages to avoid any issues. If you need further assistance, feel free to ask! Happy training! 🚀
@piyush-hr4nl6 ай бұрын
Thanks a lot for the confirmation, I was confused by what the gentleman said... cheers
@Ultralytics5 ай бұрын
You're welcome! Glad I could help clear that up. Cheers and happy annotating! 🎉😊
@rahaf.r831811 ай бұрын
Can the number of epochs affect the accuracy of the model? If so, what is the best number of epochs for YOLOv8m? I want maximum accuracy.
@Ultralytics11 ай бұрын
While the number of epochs can impact the model's accuracy, it is not always essential. Sometimes, an increase in epochs can lead to overfitting of the model. Thanks Ultralytics Team!
@Entertainment.x06 Жыл бұрын
Sir, I am facing this error: RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 15 but got size 0 for tensor number 1 in the list. What should I do about this?
@Ultralytics Жыл бұрын
Please raise a bug report at github.com/ultralytics/ultralytics for support
@fatmanursefer14829 ай бұрын
Hi, I collected data with video to use in my project. Can I use this video to train my model or do I need to train the model using only photos? If I can use it, how should I label the data in this video? I would appreciate it very much if you could help me on this issue.
@Ultralytics9 ай бұрын
To train the Ultralytics YOLOv8 model on custom data, you'll need to gather various images and annotate them. Subsequently, you can utilize these images to fine-tune the model for custom data. Thank you.
@feehe-b7g Жыл бұрын
Hi, thanks for the video. In your labels it's 0, 1, 2, 3, but in data.yaml it says Cup and the categories of these cups. Why is that?
@Ultralytics Жыл бұрын
@user-zg8vy1vh5t, data.yaml includes 6 classes labeled:
- Cocio cup
- Cup
- Halloween Cup
- Hand-painted Cup
- White Cup
- cup
For additional details, please refer to the data.yaml section in the video at 2:50 (kzbin.info/www/bejne/gn_agHeAjcipqpY).
@nicholasbaronbramantyo8269 Жыл бұрын
Hello, what a great explanation! I want to ask: where can I see a brief explanation of the result graphs? And also, how did you calculate the accuracy of the model?
@Ultralytics Жыл бұрын
Upon training Ultralytics YOLOv8 on custom data, it will generate F1 score, mAP, precision, and recall values at the end. These metrics can be utilized to compute the model's accuracy.
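For example, a rough sketch of retrieving these metrics programmatically after training (the weights and data.yaml paths are placeholders):
```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")                # trained weights (placeholder)
metrics = model.val(data="path/to/data.yaml")  # run validation
print(metrics.box.map)                         # mAP50-95
print(metrics.box.map50)                       # mAP50
print(metrics.box.mp, metrics.box.mr)          # mean precision, mean recall
```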
@NicolaiAI Жыл бұрын
Thanks a lot!
@Ultralytics5 ай бұрын
You're welcome! If you have any more questions, feel free to ask. Happy training! 🚀
@romesh1832 Жыл бұрын
Can I do this on my laptop? I don't have an NVIDIA graphics card though.
@ashiq2786 Жыл бұрын
Colab uses the cloud; you can run this inside your browser.
@Ultralytics Жыл бұрын
Yes, you can utilize Google Colab for training purposes without the need for a dedicated system GPU. Google Colab does offer GPU support, albeit with some limitations. During that time, you can efficiently train the YOLOv8 model.
@romesh1832 Жыл бұрын
thank you @@Ultralytics
@romesh1832 Жыл бұрын
thanks @@ashiq2786
@Ultralytics5 ай бұрын
You're welcome! If you have any more questions, feel free to ask. Happy training! 😊
@kareemasg82410 ай бұрын
That's amazing, but I have a question: what is the prediction code here?
@Ultralytics10 ай бұрын
The code for predictions and comprehensive information can be found in our documentation, accessible at: docs.ultralytics.com/modes/predict/
@henryjones6627 Жыл бұрын
Hello, I was wondering if it is possible to train a model on a certain dataset and, after that training has been completed, to train the model on another dataset on top of the first one?
@Ultralytics Жыл бұрын
Certainly, it is feasible. However, if the second dataset contains different classes, the second training will overwrite the classes from the first.
@jplockport19 ай бұрын
Hey, how do I use the weights I trained with in Colab? Where do I put them, e.g. what file path?
@Ultralytics9 ай бұрын
You can conveniently upload the file to Google Drive and specify the path to the model during prediction. The code will then automatically execute inference using the model you provided. Thanks
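For example, a rough sketch in Colab (the Drive path is a placeholder for wherever you uploaded the weights):
```python
from google.colab import drive
from ultralytics import YOLO

drive.mount("/content/drive")                      # make your Google Drive files visible in Colab
model = YOLO("/content/drive/MyDrive/best.pt")     # placeholder path to the uploaded weights
results = model.predict("path/to/image.jpg", save=True)
```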
@anandukc47098 ай бұрын
How do I decide the number of epochs for training? Is there a criterion for choosing the number of epochs?
@Ultralytics8 ай бұрын
There isn't a straightforward method for determining the optimal epochs for model training. It's necessary to assess both the training and validation metrics to determine the ideal number of epochs that align with your specific needs. Thanks, Ultralytics Team!
@masyithahfarid4492 Жыл бұрын
Hello, may I know how we can obtain the values of TP, TN, FP, and FN from the results of validation/testing?
@Ultralytics Жыл бұрын
To obtain the values of True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN) from the results of validation or testing, you can use the formulas below.
```
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 Score = 2 * (Precision * Recall) / (Precision + Recall)
Accuracy = (TP + TN) / (TP + TN + FP + FN)
```
Note: Ultralytics YOLOv8 is primarily used for object detection, so precision, recall, and F1 score may need to be adapted based on the specific evaluation requirements for object detection tasks. Ensure that your ground truth and predictions align with the format expected for YOLOv8 evaluations.
@mdsohrabakhtaremam95557 ай бұрын
Thanks for the informative video. I have a question: when I use either last.pt or best.pt, it only detects the one class it was trained on; since it was trained on cups, it only detects cups. And when I use the yolov8n.pt model, it is unable to detect cups, but besides cups it detects all 80 categories. Why is this happening, and what do I need to do if I want to detect all 80 categories along with the 1 cup category? Thanks once again.
@Ultralytics7 ай бұрын
Well, you are fine-tuning the model on the CUP dataset, so it will only detect CUP and no other class. On the other hand, the YOLOv8 pretrained models are trained on the COCO dataset (80 classes), which is why they detect 80 classes; but there is no CUP class inside the COCO dataset, so the model does not detect the CUP. If you want to detect the 80 COCO classes alongside the CUP class, you will need to merge the CUP dataset with the COCO dataset; after fine-tuning, the model will be able to detect all 80 COCO classes alongside the CUP class. Thanks, Ultralytics Team!
@mdsohrabakhtaremam95557 ай бұрын
But it would be a very large dataset after adding the 80 categories to the cup data, and it would take a lot of time to train. Is there no method to merge the two models, last.pt and yolov8n.pt, which are trained on two different datasets, one on cups and the other on the 80 COCO classes? If it's not possible, then where can I get the COCO dataset with 80 categories? Thanks for the above information 😊, and I am working on it sincerely.
@Ultralytics7 ай бұрын
Direct model merging is not supported. You can download the COCO dataset directly by following our docs: docs.ultralytics.com/datasets/detect/coco/#dataset-yaml
@vincegallardo1432 Жыл бұрын
Why, when I set 20 epochs, is it not allowed? I mean, it stops. Why is that?
@Ultralytics Жыл бұрын
Hi there, this situation may happen if you have a large dataset. In such cases, you may need to use Google Colab Pro. Google Colab's free plan supports a limited duration of training and limited GPU availability.
@vincegallardo1432 Жыл бұрын
Thank you very much. Appreciated. @@Ultralytics
@vincegallardo1432 Жыл бұрын
Thank you very much, I am doing a project right now and it's a great help. @@Ultralytics
@FernandaZ-u7c11 ай бұрын
@@vincegallardo1432 For the free version of Google Colab, you can use the CPU instead of the GPU when training your own detection model. It can continue the epochs; although it's a little slow, it will continue.
@Ultralytics5 ай бұрын
You're welcome! Glad to hear it's helping with your project. If you need more details on using Google Colab for training, check out our guide: Google Colab Integration docs.ultralytics.com/integrations/google-colab/. Happy training! 🚀
@omerkaya5669 Жыл бұрын
I have 1600 images. I divided my dataset into 580 training images and 20% validation. When training the model, why train it with the validation dataset?
@Ultralytics Жыл бұрын
Validation data is exclusively reserved for validation purposes and is not included in the training process. During validation, the algorithm assesses accuracy, mean average precision (MAP), recall, and precision by comparing the training outcomes with the validation dataset.
@wishyasin30488 ай бұрын
Hi, firstly thank you for teaching us. I have a project called "autonomous car project". I have a big problem with the lane tracking system. I want to solve this problem with YOLO. How can I solve it? How can I train a model with YOLO?
@Ultralytics8 ай бұрын
Sure! To solve your lane tracking problem with YOLO, follow these steps:
1. Gather a dataset of road scenes with lane markings.
2. Annotate the lane markings in your dataset.
3. Train a custom YOLOv8 model on the annotated data.
4. Evaluate the model's performance.
5. Deploy the model for real-time lane tracking in your autonomous car project.
@toyly28209 ай бұрын
The last prediction-mode line was not shown completely, and now I am left guessing.
@Ultralytics9 ай бұрын
The complete prediction line is mentioned below.
!yolo detect predict source="path/to/video.mp4" model="path/to/best.pt" conf=0.5
@muhammadbugaje78977 ай бұрын
Hello, the Google Colab notebook is not the one in the video. Please send the exact Google Colab link.
@helper_bot7 ай бұрын
It's the one made by Roboflow; I found it by searching "yolov8 custom object detection colab".
@Ultralytics7 ай бұрын
Thanks for sharing the feedback! Well, we regularly update the modules; the Colab provided in the video description is updated, and it's also the official Colab notebook that you can follow to do object detection using Ultralytics YOLOv8.
@Ultralytics2 ай бұрын
Sure! You can find the exact Google Colab notebook used in the video here: Colab Notebook colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb. Let me know if you have any other questions! 😊
@jaysawant3097 Жыл бұрын
In runs/detect, I got no folder named predict!
@Ultralytics Жыл бұрын
Yes, you will need to use the `save=True` argument with the prediction command. Thanks, Ultralytics Team!
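For example, a minimal sketch (paths are placeholders):
```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")
# save=True writes the annotated output under runs/detect/predict/
model.predict("path/to/image.jpg", save=True)
```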
@hsu65467 ай бұрын
Thanks for your video and code. Is it possible to use your weights and add my own labels? For example, your weights have 85 classes and I want to add two new classes (apple & orange); in this case, is it possible to end up with 87-class weights? If so, can I ask how to do it? I really appreciate your video and review!
@Ultralytics7 ай бұрын
If you want to add more classes to the COCO dataset, e.g. apple & orange, you will need to merge the annotations of your 2 classes into the COCO dataset; then you will need to fine-tune the model, which will take some time. Once done, you can use the model file to detect the COCO classes alongside your 2 custom classes. Thanks, Ultralytics Team!
@mwaaqaas10 ай бұрын
Sir, is there any way to see each layer's output of the YOLOv8 model during training on a few images, to understand how YOLOv8 works?
@Ultralytics10 ай бұрын
Throughout training, you won't have visibility into every layer unless you modify this functionality. By default, Ultralytics provides a model summary before commencing the first epoch and a validation summary at the end of training, accompanied by various resulting metrics.
@mwaaqaas10 ай бұрын
@@Ultralytics I am new to DL and YOLO. To understand the workings of each layer, do you have any idea which approach is better than the one above?
@Ultralytics5 ай бұрын
Welcome to the world of Deep Learning and YOLO! To understand the workings of each layer, you can visualize feature maps and intermediate outputs. A good starting point is to use hooks in PyTorch to capture and visualize these outputs. For more detailed guidance, check out our model evaluation insights docs.ultralytics.com/guides/model-evaluation-insights/. Happy learning! 🚀
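As a rough sketch of the hook idea (the layer index and image URL are chosen only for illustration):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
feature_maps = {}

def hook(module, inputs, output):
    # Save the intermediate feature map produced by this layer
    feature_maps[module.__class__.__name__] = output.detach()

# Register a forward hook on one internal layer (index 4 is arbitrary here)
model.model.model[4].register_forward_hook(hook)
model("https://ultralytics.com/images/bus.jpg")
print({name: fm.shape for name, fm in feature_maps.items()})
```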
@jalaludinmusawi8993 Жыл бұрын
Hi, I get this error while trying to train my model:
RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable.
@Ultralytics Жыл бұрын
It appears that the issue may be related to the workers. You can consider setting the workers to either 0 or 1 during the training of your custom dataset. If the problem persists, you can find more information on this topic in the Ultralytics Issues by visiting the link: github.com/ultralytics/ultralytics/issues/2218
@Learning-tj1qg11 ай бұрын
At 3:54, how did you get a GPU? Was it by chance, or did you pay for it? Because my GPU_mem is 0G right now and it is extremely slow.
@Ultralytics11 ай бұрын
To enable the GPU runtime in Google Colab, you need to choose it manually, as the default setting is CPU. Simply click on "Runtime" in the menu bar, then "Change runtime type", and select a GPU. Best regards, Ultralytics Team!
@karthikkannan780311 ай бұрын
Also, how do you turn the camera on? I have the same code as you, but it is not opening by default. What is the command to do so?
@Ultralytics11 ай бұрын
If an external camera is connected, you may specify source=1; for a laptop's integrated camera, use source=0. For RTSP streams, you can utilize the rtsp URL. Thanks
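For example, a minimal sketch:
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# source=0 opens the laptop's built-in camera, source=1 an external one
model.predict(source=0, show=True)
```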
@rubelahmed34742 ай бұрын
Does fine-tuning a pre-trained model on a new dataset cause it to forget the original classes it was trained on? For example, if the model was initially trained on the COCO dataset (which includes 80 classes) and then fine-tuned on a smaller dataset with only six classes, will the model still retain its ability to recognize the original 80 classes? Or will it only perform well on the new six classes after fine-tuning?
@Ultralytics2 ай бұрын
Fine-tuning a pre-trained model on a new dataset can lead to "catastrophic forgetting," where the model may lose its ability to recognize the original classes. When you fine-tune on a smaller dataset with only six classes, the model tends to adapt to these new classes, potentially at the expense of the original 80 classes. To mitigate this, techniques like regularization or using a smaller learning rate can help balance learning new information while retaining old knowledge. For more details, you can explore the YOLO-World model documentation: YOLO-World docs.ultralytics.com/models/yolo-world/.
@tanmoysaha63046 ай бұрын
Hey, I can't connect to a GPU or TPU due to Google Colab limits... is there any other way to do this?
@Ultralytics6 ай бұрын
Hello! If you're facing limitations with Google Colab's GPU or TPU access, there are several alternatives you can consider:
1. **Kaggle Kernels**: Kaggle offers free GPU and TPU resources similar to Google Colab. You can easily switch your notebook to Kaggle and continue your work there.
2. **AWS SageMaker**: Amazon Web Services provides SageMaker, a managed service that allows you to build, train, and deploy machine learning models at scale. They offer free tier options and pay-as-you-go pricing.
3. **Google Cloud Platform (GCP)**: You can use GCP's AI Platform Notebooks, which provide managed Jupyter notebooks with access to GPUs and TPUs. GCP offers free credits for new users.
4. **Microsoft Azure Notebooks**: Azure also provides Jupyter notebooks with access to powerful GPUs. They offer free credits for new users as well.
5. **Local Setup**: If you have a powerful local machine with a compatible GPU, you can set up your environment locally using frameworks like TensorFlow or PyTorch.
Hope this helps! Regards, Ultralytics Team!
@AyomideKazeem-g7n10 ай бұрын
Hello, thank you very much for the video. When I tried using my Roboflow annotations as my dataset, I kept getting this error: FileNotFoundError: Dataset '/content/Field-Result-Detection-5/data.yaml' images not found. Can you kindly help?
@Ultralytics10 ай бұрын
Thank you for providing the error logs. The error indicates a potential issue with the correct specification of the data.yaml path. Please double-check the path. Once it's accurate, you should be able to train the model without encountering any error logs. Thanks Ultralytics Team!
@XenosKenosis Жыл бұрын
So I have this problem: I have two datasets, one without annotations and one with annotations, and I don't use the API key because in Roboflow I don't actually know how to input a dataset without annotations, like just completely clean without annotations. Can you give me a way to do that, or a suggestion? Edit: with the annotated one, I don't want to detect any object in the data, because what I'm trying to do is detect wallhack data, like a video game wallhack, so one image has the wallhack in it and the second one is without any cheats.
@Ultralytics Жыл бұрын
If you have the annotated dataset, you can train the model either on Google Colab or your local machine. The training procedures are well-detailed in our documentation, accessible at: docs.ultralytics.com/modes/train/
@XenosKenosis Жыл бұрын
@@Ultralytics What about the one with no annotations though? Do I just use it for testing or something, or should I also train on it so the model knows how to differentiate between the two datasets? Sorry, I know I should ask this on Stack Overflow or something, but thanks for the earlier suggestion.
@XenosKenosis Жыл бұрын
@@Ultralytics Problem solved, thanks. Turns out it was a matter of folder/data structure, lol. Thanks a lot, you guys actually helped me.
@Ultralytics Жыл бұрын
We are pleased to hear that your concern has been resolved. Thank you.
@Student_yet6 ай бұрын
Thanks for the video! How can I resume training? I have trained for 30 epochs and now want to start from 31, please help!
@Ultralytics6 ай бұрын
Thanks for watching! To resume training from where you left off, simply use the `resume=True` argument in your training command. Make sure you're using the latest versions of `torch` and `ultralytics`. For more details, check out the Ultralytics documentation docs.ultralytics.com/modes/train/#resuming-interrupted-trainings. If you encounter any issues, please share specific error messages or code snippets. Happy training! 🚀
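For example, a minimal resume sketch (the checkpoint path is a placeholder for your own run):
```python
from ultralytics import YOLO

# Point at the last checkpoint of the interrupted run and resume from there
model = YOLO("runs/detect/train/weights/last.pt")
model.train(resume=True)
```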
@Student_yet6 ай бұрын
@@Ultralytics Thank you for your response! It worked. Could you also help me with how to start from where I left off when the GPU time limit on Google Colab ends? How can I start or resume from where it stopped? I don't want to rerun all the CLI commands from the start. Please help.
@Ultralytics5 ай бұрын
Resuming from Colab is super easy using the link I shared above, or also even simpler with HUB at hub.ultralytics.com (just a click of a button :)
@alexred996310 ай бұрын
Is there a way to use 3D solid models, like the .step format, for training YOLO?
@Ultralytics10 ай бұрын
Officially, Ultralytics does not support 3D solid models. However, you can utilize our models for your custom project implementation. Thanks, Ultralytics Team!
@letongzhao57669 ай бұрын
Hello, how did you create your cup detection v2 dataset?
@Ultralytics9 ай бұрын
This dataset was generated by converting a video into frames and subsequently annotating the data in YOLO format using the labelImg software available at: github.com/HumanSignal/labelImg
@felixf4262 Жыл бұрын
Does someone have the Google colab link? Can't find it. :(
@Ultralytics Жыл бұрын
We're creating Google Colab notebooks, coming soon to our GitHub: github.com/ultralytics/ultralytics
Exactly! Here's the direct link to the Google Colab notebook: colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb. Happy training! 🚀
@daryladhityahenry10 ай бұрын
Hi! There's something I don't understand. Using Roboflow is free, but training only gets 3 credits. So when we train like in the video, we're not using Roboflow credits, right? And also, how many pictures are needed to train for a good result? Thanks.
@Ultralytics10 ай бұрын
To train the model on Roboflow, you'll require Roboflow credits. Alternatively, Ultralytics provides free training options on Google Colab or your local machine. Thanks Ultralytics Team!
@daryladhityahenry10 ай бұрын
@@Ultralytics LOL! Hahaha, yeah, thank you Ultralytics. Hi, I just want to ask another thing. I see many use cases and trainings. This is an idea, but can Ultralytics be trained to recognize a "mouse" (PC mouse), track its movement, and maybe also track whether it's a click or right click, etc., if we give some sign like: the mouse cursor becomes yellow on click and red on right click? Is it possible to track that with Ultralytics (if we train it)? If yes, do you have any estimate of how much data I need? Because a mouse is quite limited, it may only need 3 images, right? Hahaha. And if that low amount of data is used for training, how many epochs do I need? (Just an estimate, I know it still needs trial and error.) Thanks!
@Ultralytics10 ай бұрын
Training Ultralytics to track a PC mouse, including movement and clicks, is possible. For a diverse dataset, aim for more than just a few images - perhaps a few hundred. Start with around 50 epochs for training, adjusting as needed. Real-world testing and data augmentation can refine the model. Thanks & Good luck!
@daryladhityahenry10 ай бұрын
@@Ultralytics 🤯 Needing 100+ of the same mouse icon is quite weird, but okay. Maybe different backgrounds etc. will help it learn what a mouse actually is. Thank you Ultralytics :D:D
@Ultralytics5 ай бұрын
Exactly! Different backgrounds and angles will help the model generalize better. Best of luck with your project! 😊
@kuaranir24409 ай бұрын
Please, how do I decrease the font size of the text on the bounding boxes?
@Ultralytics9 ай бұрын
You can utilize the 'line_width' parameter to shrink the dimensions of the bounding boxes. For instance:
```
yolo detect predict source="path/to/image.jpg" line_width=2
```
For further details, you can refer to the variety of arguments supported for inference in our documentation: docs.ultralytics.com/modes/predict/#inference-arguments
@kuaranir24409 ай бұрын
@@Ultralytics thanks
@Ultralytics5 ай бұрын
You're welcome! If you have any more questions, feel free to ask. Happy coding! 🚀
As Google Colab is a server-based application, using the laptop camera directly in Google Colab is restricted due to privacy concerns. However, you can still showcase the camera feed by incorporating specific JavaScript code. Thanks Ultralytics Team!
@thetechmachine544610 ай бұрын
Where can I get the dataset?
@Ultralytics10 ай бұрын
You can access the datasets by visiting docs.ultralytics.com/, where you will find the download URL and detailed information for each dataset. Thanks!!!
@fsaudm5 ай бұрын
How does the model know where the labels are? The data.yaml only has the location of the images, correct?
@Ultralytics5 ай бұрын
Great question! The `data.yaml` file lists the locations of the image folders; the labels live in a parallel `labels/` directory with the same file names, and YOLO finds them automatically by swapping `images` for `labels` in each path. For more details, check out our documentation: docs.ultralytics.com/datasets/. 😊
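For example, a minimal data.yaml sketch (paths and class names are placeholders):
```yaml
path: /path/to/dataset      # dataset root
train: images/train         # training images; labels are read from labels/train
val: images/val             # validation images; labels are read from labels/val
names:
  0: cup
  1: mug
```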
@GaryZhou-x9h5 ай бұрын
Why is the default unzipped file structure from Roboflow not the same as the model requires? The default structure from Roboflow is train -> images; labels, but the YOLOv8 model requires a structure of images -> train; val; test (I am pretty sure I chose the correct version).
@Ultralytics5 ай бұрын
It sounds like there might be a mismatch in the dataset structure. YOLOv8 expects a specific format. You can reorganize your dataset to match the required structure. For more details on the correct dataset format, check our documentation here: docs.ultralytics.com/datasets/. If you need further assistance, feel free to ask! 😊
@GaryZhou-x9h5 ай бұрын
@@Ultralytics Thanks for the helpful information. Just one more question: I noticed Roboflow automatically generates a model evaluation result using the Roboflow 3 object detection model once I generate a customized dataset. Is it based on YOLOv8? And is there a way to download the trained model, so that we don't have to train it ourselves?
@Ultralytics5 ай бұрын
Yes, it's likely that their detection models are Ultralytics YOLOv8 models "under the hood", but I think they may not make it easy for you to download, as they probably want you to use the model through their platform.