Oriented Bounding Boxes (YOLOv8-OBB) Object Detection using Ultralytics YOLOv8 | Episode 21

8,603 views

Ultralytics

1 day ago

Join us for the 21st video of our series as we explore the capabilities of Ultralytics YOLOv8 Oriented Bounding Box (YOLOv8-OBB) models for developing projects and applications.
In this episode, we take a thorough look at the YOLOv8-OBB task, covering its practical applications and walking through the code implementation to build a comprehensive understanding.
➡️ Explore more: docs.ultralytics.com/tasks/obb/
➡️ Bilibili Video - shorturl.at/ajlo2
Key Moments
0:00 - Ultralytics YOLOv8-OBB Introduction
0:38 - YOLOv8-OBB Documentation
1:26 - YOLOv8-OBB Models
1:45 - YOLOv8-OBB Train
2:13 - YOLOv8-OBB Dataset Format
2:47 - YOLOv8-OBB Val & Predict
3:05 - YOLOv8-OBB Export
3:37 - YOLOv8-OBB Inference using Python
5:23 - YOLOv8-OBB Demo
5:46 - YOLOv8-OBB Use Cases
6:05 - Summary
Ultralytics ⚡ resources
- About Us - ultralytics.com/about
- Join Our Team - ultralytics.com/work
- Contact Us - ultralytics.com/contact
- Discord - ultralytics.com/discord
- Ultralytics License - ultralytics.com/license
YOLOv8 🚀 resources
- GitHub - github.com/ultralytics/ultral...
- Docs - docs.ultralytics.com/

Comments: 35
@Vahroc 7 days ago
Let's assume we crop a big image into smaller regions (like a sliding window), such that sometimes half a bounding box, or at least one corner of a box, sits outside the newly cropped image. When normalizing between 0 and 1, should we set the coordinates of the four points normally, so that a corner may have coordinates greater than 1 or less than 0? Or should we adapt these values to fall inside the cropped image? But then the box is not necessarily rectangular anymore. What is the best approach to train YOLO OBB under these conditions?
@Ultralytics 7 days ago
When training YOLO OBB on cropped images, it's best to normalize the coordinates of all four corners consistently, even if some values fall outside the [0, 1] range. This preserves the geometry of the rotated boxes. Make sure your data pipeline and loss function can handle out-of-bound coordinates. If you must clip, adjust the coordinates to fit within the cropped image while preserving box validity.
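The reply above can be sketched as a small helper. This is a minimal sketch, not part of the Ultralytics API: the function name `normalize_obb_corners` is ours, and the crop offset and size are assumed to be known for each tile.

```python
import numpy as np

def normalize_obb_corners(corners, crop_x, crop_y, crop_w, crop_h, clip=False):
    """Normalize four (x, y) OBB corner points into a crop's [0, 1] frame.

    corners: (4, 2) array in original-image pixel coordinates.
    Without clipping, values may fall outside [0, 1] when the rotated
    box extends past the crop boundary, as discussed above.
    """
    pts = (np.asarray(corners, dtype=float) - [crop_x, crop_y]) / [crop_w, crop_h]
    if clip:
        # Clipping keeps labels in-bounds but can distort the rotated box.
        pts = np.clip(pts, 0.0, 1.0)
    return pts
```

A box straddling the left edge of a crop at x=20 then yields corner x-values below 0 unless `clip=True` is passed.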
@afjamo 20 days ago
Cooool!
@Ultralytics 20 days ago
Thank you!!!
@m033372 4 months ago
Nice!
@Ultralytics 4 months ago
Thanks!
@frostscarlet 1 month ago
Does it use the same approach as yolov5-obb, which uses CSL for its OBB function?
@Ultralytics 29 days ago
Ultralytics YOLOv8 uses a completely different approach: ProbIoU (analogous to IoU for regular bounding boxes, but for rotated boxes) to regress the OBB. Thanks!
@sudhikrishnana9778 4 months ago
Can OBB-annotated images be mixed with a traditional bounding-box dataset for training? Will it affect the output performance?
@Ultralytics 4 months ago
Oriented bounding boxes are a separate concept that involves angle calculations for improved results. You cannot directly mix OBB labels with traditional bounding boxes, but with modifications you may be able to integrate them. Thanks, Ultralytics Team!
@TravelwithRasel. 20 days ago
Hey, as a learner I have a question: should we mention the name of the source in the last line, I mean inside the results row? Please clarify this for me.
@Ultralytics 20 days ago
Yes, you'll need to pass the path to the video file or live stream as the source argument, which will be used for inference. Thanks, Ultralytics Team!
@user-xq8gv9ep9c 2 months ago
How can I get the coordinates of the results predicted by a custom OBB model? If I use the non-OBB code, the result is None.
@Ultralytics 2 months ago
You can obtain the bounding box coordinates for the Oriented Bounding Box (OBB) task using the code below.
```python
from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors
import cv2

model = YOLO("yolov8n-obb.pt")
names = model.names

cap = cv2.VideoCapture("Path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break
    results = model.track(im0, persist=True, show=False)
    pred_boxes = results[0].obb
    annotator = Annotator(im0, line_width=3, example=names)
    for d in reversed(pred_boxes):
        c = int(d.cls)
        track_id = None if d.id is None else int(d.id.item())
        label = f"{names[c]}:{track_id}"
        box = d.xyxyxyxy.reshape(-1, 4, 2).squeeze()
        print("Bounding Box Coordinates:", box)
        # Color by class index so untracked boxes (id is None) don't crash
        annotator.box_label(box, label, color=colors(c, True), rotated=True)
    cv2.imshow("ultralytics", im0)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
Thanks!
@dalinsixtus6752 3 months ago
For the CLI command !yolo task=obb ..., how do I implement the same thing using model = YOLO('best.pt') and model.predict()? What is the argument for task=obb?
@Ultralytics 3 months ago
Oriented bounding boxes use the same inference parameters as the official YOLOv8 models. For more info: docs.ultralytics.com/modes/predict/#inference-arguments Thanks, Ultralytics Team!
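As a sketch of the Python equivalent of that CLI command (assuming `best.pt` is your trained OBB checkpoint and `image.jpg` a sample input; this is illustrative, not an official recipe):

```python
from ultralytics import YOLO

# Load the trained OBB weights; the OBB task is stored in the checkpoint,
# so there is no separate task argument to pass to predict().
model = YOLO("best.pt")

# predict() accepts the same inference arguments as the CLI (conf, imgsz, ...)
results = model.predict(source="image.jpg", conf=0.25)
print(results[0].obb)  # oriented boxes for the first image
```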
@dalinsixtus6752 3 months ago
@Ultralytics Is it possible to change the bounding-box color during live camera detection if a certain condition is satisfied after object detection? I.e., I need to change the color from green to red if a certain condition is proved false, using ultralytics.utils.....
@Ultralytics 3 months ago
Yes, that's achievable. In the annotator.box_label function you can set the color of the bounding boxes. For additional details, see our documentation at: docs.ultralytics.com/reference/utils/plotting/?h=box_la#ultralytics.utils.plotting.Annotator.box_label Thanks!
@dalinsixtus6752 3 months ago
@Ultralytics Thanks for the solution. How do I combine two models with different classes? I tried transfer learning but still didn't get the classes from the second model, and there is no module for ensembling YOLOv8. Can you suggest how to combine two models with different classes into a single model?
@zubairkhalid3209 4 months ago
Is there a way to quantify the number of pixels that define a bounding box (basically quantifying sizes from, say, microscopic images)? Also, is it possible to quantify the intensity of certain colors in a bounding box (just like a colorimeter)?
@Ultralytics 4 months ago
Yes, it's possible to quantify the number of pixels defining a bounding box in YOLOv8, which aids size quantification in microscopic images. You can also quantify color intensity within a bounding box, akin to a colorimeter. Note: by default, Ultralytics YOLOv8 does not include these features, but you can customize and add them for your specific use case. Thanks, Ultralytics Team!
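As a rough sketch of the custom post-processing described above, using plain NumPy on an axis-aligned crop of a detected box (the function name and box format are illustrative, not part of Ultralytics):

```python
import numpy as np

def box_stats(image, x1, y1, x2, y2):
    """Pixel count and mean per-channel intensity inside a box.

    image: (H, W, C) array, e.g. a frame read with cv2; box in pixel coords.
    Returns (number of pixels, per-channel mean intensity).
    """
    roi = image[y1:y2, x1:x2]
    n_pixels = roi.shape[0] * roi.shape[1]
    mean_intensity = roi.reshape(-1, roi.shape[-1]).mean(axis=0)
    return n_pixels, mean_intensity
```

With a known pixel-to-micron scale, the pixel count converts directly to physical area; the per-channel mean approximates a colorimeter reading.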
@user-di4eb8vc1i 4 months ago
How can a dataset be converted from the YOLOv8 annotation format to the YOLOv8 oriented-bounding-box format?
@Ultralytics 4 months ago
At the moment, there isn't a direct method to convert an object-detection dataset to OBB format. Our team is actively developing the OBB modules, and this conversion may appear on our documentation page soon: docs.ultralytics.com/ Thanks, Ultralytics Team!
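For plain axis-aligned labels, the conversion is mechanical: each normalized `class cx cy w h` line maps to the four-corner OBB format with zero rotation. A minimal sketch (the helper name is ours, not an Ultralytics utility):

```python
def xywh_to_obb(cls_id, cx, cy, w, h):
    """Convert a normalized YOLO detection label (class cx cy w h) to the
    OBB label format (class x1 y1 x2 y2 x3 y3 x4 y4), as an axis-aligned
    rectangle. Corners are listed clockwise from the top-left.
    """
    x1, y1 = cx - w / 2, cy - h / 2  # top-left
    x2, y2 = cx + w / 2, cy - h / 2  # top-right
    x3, y3 = cx + w / 2, cy + h / 2  # bottom-right
    x4, y4 = cx - w / 2, cy + h / 2  # bottom-left
    return [cls_id, x1, y1, x2, y2, x3, y3, x4, y4]
```

This only produces unrotated boxes, of course; recovering true orientations requires re-annotation.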
@user-cs6ni1oo6s 1 month ago
😍 Are there any visual heatmaps adapted for OBB object detection?
@Ultralytics 1 month ago
Heatmaps for OBB object detection are currently not supported. However, we plan to include this feature in upcoming releases. Thank you, Ultralytics Team
@dimitrispolitikos1246 4 months ago
Very nice work! A short question: which software tool do you suggest for annotating our dataset with oriented bounding boxes?
@Ultralytics 4 months ago
Thanks. To annotate a dataset in OBB format, we recommend the labelImg_OBB annotation tool, available at this GitHub link: github.com/heshameraqi/labelImg_OBB Best regards, Ultralytics Team
@dimitrispolitikos1246 4 months ago
@Ultralytics Thank you very much for the suggestion! I tried it in the past, but I think it returns the bounding boxes as "class_id, x, y, r, theta" rather than "class_index, x1, y1, x2, y2, x3, y3, x4, y4". I will double-check; maybe a conversion function will be needed. Warmest regards!!
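If the tool does emit center, size, and angle, a conversion like the following sketch produces the four corner points (angle in radians; the helper name is ours, and the exact field order of the tool's output should be verified first):

```python
import math

def rotbox_to_corners(cx, cy, w, h, theta):
    """Convert a rotated box (center, size, angle theta in radians) to its
    four corner points [(x1, y1), ..., (x4, y4)] by rotating the
    axis-aligned corner offsets around the center.
    """
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    offsets = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    return [
        (cx + dx * cos_t - dy * sin_t, cy + dx * sin_t + dy * cos_t)
        for dx, dy in offsets
    ]
```

Dividing the corner coordinates by the image width and height then gives the normalized eight-value YOLO OBB label.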
@truonggiang-227nguyen5 3 months ago
How can I get the coordinates of the bounding box? Thank you!
@Ultralytics 2 months ago
To obtain bounding box coordinates, access the output of the Ultralytics YOLOv8 model. It typically provides (x, y, width, height, conf, id) values; extract them from the detection results, adjusting the code to your model's output format. Thanks!
@dinihanafi5097 2 months ago
@Ultralytics Can you provide example code?
@Ultralytics 2 months ago
Sure, below is a code snippet for obtaining the coordinates of Oriented Bounding Boxes using Ultralytics YOLOv8.
```python
from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator, colors
import cv2

# Initialize YOLOv8 OBB model
model = YOLO("yolov8n-obb.pt")
names = model.names

# Open video file
cap = cv2.VideoCapture("Path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        break

    # Run prediction on each frame (persist applies only to track(), not predict())
    results = model.predict(im0, show=False)
    pred_boxes = results[0].obb

    # Initialize Annotator for visualization
    annotator = Annotator(im0, line_width=2, example=names)

    # Iterate over predicted oriented boxes and draw them on the frame
    for d in reversed(pred_boxes):
        box = d.xyxyxyxy.reshape(-1, 4, 2).squeeze()
        print("Bounding Box Coordinates:", box)
        annotator.box_label(box, names[int(d.cls)], color=colors(int(d.cls), True), rotated=True)

    # Display annotated frame
    cv2.imshow("ultralytics", im0)

    # Check for key press to exit
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

# Release video capture and close windows
cap.release()
cv2.destroyAllWindows()
```
Thanks!
@user-vo2kg3yp1j 15 days ago
Nice! Can you share your test video (ships.mp4)?
@Ultralytics 14 days ago
Sure, the video is available at: shorturl.at/gjvCE Thanks, Ultralytics Team!
@user-vo2kg3yp1j 14 days ago
@@Ultralytics thanks