Great video! I didn't have time to read the YOLO-World paper completely, or even test it, but with the video I can understand a lot of its architecture and its performance! Thank you, Peter, for explaining it in such a great way!
@Roboflow 5 months ago
pleasure to read comments like this!
@abdshomad 11 months ago
As always, the content is well delivered. Thank you for always sharing the knowledge 👍
@SkalskiP 11 months ago
my pleasure!
@wolpumba4099 8 months ago
*YOLO-World Explained: A Bullet List Summary with Timestamps*

*What is YOLO-World? (0:00)*
* A cutting-edge, zero-shot object detection model that's 20x faster than predecessors. (0:24)
* Uses a "prompt-then-detect" paradigm to achieve speed, encoding prompts offline and reusing embeddings. (2:26)
* Leverages a faster CNN backbone and streamlined architecture for increased efficiency. (2:57)
* Outperforms previous zero-shot detectors (like GroundingDINO) in terms of speed while maintaining accuracy. (2:12)

*Advantages of YOLO-World:*
* No need for custom dataset training for object detection. (0:42)
* Real-time video processing capabilities (up to 50 FPS on powerful GPUs). (9:22)
* Can incorporate color and position references in prompts for refined detection. (10:16)

*Limitations of YOLO-World (13:16):*
* Still slower than traditional real-time object detectors. (13:34)
* May be less accurate than models trained on custom datasets, especially in uncontrolled environments. (13:51)
* Can misclassify objects, particularly with low-resolution images or videos. (14:19)

*Using YOLO-World Effectively (5:33):*
* Experiment with different confidence thresholds for optimal results. (7:14)
* Utilize non-max suppression (NMS) to eliminate duplicate detections. (8:07)
* Filter detections based on relative area to remove unwanted large bounding boxes. (11:04)
* Combine with FastSAM or EfficientSAM for zero-shot segmentation tasks. (15:21)

*Beyond the Basics (15:08):*
* YOLO-World opens possibilities for open-vocabulary video processing and edge deployment. (15:10)
* Potential for advanced use cases like background removal, replacement, and object manipulation in video. (15:43)

I used Gemini 1.5 Pro to summarize the transcript.
@Roboflow 8 months ago
Curious, how did you do it?
@wolpumba4099 8 months ago
@@Roboflow I used the prompt "create bullet list summary: ". Then another prompt "add starting (not stopping) timestamps".
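The relative-area filtering mentioned in the summary above (11:04) can be sketched in plain NumPy. This is only an illustrative sketch: the [x_min, y_min, x_max, y_max] box layout matches what supervision returns, but the 0.5 threshold and the sample boxes are made-up values.

```python
import numpy as np

def filter_by_relative_area(xyxy: np.ndarray, image_wh: tuple, max_rel_area: float = 0.5) -> np.ndarray:
    """Keep boxes whose area is at most `max_rel_area` of the image area."""
    widths = xyxy[:, 2] - xyxy[:, 0]
    heights = xyxy[:, 3] - xyxy[:, 1]
    rel_area = (widths * heights) / (image_wh[0] * image_wh[1])
    return xyxy[rel_area <= max_rel_area]

boxes = np.array([
    [10, 10, 50, 50],    # small box: kept
    [0, 0, 640, 480],    # covers the whole frame: dropped
])
kept = filter_by_relative_area(boxes, image_wh=(640, 480))
print(len(kept))  # 1
```

In practice you would pass the model's detection boxes and the real frame size instead of the toy values.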
@uttamdwivedi7709 11 months ago
Great work!!! Could you provide a tutorial on how to train (fine-tune) this YOLO-World model on a specific type of data?
@Roboflow 11 months ago
I'll think about it. If enough people are interested, we could at least write a blog.
@elhadjmeguellati3031 11 months ago
Interested, and thanks for the very useful content @@Roboflow
@scottsharp142 11 months ago
Yes, would love to see this as well. Thanks for the great content.
@smartfusion8799 10 months ago
Yes please. Is it possible to run a fine-tuned/light version on an edge device?
@viniciusgardim5154 6 months ago
@@Roboflow do it, please
@LukasSmith827 11 months ago
the best to ever do it
@SkalskiP 11 months ago
haha you are too nice! But thanks!
@big_zzzzz 11 months ago
Priceless info!
@KarenWeissKarwei 11 months ago
Great video, informative and understandable. Thank you!
@yzamari 6 months ago
Great video! Very informative!
@sumukharaghavanm6466 11 months ago
Great solution for students. Thanks a lot!!!!
@Codewello 11 months ago
Awesome as always! I have learned a lot from you, especially about supervision. Also, I love the thumbnail. You look like you're saying 'come at me, bro' 😁😁
@Roboflow 11 months ago
Glad to hear it!
@elvenkim 11 months ago
Hi Pieter! Great delivery, love the final video on YOLO + SAM. May I check with you how we extract the coordinates of the bounding box?
@Roboflow 11 months ago
In my code just access detections.xyxy :)
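For anyone else wondering: detections.xyxy in supervision is an (N, 4) NumPy array holding one [x_min, y_min, x_max, y_max] row per detection. A minimal sketch of reading it, where the array below is made-up data standing in for a real result:

```python
import numpy as np

# stand-in for detections.xyxy from supervision
xyxy = np.array([
    [120.0,  40.0, 260.0, 200.0],
    [300.0, 150.0, 420.0, 310.0],
])

for x_min, y_min, x_max, y_max in xyxy:
    w, h = x_max - x_min, y_max - y_min
    print(f"box at ({x_min:.0f}, {y_min:.0f}), size {w:.0f}x{h:.0f}")
```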
@elvenkim 11 months ago
@@Roboflow many thanks Pieter!
@madhinib2515 a month ago
Thanks for the video. I'd like to know if there is any way to deploy YOLO-World with Roboflow Inference on edge devices such as mobile.
@nazaruddinnurcharis598 9 months ago
Good information. Can this YOLO be used to detect objects in real time using a camera? I am working on a project to use YOLO with real-time cameras on my farm to detect predators.
@froukehermens2176 11 months ago
Can you use YOLO-World + SAM to annotate images for training a (faster) object detector? (Or image segmentation, maybe even pose estimation?)
@Roboflow 11 months ago
Yes you can! Some time ago we showed how to do it with the Grounding DINO + SAM combo: kzbin.info/www/bejne/pXa0ioaqo6tloposi=JzsB_leYOXbGtiGL
@Amir-vn2wx 11 months ago
@@Roboflow This is awesome!
@richarddjarbeng7093 11 months ago
Cool tutorial. I have 2 questions. 1. Is there a list of classes that the model can detect? For instance, if I want to detect 'yellow tricycles' but am not sure the model knows tricycles, where can I check this? 2. How do you use this for semantic segmentation? You showed this briefly for the suitcases and croissants, but you didn't go into the details.
@Roboflow 11 months ago
There is no list… You need to experiment. But that's easy. All you need to do is use the HF space: huggingface.co/spaces/stevengrove/YOLO-World. You need to use boxes coming from YOLO-World to prompt SAM. Take a look at the code here. A few months ago we showed the Grounding DINO + SAM combo: kzbin.info/www/bejne/pXa0ioaqo6tlopo
@richarddjarbeng7093 11 months ago
@@Roboflow Will check it out. Thanks for the quick response
@vipulpardeshi2868 11 months ago
Hey, I just want to know: is there any method to use Roboflow models in offline projects? Inferencing via the API is very slow and I want fast detections. Is there any way to save the model .pt file and use it later without always importing the Roboflow workspace? Thanks❤
@Roboflow 11 months ago
Absolutely! You can use the inference pip package to run any model from Roboflow on your local machine. You only need internet during the first run to download it. Then it is cached locally and you can run it offline.
@vipulpardeshi2868 11 months ago
Ok, thanks for the reply, you guys are the best
@99develop80 11 months ago
Thank you for the video! I have a question. What do you call the technology used in the background of the video with YOLO-World + EfficientSAM to switch from detection to segmentation? Or is there a way to implement it?
@Roboflow 11 months ago
I use the Gradio library to build those interactive demos.
@TUSHARGOPALKA-nj7jx 9 months ago
Do we have a YOLOv8 model trained on the ADE20K dataset? If not, how would one do it?
@alaaalmazroey3226 11 months ago
Can YOLO-World detect the road area from a dash camera accurately? I need to detect it for an autonomous vehicle.
@Roboflow 11 months ago
I recommend you try with your own images here: huggingface.co/spaces/stevengrove/YOLO-World
@iconolk7338 7 months ago
I want to use this project. It works on Hugging Face, but strangely it doesn't fit my environment; it doesn't work on my PC. I want to "clone" it from Hugging Face. Is there a way?
@Roboflow 7 months ago
Yes. HF Spaces work like git. You can clone the entire project to your local machine.
@rajeshktym 11 months ago
Hi, is YOLO-World a good suggestion for apple grade detection? A global shutter 2MP camera will capture 5 apples in the same position in a single frame (apple cup conveyor with trigger). We need to find the bounding box of each apple and a classification result like grade A or grade B. What is the maximum time required to obtain the grade and bounding box information for each apple using a Jetson Nano?
@Roboflow 11 months ago
I think you can always spend a few minutes to try. Like I said in the video: don't be afraid to experiment, but be prepared that in your use case you might still need to train a model on a custom dataset. During my tests, conveyor object detection usually worked really well, at least when objects do not occlude each other. That's why I feel quite confident that the detection part will work. I'm worried about the classification.
@avamaeva7999 11 months ago
This is a game changer, but it needs to work on mobile to be of real use in my setting. Two questions please: 1 - Can quantization be used on this model to make it much quicker, perhaps to a level where it will work in real time (at least 10 FPS) on state-of-the-art phones (e.g. iPhone 15)? 2 - Can the model be run through the TFLite Converter? If not, any idea whether that facility might be introduced? Many thanks
@Roboflow 11 months ago
Good questions. As far as I know, no quantized version has been released yet. I'll try to reach out to the authors and ask.
@jimshtepa5423 8 months ago
Have you done any video on training a model on a custom dataset?
@nourabdou4118 11 months ago
Thank you, very informative. I have a question regarding the prompts: does it support and understand things like "Red Zones" or "Grey Areas"? I tried to use it on maps to identify grey or red areas, but it doesn't work. Is there any workaround? Thank you again!
@Roboflow 11 months ago
Hard to say without looking at the exact image. Zone or area sounds very general :/ Is there any chance you could look for a gray rectangle or circle? I'm thinking of something more precise. And I assume you'd need a very low confidence threshold to do it anyway.
@nourabdou4118 11 months ago
@@Roboflow It works! Obviously it's not 100% correct, but it works, which is good. Thank you so much
@potobill 8 months ago
Is there a C++ version? Is the C++ version faster or the same speed?
@alaaalmazroey3226 11 months ago
Hi, does YOLO-World + SAM work well to segment all the cars and trucks perfectly in video scenes when the road is very crowded? If not, what do you suggest? Thanks
@Roboflow 11 months ago
If you plan to detect cars, just use any of the models pre-trained on COCO. You do not need zero-shot detection to find cars :)
@DDDprinting 8 months ago
@@Roboflow Do you have a recommendation for a camera for this kind of work?
@TUSHARGOPALKA-nj7jx 9 months ago
Would the YOLO-World M or S version run in milliseconds on a CPU?
@baseerfarooqui5897 10 months ago
Hi, very informative video. I am getting this error while running the code: "AttributeError: type object 'Detections' has no attribute 'from_inference'". I am using it on my local system.
@Roboflow 10 months ago
What version of supervision do you have installed?
@nidalidais9999 9 months ago
Hi man, good work. What is the difference between YOLO-World and the T-Rex model, and how do you usually compare models?
@misaeldavidlinareswarthon190 11 months ago
Impressive!!!! ... I have a question: for maximum speed, do I still have to use YOLOv8, or does YOLO-World have less latency with a custom dataset?
@Roboflow 11 months ago
If you need a model that runs in real time or faster, you still need to train an object detector on a custom dataset. It does not need to be YOLOv8.
@alaaalmazroey3226 11 months ago
Hi, can YOLO-World detect objects (e.g. houses) perfectly from geospatial images?
@Roboflow 11 months ago
I tested. I'm afraid not :/
@jkjhkjhkjhkjpopoipofsi 11 months ago
Hi, is there a way to count the time objects spend in a zone?
@Roboflow 11 months ago
Yup. It is on our list of videos that are coming really soon!
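In the meantime, the core idea can be sketched in a few lines: check each tracked object's position against the zone every frame, count the frames inside, and divide by FPS. A rectangular zone and a made-up track keep the sketch short; real zones are usually polygons.

```python
def seconds_in_zone(points, zone, fps):
    """points: per-frame (x, y) positions of one tracked object.
    zone: (x_min, y_min, x_max, y_max) rectangle.
    Returns total time (s) the point spent inside the zone."""
    x_min, y_min, x_max, y_max = zone
    frames_inside = sum(
        1 for x, y in points if x_min <= x <= x_max and y_min <= y <= y_max
    )
    return frames_inside / fps

# object drifts through a zone over 6 frames at 30 FPS
track = [(5, 5), (15, 15), (25, 25), (35, 35), (45, 45), (55, 55)]
print(seconds_in_zone(track, zone=(10, 10, 40, 40), fps=30))  # 0.1
```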
@isaac10231 11 months ago
Can this be run locally on an RTX card? Or at least, how do we run this locally?
@Roboflow 11 months ago
Absolutely! I think you can easily run it on an RTX.
@Kalyani-k7b 11 months ago
Is this helpful for detecting damaged objects in real time?
@Roboflow 11 months ago
Probably depends on the type of object and type of damage, but I think yes.
@Kalyani-k7b 11 months ago
@@Roboflow Thank you. Let's consider the example of suitcases and backpacks shown in the video. Can this technology be useful for detecting damage in them?
@Roboflow 11 months ago
@@Kalyani-k7b I'll try to answer this question during the community session
@sreekanthreddy6979 10 months ago
How do I do this with a web camera?
@khalidalsinan3768 10 months ago
On the Hugging Face website, when I upload a video, it outputs a video of only 2 seconds. Does anyone know how to fix this?
@Roboflow 10 months ago
We need to prevent long video processing, because it makes other users wait longer.
@Roboflow 10 months ago
You would need to clone the space and make it process longer files.
@KhalidAlsinan 10 months ago
@@Roboflow How do I "clone" it?
@g.s.3389 11 months ago
wow
@paulpolizzi3421 11 months ago
Can this work on my kids' soccer videos?
@Roboflow 11 months ago
It probably can. But soccer is a pretty standard use case. YOLOv8 or another typical detector is probably a much better choice for you.
@novandaardhi7867 11 months ago
Can this be integrated with ROS 2 using an NVIDIA Jetson Nano?
@Roboflow 11 months ago
We are going to test Jetson deployments internally soon, but I can already tell you that it will be pretty hard to run it on the Nano board. Xavier / Orin sounds a lot more realistic.
@novandaardhi7867 11 months ago
Thanks, maybe I can consider using the Orin to run it. I'll wait for you to test it on Jetson.
@abdshomad 11 months ago
Yesterday I tried to detect red, yellow, and green traffic lights. It still did not recognize the colour. Any specific guide on how to identify colour?
@atharvpatawar8346 11 months ago
If it's able to detect the individual traffic lights, get the bounding boxes and use clustering to find the majority colour within each box.
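That majority-colour idea can be sketched with NumPy: crop the detected box, snap each pixel to the nearest of a few reference colours, and pick the most frequent label. The reference colours and the synthetic crop below are assumptions for illustration:

```python
import numpy as np

REFERENCE = {  # assumed reference colours (RGB)
    "red": (255, 0, 0),
    "yellow": (255, 220, 0),
    "green": (0, 200, 0),
}

def majority_colour(crop: np.ndarray) -> str:
    """crop: (H, W, 3) RGB array. Assign each pixel to its nearest
    reference colour and return the most common label."""
    pixels = crop.reshape(-1, 3).astype(float)
    names = list(REFERENCE)
    refs = np.array([REFERENCE[n] for n in names], dtype=float)
    # distance from every pixel to every reference colour
    dists = np.linalg.norm(pixels[:, None, :] - refs[None, :, :], axis=2)
    counts = np.bincount(dists.argmin(axis=1), minlength=len(names))
    return names[counts.argmax()]

# synthetic 4x4 crop that is mostly red
crop = np.full((4, 4, 3), (250, 10, 10), dtype=np.uint8)
print(majority_colour(crop))  # red
```

With real detections, the crop would come from slicing the frame with the box coordinates, e.g. frame[y_min:y_max, x_min:x_max].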
@abdshomad 11 months ago
@@atharvpatawar8346 Currently it can't. It detects the whole light. I even tried changing the prompt to circle, box, or bulb; still not possible. Maybe I have to apply a second classifier?
@SkalskiP 11 months ago
@@abdshomad I'd say if you need to use YOLO-World and a second-level classifier, it is probably not worth it.
@SkalskiP 11 months ago
@@abdshomad Which version of the model did you use?
@rafaelsetyan1755 11 months ago
Has anybody tried this model on UAV/drone data? Is it accurate? It should be possible to export to ONNX and do inference in C++, shouldn't it?
@Roboflow 11 months ago
The only test I made on drone footage was "lake detection". But that was a large object; you are probably considering detecting smaller objects. As for ONNX export: yes, export is possible, but (as far as I know) once you export, your text prompt is frozen.
@polnapanda4934 11 months ago
After a couple of hours working on Google Colab, it cuts almost all performance, deletes data, and says that I can buy GPU power.
@Roboflow 11 months ago
Sorry to hear that. Google Colab is free, but only up to a certain point :/
@polnapanda4934 11 months ago
@@Roboflow Yep :c I was training my model and it deleted all progress after 4 hours of training
@zdong2483 10 months ago
Reporting an issue when running the notebook on Mar 23, 2023: I have to use !pip install -q ultralytics==8.1.30, otherwise it fails.
@Roboflow 10 months ago
I’m not sure what you mean, but I just tested the code and everything works.
@hanma9249 6 months ago
GG
@chandanchakma2875 7 months ago
I want to learn AI. Please make a playlist.
@vishwamgupta-n6k 11 months ago
It does not work well when the object size is small; Grounding DINO works better than YOLO-World.
@Roboflow 11 months ago
I think it all depends on the specific case. What do you mean by "object size is less"?
@vishwamgupta-n6k 11 months ago
@@Roboflow I mean when the object is far away in the image. YOLO-World could not detect as many objects as Grounding DINO could in such situations.
@Roboflow 11 months ago
@@vishwamgupta-n6k Have you tried a lower confidence threshold?
@vishwamgupta-n6k 11 months ago
@Roboflow Yes, I tried that too, but the performance of Grounding DINO was still superior. It could detect objects in more images than YOLO-World.
@science_electronique 11 months ago
Grounding DINO is more accurate
@netq254 10 months ago
"Cheap Nvidia T4"? £1000 is not cheap, bro
@Roboflow 10 months ago
Compared to an A100 or H100 it is ;) But what I meant was just using a T4 on AWS.
@netq254 10 months ago
@@Roboflow Holy hell, you're right! I didn't realise how expensive these cards are!