This guy seriously needs a pay rise. Awesome content
@SkalskiP 2 years ago
It's Peter from the video :) Thanks! I'll pass that idea to my superiors haha
@vishalpahuja2967 1 year ago
Hi, can you show the annotation process for cracks on a single wall and how to detect them? Thank you.
@Roboflow 1 year ago
I used a model that was already on Roboflow Universe. I did not annotate it myself.
@vishalpahuja2967 1 year ago
@@Roboflow Thank you for the response. I actually need to detect lines similar to cracks, and I want the detections to show the exact shape of the lines, which can be curved. How can I annotate the images and train the model to detect that?
@Roboflow 1 year ago
@@vishalpahuja2967 Ah, the data itself was annotated using polygons.
@ksteven4469 1 year ago
Thank you for your tutorial. I have a question for you. At around 2:55, you mentioned that you had previously completed a project that involved object detection and instance segmentation at the same time. Would it be possible for me to take a look at the code for that project? Thx
@Roboflow 1 year ago
Hi, sorry for the late reply, I was working on a new video. I did detection + pose estimation. Take a look here: github.com/SkalskiP/sport/blob/master/football-players-pose-estimation/football_players_pose_estimation.ipynb
@robosergTV 2 years ago
Thanks, great vid! YOLOv7 creates an image with predictions during training, as you've shown. Is that a feature of the YOLOv7 codebase? YOLOv5 doesn't generate an image with predictions automatically for me; only the YOLOv7 codebase does (as in your video).
@SkalskiP 2 years ago
Hi 👋! It's Peter from the video. Doesn't it? I'm pretty sure it does. I'd need to double-check in that case.
@BenjaminvonCramon 2 months ago
Is it a hard fact that Roboflow only accepts square format? I'd really prefer to avoid having to subdivide 1.5-aspect rasters into square tiles.
@vikashkumar-cr7ee 1 year ago
Dear tutor, greetings! I downloaded the YOLO format of the dataset. In the train folder I can see the images and the corresponding labels folder, but I can't see the yellow label drawn on the crack. In fact, this is shown in the tutorial, but in the actual dataset it is missing. It seems like it is not needed, as the labeling is already done.
@joshuamacasadia4495 1 year ago
How can I deploy the model to Roboflow?
@muhammadmuzammul1023 2 years ago
Great knowledge
@abangfikri1865 2 years ago
Thank you for the video, it helped me a lot. However, can a YOLOv5 or YOLOv7 segmentation model be deployed on Android? Is it possible, and how can it be done?
@SkalskiP 2 years ago
Hi, it's Peter from the video. I'm not sure about YOLOv7, as I know some of their exports don't work. However, you can for sure export YOLOv5 to TensorFlow Lite, and in that format it should be runnable on an Android device.
@abangfikri1865 2 years ago
@@SkalskiP Ohh, thank you very much for the reply. For YOLOv5 I know an object detection model can run on Android, but can it be done for YOLOv5 segmentation?
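For anyone following up on this thread, a minimal sketch of the TensorFlow Lite route mentioned above, assuming the YOLOv5 repo's export.py handles the -seg weights in your version and that TensorFlow is installed locally to sanity-check the exported file; the file names are illustrative, not taken from the video:

```python
# Export a YOLOv5 segmentation checkpoint to TensorFlow Lite, then load it
# locally to confirm the .tflite file runs before shipping it to an Android app.
# Run the export from inside a clone of the YOLOv5 repository:
#
#   python export.py --weights yolov5s-seg.pt --include tflite --img 640
#
import numpy as np
import tensorflow as tf

# Path produced by the export step above (illustrative name, check your output).
interpreter = tf.lite.Interpreter(model_path="yolov5s-seg-fp16.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Push one dummy image through the model to verify the graph executes.
dummy = np.random.rand(*input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

for out in output_details:
    print(out["name"], interpreter.get_tensor(out["index"]).shape)
```

On Android the same .tflite file is loaded through the TFLite runtime; the post-processing (NMS and mask assembly) still has to be done on the app side.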
@serchengxu4399 1 year ago
Can you do real-time crack detection using YOLOv7?
@Roboflow 1 year ago
Realistically speaking, I doubt that. We have a really long backlog. However, I encourage you to use this video and our other video about real-time video processing. You should be able to figure it out ;) good luck 🍀
@janithaprathapa1267 1 year ago
How can I deploy this model on my local machine?
@Laddu225 2 years ago
Hey, I think it's semantic segmentation, not instance segmentation.
@SkalskiP 2 years ago
Hi, it's Peter from the video. It is actually instance segmentation. On the results image around 0:10 you can see we have multiple individual detections per image, not just a single mask.
@christian.js.1997 1 year ago
Thanks for this tutorial. I've been searching for hours on how to display/visualize feature maps in YOLOv7, please make a tutorial about that. 😁
@Roboflow 1 year ago
You would like to visualize the feature maps stored in each layer of the network?
@christian.js.1997 1 year ago
@@Roboflow Yes 🤔
@angelospapadopoulos7679 2 years ago
Is it possible to train from scratch here, and how can we do it?
@afiedoh6228 2 years ago
Hello, thank you for your video. Please, how do I apply my best.pt file to real-time video from my webcam? Thank you
@Roboflow 2 years ago
Hi! Please ask the question in the Notebooks repository: github.com/roboflow/notebooks/discussions/categories/q-a and we'll try to help you :)
@afiedoh6228 2 years ago
@@Roboflow Thank you, just figured it out!
@Roboflow 2 years ago
@@afiedoh6228 great to hear that :)
@youssefkhaled5331 1 year ago
@@afiedoh6228 Could you please tell us how?
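For others asking the same thing, a rough sketch (not necessarily the exact solution @afiedoh6228 found), assuming the segmentation branch of the YOLOv7 repo mirrors the YOLOv5-style predict script and accepts a webcam index as source; the script and weights paths are assumptions, adjust them to your own clone and training run:

```python
# Option A: use the repo's own predict script with the webcam as the source:
#
#   python segment/predict.py --weights runs/train-seg/exp/weights/best.pt --source 0
#
# Option B: grab frames yourself with OpenCV and reuse whatever image-inference
# code you already have for still images.
import cv2

capture = cv2.VideoCapture(0)  # 0 = default webcam
while True:
    ok, frame = capture.read()
    if not ok:
        break
    # result = run_segmentation(frame)  # hypothetical hook: call your existing inference code here
    cv2.imshow("webcam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
capture.release()
cv2.destroyAllWindows()
```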
@nabiladnan626 10 months ago
When I start training: AttributeError: module 'numpy' has no attribute 'int'.
@elbadamohamed6605 8 months ago
Hi, did you find the solution? I'm struggling with it.
@nabiladnan626 7 months ago
@@elbadamohamed6605 Nope, I didn't try again later. Is it still happening?
@nabiladnan626 6 months ago
@@elbadamohamed6605 No solution still. Did you find anything?
@nabiladnan626 6 months ago
@@elbadamohamed6605 Still stuck. Did you find any way?
@nabiladnan626 5 months ago
@@elbadamohamed6605 Still no solution... Did you find anything to fix it?
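For anyone hitting this: the error comes from NumPy 1.24 and newer, which removed the long-deprecated np.int alias that the YOLOv7 training code still references. A sketch of two common workarounds (not an official fix from the video):

```python
# Option 1: pin an older NumPy in the training environment, e.g.
#   pip install "numpy<1.24"
#
# Option 2: restore the removed alias before the training code runs
# (quick notebook-level hack; np.int used to be just an alias for the builtin int):
import numpy as np

if not hasattr(np, "int"):
    np.int = int
```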
@dinanmutamaddin400 1 year ago
What if I want to change the bounding box thickness and the font size when predicting objects?
@Roboflow 1 year ago
Use supervision; it is a lot more flexible when it comes to annotating detections.
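For reference, a minimal sketch with supervision, assuming a recent release where box and label drawing are split into BoxAnnotator and LabelAnnotator (older releases took text_scale directly on BoxAnnotator, so check your installed version); the image and detections below are dummies standing in for your model output:

```python
import numpy as np
import supervision as sv

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for your frame
detections = sv.Detections(                      # stand-in for your model's detections
    xyxy=np.array([[50.0, 60.0, 300.0, 400.0]]),
    confidence=np.array([0.91]),
    class_id=np.array([0]),
)

box_annotator = sv.BoxAnnotator(thickness=4)                            # box line thickness
label_annotator = sv.LabelAnnotator(text_scale=1.0, text_thickness=2)   # font size / weight

annotated = box_annotator.annotate(scene=image.copy(), detections=detections)
annotated = label_annotator.annotate(scene=annotated, detections=detections, labels=["crack 0.91"])
```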
@YapHokLai 1 year ago
How do I save the model?
@Roboflow 1 year ago
It is already saved after training and stored in the runs directory.
@iceiceisaac 4 months ago
Good career move lol
@Roboflow 4 months ago
Haha what do you mean?
@iceiceisaac 4 months ago
@@Roboflow I'm a civil engineer too! Really thinking about switching, but I'm too new to this stuff.
@rishabhsheoran6959 8 months ago
Hey! Can you please help me with deploying the YOLOv7 segmentation model on Triton? When I hit the Triton inference server, I get back the following outputs:
name: output, tensor: float32[batch,anchors,Concatoutput_dim_2]
name: onnx::Slice_539, tensor: float32[Transposeonnx::Slice_539_dim_0,3,Transposeonnx::Slice_539_dim_2,Transposeonnx::Slice_539_dim_3,40]
name: onnx::Slice_693, tensor: float32[Transposeonnx::Slice_693_dim_0,3,Transposeonnx::Slice_693_dim_2,Transposeonnx::Slice_693_dim_3,40]
name: onnx::Slice_844, tensor: float32[Transposeonnx::Slice_844_dim_0,3,Transposeonnx::Slice_844_dim_2,Transposeonnx::Slice_844_dim_3,40]
name: 517, tensor: float32[Mul517_dim_0,32,Mul517_dim_2,Mul517_dim_3]
The actual shapes come back as:
output (1, 100800, 40)
onnx::Slice_539 (1, 3, 160, 160, 40)
onnx::Slice_693 (1, 3, 80, 80, 40)
onnx::Slice_844 (1, 3, 40, 40, 40)
From the above outputs, how do I extract the bounding boxes and the masks?
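A decoding sketch, assuming the export follows the usual YOLOv5/YOLOv7 -seg layout: output is (1, num_anchors, 4 box coords + 1 objectness + num_classes + 32 mask coefficients), so 40 implies 3 classes here, and the tensor named 517 is the (1, 32, H, W) mask prototype; the per-scale onnx::Slice_* tensors can normally be ignored. Verify this against your own export, and add NMS plus letterbox rescaling on top:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_yolo_seg(output, proto, conf_threshold=0.25, mask_threshold=0.5):
    """output: (1, num_anchors, 5 + num_classes + 32); proto (tensor '517'): (1, 32, H, W)."""
    preds = output[0]                               # (num_anchors, 40)
    num_classes = preds.shape[1] - 5 - 32           # 40 -> 3 classes here

    boxes_cxcywh = preds[:, :4]
    objectness = preds[:, 4]
    class_scores = preds[:, 5:5 + num_classes]
    mask_coeffs = preds[:, 5 + num_classes:]        # 32 coefficients per anchor

    scores = objectness[:, None] * class_scores     # combined confidence per class
    class_ids = scores.argmax(axis=1)
    confidences = scores.max(axis=1)
    keep = confidences > conf_threshold             # NMS still needs to run on what is kept

    cxcywh = boxes_cxcywh[keep]
    boxes_xyxy = np.column_stack([
        cxcywh[:, 0] - cxcywh[:, 2] / 2, cxcywh[:, 1] - cxcywh[:, 3] / 2,
        cxcywh[:, 0] + cxcywh[:, 2] / 2, cxcywh[:, 1] + cxcywh[:, 3] / 2,
    ])

    # Masks: linear combination of the 32 prototypes, then sigmoid + threshold.
    # They come out at the prototype resolution (H, W); upsample to the network
    # input size, crop each mask to its box, then map back to the original image.
    _, c, h, w = proto.shape
    masks = sigmoid(mask_coeffs[keep] @ proto[0].reshape(c, -1))
    masks = masks.reshape(-1, h, w) > mask_threshold

    return boxes_xyxy, confidences[keep], class_ids[keep], masks
```

The returned boxes are still in the letterboxed network-input coordinate space, so scale them back to the original image the same way you would for plain detection outputs.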