Fantastic demo! Thank you for showcasing the AMB82 Mini Dev Board! We look forward to seeing more videos! 😁
@surflaweb A year ago
Good job. Keep it up!
@abrtn00101 A year ago
I have the same board, and I'm developing an object detection product for industrial use. I trained a custom YOLOv4 object detection model, and it seems to work quite well. I don't need a video stream for my use case, but for validating my models on-site, I use this method: rather than outputting the video to an attached screen, I stream it using continuous JPEG. That currently seems to offer the best balance of framerate (30 FPS video and 5 FPS detection) and latency (much lower than RTSP's). The WiFi is set to AP mode, so I can connect to it from my phone. If you're interested, I can share code and a video later in the week, as I'm currently quite busy preparing said product for a prototype presentation to the client.
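The "continuous JPEG" approach described above is plain MJPEG over HTTP: each frame is sent as one part of a multipart/x-mixed-replace response, and the client replaces the displayed image as parts arrive. A minimal Python sketch of that framing (on the AMB82 itself this would live in the Arduino sketch in C++, but the bytes on the wire are the same; the boundary token is arbitrary):

```python
# Minimal sketch of MJPEG ("continuous JPEG") framing over HTTP.
# Each JPEG frame becomes one part of a multipart/x-mixed-replace response.
BOUNDARY = b"frame"  # arbitrary token, declared once in the HTTP response headers

def mjpeg_part(jpeg: bytes, boundary: bytes = BOUNDARY) -> bytes:
    """Wrap one JPEG frame as a single multipart part."""
    return (
        b"--" + boundary + b"\r\n"
        + b"Content-Type: image/jpeg\r\n"
        + b"Content-Length: " + str(len(jpeg)).encode() + b"\r\n\r\n"
        + jpeg + b"\r\n"
    )

# Stand-in bytes for a real JPEG (just the SOI and EOI markers).
fake_frame = b"\xff\xd8\xff\xd9"
part = mjpeg_part(fake_frame)
```

Because there is no inter-frame compression, latency stays low and a dropped frame costs nothing, which matches the latency advantage over RTSP mentioned above.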
@ThatProject A year ago
That's interesting. What tools did you use to train your custom model? I try to avoid connecting my phone to the board's AP if possible, because the phone then loses its normal data connection, so I'm thinking of getting the data through a server instead.
@abrtn00101 A year ago
@@ThatProject simple-image-download to download images from the web, self-hosted CVAT for annotation, training on Google Colab using the free GPU option with files on Google Drive, some Python scripts for image augmentation, etc. Right now, the AMB82-Mini only supports object detection models trained in Darknet or converted to ONNX - that's why I didn't use YOLOv7 or YOLO-NAS - but they're working on adding PyTorch support by February.
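For the Darknet/YOLO pipeline mentioned above, one of the simplest augmentations is a horizontal flip: since Darknet label lines are `class x_center y_center width height` with all coordinates normalized to [0, 1], mirroring the image left-right only requires replacing x_center with 1 - x_center. A small Python sketch of that label transform (the function name is illustrative):

```python
# Sketch: horizontal-flip augmentation for Darknet/YOLO label files.
# Label format: "class x_center y_center width height", normalized to [0, 1].
# Mirroring the image left-right only changes x_center: x' = 1 - x.

def hflip_yolo_label(line: str) -> str:
    """Return the label line adjusted for a horizontally flipped image."""
    cls, xc, yc, w, h = line.split()
    flipped_xc = 1.0 - float(xc)
    return f"{cls} {flipped_xc:.6f} {yc} {w} {h}"

print(hflip_yolo_label("0 0.25 0.5 0.2 0.3"))  # → 0 0.750000 0.5 0.2 0.3
```

The image itself would be mirrored with whatever library you already use (PIL, OpenCV); only the label math is shown here.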
@TT-it9gg A year ago
The AMB82 is good, but the SPI and SDMMC are very slow. Still waiting for MIPI interface support...
@MilindRampure-t4j 6 hours ago
I’m working on a project using the AMB82 mini. The goal is to capture images at specific intervals, perform object detection on these images, and save them to an SD card only when objects are detected. Additionally, I want to draw bounding boxes and labels directly onto the saved images to highlight the detected objects. My main challenge is integrating these components efficiently and ensuring that the bounding boxes and labels are correctly drawn on the saved images. I’d appreciate any guidance or examples on how to achieve this, particularly on how to draw the detection results onto the image before saving it to the SD card.
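On the draw-before-save question above: detectors typically report boxes in normalized [0, 1] coordinates, which must be scaled to the saved image's resolution before drawing. A pure-Python sketch of just that scale-and-outline step (in practice you'd draw with OpenCV/PIL on a PC, or the board's OSD drawing API; the function names here are illustrative):

```python
# Sketch: scale normalized detection boxes to image pixels and draw a
# 1-px rectangle outline into a row-major RGB pixel grid before saving.

def norm_to_rect(xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert normalized [0, 1] box coords to integer pixel coords."""
    return (int(xmin * (img_w - 1)), int(ymin * (img_h - 1)),
            int(xmax * (img_w - 1)), int(ymax * (img_h - 1)))

def draw_box(pixels, rect, color=(255, 0, 0)):
    """Draw the rectangle outline in place; pixels is a list of rows of RGB tuples."""
    x0, y0, x1, y1 = rect
    for x in range(x0, x1 + 1):   # top and bottom edges
        pixels[y0][x] = color
        pixels[y1][x] = color
    for y in range(y0, y1 + 1):   # left and right edges
        pixels[y][x0] = color
        pixels[y][x1] = color
    return pixels

img = [[(0, 0, 0) for _ in range(16)] for _ in range(16)]
box = norm_to_rect(0.2, 0.2, 0.8, 0.8, 16, 16)
draw_box(img, box)
```

Labels work the same way: render the class name near (x0, y0) with whatever text API your image library provides, then encode the buffer as JPEG and write it to the SD card only when the detection list is non-empty.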
@GusRJ70 A year ago
This is the board I talked about months ago!
@dastatiks6182 A year ago
I bought two of them for a battery-operated AI streaming camera; still waiting for them to arrive. I hope Realtek will continue to support this board and that the community will grow. Sadly, there is no other compatible camera module, and the one used on this board is end-of-life...
@ThatProject A year ago
I don't know what will happen to this market in the future, but I'm sure if it's popular, they'll be able to keep making new products.
@chandrurn A year ago
Wow, wonderful! If this works with the affordable ESP32 camera module for standalone person detection, that would be great for people like me who only know how to upload code and do hobby projects with the kids at home 😅
@GusRJ70 8 months ago
Erick, if I only need to detect two objects, can I assume the precision will be better? (based on your info at 52")
@ThatProject 8 months ago
My answer is yes and no. The more data you have about the objects you want to detect, the better the dataset you can create. And since each object is independent, unless the number of object classes becomes extremely large, the difference in accuracy is unlikely to be significant.
@michelberg1744 A year ago
Good work! I wonder if this could fit on an ESP32S3-EYE as well.
@abrtn00101 A year ago
Maybe. I've seen neural net projects for the ESP32, but the AMB82-Mini has something the ESP32S3-EYE doesn't: an NPU and enough memory to run more complex inferencing tasks. I have the AMB82-Mini and an ESP32S3-N16R8 (not the same as the ESP32S3-EYE, but essentially the same module with double the flash storage, minus the screen and the camera). The AMB82-Mini doesn't even break a sweat running object detection (80 classes, COCO dataset) at 576x320, 10 FPS, and overlaying the result on a 1080p 30 FPS H264 video - this video doesn't do it justice due to the poor performance of the attached screen. In contrast, the ESP32S3 has difficulty maintaining a decent framerate at SXGA size without drastically lowering the JPEG quality.
@michelberg1744 A year ago
@@abrtn00101 I am currently working with ESP-DL and their TVM Convert Tool. Do you have any experience with it?
@spirtualtraveller 5 months ago
It runs perfectly, but after a few seconds the screen freezes. Any solution for that?
@ThatProject 5 months ago
Someone reported similar symptoms and fixed the issue using the Camera_2_Lcd_JPEGDEC example. Want to try that?
@spirtualtraveller 5 months ago
@@ThatProject Thanks for the reply, bruh. If possible, could you show how to train a custom model for the board?
@RixtronixLAB A year ago
Nice video, thanks :)
@davidanwar6996 A year ago
Can we use the ESP32 with the AI-Thinker camera board for video object detection?
@ThatProject A year ago
There is the Eloquent TinyML library, which lets you use TensorFlow Lite on your ESP32-CAM. Check this out: kzbin.info/www/bejne/bKmkpnWEq951hZIsi=PbqmopwF2MX7neFC&t=469
@ARD2508 6 months ago
Hi, I've been trying your model following all the instructions on your GitHub page. Everything has been working well so far. However, after a while, during execution, the recording just freezes on the TFT screen, and the following text shows up in the serial monitor:

15:33:09.699 -> Total number of objects detected = 0
15:33:09.822 -> YOLOv7t tick[0] = 69
15:33:09.822 -> YOLOv7t tick[0] = 69
15:33:09.996 -> YOLOv7t tick[0] = 69
15:33:10.129 -> Total number of objects detected = 1
15:33:10.169 -> [VOE]renew g/p(5d8c 5d8c)case1 14068 case 2 23948
15:33:10.247 ->
15:33:10.247 -> [VID Wrn]VOE CH1 JPG buff full (queue/used/out/rsvd) 20/0KB

It seems to be related to some buffer being completely filled, but I can't find that buffer in the code. I don't know if you've had this same problem or if it's just my particular case.
@ThatProject 6 months ago
As far as I know, it is impossible to increase or decrease the configNN buffer separately. Have you tried reducing the FPS of configNN, or modifying the code to draw only half of the objects found through NNObjectDetection on the screen when the object count is large?
@ARD25086 ай бұрын
@@ThatProject Yeah, I tried that but didn't work, the problem lies in the buffer of the tft channel, the NN buffer doesn't overflow at all. I ended up following the Camera_2_Lcd_JPEGDEC example on how to use the tft display, and it's seems to work falwlessly. Thanks for the response