I started a new channel with my projects in Polish! www.youtube.com/@prosteczesci
@lukevc a year ago
To solve the lighting problem with your training data, there's a neat Python library you can use called "albumentations". It takes your dataset and creates new images by applying augmentations like contrast changes, blur, etc. This will make your NN more robust and improve the detection.
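As a rough sketch of that idea (the file paths and parameter values below are just placeholders), an albumentations pipeline can generate augmented copies like this:

```python
import cv2
import albumentations as A

# Augmentations that vary lighting: brightness/contrast shifts, hue changes and blur.
transform = A.Compose([
    A.RandomBrightnessContrast(brightness_limit=0.3, contrast_limit=0.3, p=0.7),
    A.HueSaturationValue(hue_shift_limit=10, sat_shift_limit=20, val_shift_limit=20, p=0.5),
    A.Blur(blur_limit=5, p=0.3),
])

image = cv2.imread("dataset/ball_001.jpg")           # placeholder path
augmented = transform(image=image)["image"]          # one augmented copy
cv2.imwrite("dataset/ball_001_aug.jpg", augmented)   # add it back to the training set
```

For an object-detection dataset you would also pass the bounding boxes through the same pipeline (albumentations supports this via bbox_params) so the labels stay aligned with the augmented images.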
@nikodembartnik a year ago
That is really interesting, thanks for sharing the information!
@luismilopez3329 a year ago
Big recommendation on the ROS part. The integration is just so much simpler, and the workflow of the nodes and the communication between them is such a nice benefit. It also provides visualization interfaces like Rviz, it can be used in Python or C++, and there is a LOT of work already done by its big community. It is not difficult to set up or get familiar with if you are a decent programmer, which you are. I am currently working on a similar project: developing low-level C/Python code for sensors connected to an RPi and then using ROS to integrate that code into nodes and centralize all the functionality of my robots into a server. It's very fun and the results are very powerful! Also, given that you have a lidar, you could consider adding some sort of odometry sensors to your wheels, and then you could have an entirely autonomous navigation robot using the ROS navigation stack, which builds maps from the lidar and plans the path for the robot to follow!
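For a sense of what a ROS node looks like in practice, here is a minimal ROS 1 (rospy) sketch that reads the lidar scan and publishes velocity commands; the topic names, node name, stopping distance, and speed are all assumptions for illustration, not anything from the video:

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

def scan_callback(scan):
    # Ignore invalid returns, then stop if anything is closer than 0.5 m.
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    cmd = Twist()
    if valid and min(valid) > 0.5:
        cmd.linear.x = 0.2  # assumed cruising speed in m/s
    pub.publish(cmd)

rospy.init_node("simple_avoider")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
rospy.Subscriber("/scan", LaserScan, scan_callback)
rospy.spin()
```

The navigation stack mentioned above would replace hand-written logic like this with its own mapping and path-planning nodes.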
@lloparyllopary a year ago
You're so young and so good at robotics!!!
@4115steve a year ago
I thought about doing this with a drone that could fly through a forest and map the trees it identifies. Keep up the good work. Thanks for the helpful videos.
@varunsreedharan5347 a year ago
Finally, unit sizes that I as an American can understand.
@wardeneternal1140 6 months ago
I'm so glad I found your channel! Cool projects, and the 3D print files are so helpful!
@dav1dsm1th a year ago
A simple solution to the camera field-of-view problem could be to move the other components around so the camera can be mounted on the bottom floor. I think the reason your previous robot worked more consistently was simply that the camera was lower, so the target didn't drop out of view as the robot approached it. Just an idea. Stay safe out there.
@claesmaartenkamphof539 a year ago
It's a really interesting video!! Maybe you could make a dedicated video about the motor speed control.
@copetedavid a year ago
Great project! Question: could you take the set of images you trained it with, duplicate them with the brightness and contrast adjusted in increments, and add those back to the set? This could help with the lighting-condition issue. Thanks for sharing!
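A plain-OpenCV way to do exactly that, duplicating one image at a few brightness/contrast increments (the paths and step values are placeholders):

```python
import cv2

image = cv2.imread("dataset/ball_001.jpg")  # placeholder path

# alpha scales contrast, beta shifts brightness; step through a few increments.
for i, (alpha, beta) in enumerate([(0.8, -30), (0.9, 20), (1.1, 30), (1.2, -20)]):
    variant = cv2.convertScaleAbs(image, alpha=alpha, beta=beta)
    cv2.imwrite(f"dataset/ball_001_var{i}.jpg", variant)
```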
@electronic7979 a year ago
Nice
@Max_Mustermann 10 months ago
While not as interesting as using AI object recognition, tracking a simple object like a ball is also possible with something like OpenCV using the ball's color hue. I managed to get it running on a Raspberry Pi Zero. It only ran at around 1 fps, but it was working.
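A minimal sketch of that hue-based approach with OpenCV (the HSV range below is a guess for a brightly coloured ball and would need tuning for the actual colour and lighting):

```python
import cv2
import numpy as np

# Assumed HSV range for the ball's hue; tune it with a colour picker for the real ball.
LOWER = np.array([5, 120, 80])
UPPER = np.array([20, 255, 255])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)   # keep only ball-coloured pixels
    # OpenCV 4 returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        (x, y), radius = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
        if radius > 10:                     # ignore small noise blobs
            print(f"ball at x={x:.0f} y={y:.0f} r={radius:.0f}")
cap.release()
```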
@Hybert_ a year ago
Really interesting project :)
@nikodembartnik a year ago
Thanks!
@leoetcheverry9685 a year ago
Please make a video about your CNC again. Maybe I could do an interview about it; I've built 6 3D printers and 2 CNCs and own a laser cutter. I also work as a repairman for laser cutters and have some questions about your CNC that could make for a great video.
@mrunmaymete6192 a year ago
Amazing project!!!!!!!! Can you please provide a full list of the components?
@CallousCoder a year ago
Nice video as usual! Python and multithreading, oh man… that cracked me up. 😂 Do they seriously teach multithreading in Python at university these days, and not in a serious language like C/C++? Because you also want semaphores to protect shared data, and you run into issues with passing data between threads and with threads crashing and not cleaning up resources, etc. That's more easily taught and demonstrated in C. And you can make a struct (or class) and have an instance per motor doing the smoothing. Also, the balls would be really easy to detect without AI, with just a bit of rudimentary computer vision: the hue of each ball is so unique and different from the surroundings that simply filtering on hue using OpenCV would make it more robust (not so many issues with the colour shift in the evening, as long as you have a wide enough filter). In factories, for example, to check that caps are on bottles we never rely on AI, because it's slow and also not as reliable. This is also the reason why caps are such different colours from the bottles in most cases 🤫 You could make caps that are translucent like the bottle, from the same material, but that would make automation in the bottle factory more complex and thus more expensive. But if this was just an experiment in training an AI, then it's a nice and simple one for sure. And you've learned that you need a training set that also covers different lighting conditions and different white balance 😊
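To illustrate the "one instance per motor, with shared data protected" idea in Python terms (the class and its methods are invented for this example, not taken from the video):

```python
import threading

class MotorSmoother:
    """One instance per motor: ramps the commanded speed toward a target,
    with a lock guarding the target shared between threads."""

    def __init__(self, step=5):
        self._lock = threading.Lock()
        self._target = 0
        self._current = 0
        self._step = step

    def set_target(self, speed):
        # Called from the vision/control thread.
        with self._lock:
            self._target = speed

    def update(self):
        # Called periodically from the motor thread; moves current toward target.
        with self._lock:
            target = self._target
        if self._current < target:
            self._current = min(self._current + self._step, target)
        elif self._current > target:
            self._current = max(self._current - self._step, target)
        return self._current

left = MotorSmoother()
right = MotorSmoother()
```

In C the same idea would be a struct per motor plus a mutex or semaphore around the shared target value.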