I've added new classes to my existing machine learning model (besides tracking my face) to detect specific sign language alphabet letters. This allows me to switch between different modes (auto-tracking, auto-monitoring & emergency power-down) using hand gestures.
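The gesture-to-mode step can be sketched as a simple lookup from a recognised class label to a mode handler. The class names and handler functions below are illustrative placeholders, not the labels actually used in the video:

```python
# Hypothetical sketch: class labels and mode functions are assumptions,
# not taken from the video's trained model.
def enter_tracking():
    return "auto-tracking"

def enter_monitoring():
    return "auto-monitoring"

def emergency_power_down():
    return "power-down"

# Map a detected sign-language class label to a mode change.
GESTURE_MODES = {
    "sign_T": enter_tracking,
    "sign_M": enter_monitoring,
    "sign_E": emergency_power_down,
}

def dispatch(detected_label, current_mode):
    """Switch mode only when a mapped gesture is recognised;
    otherwise keep the current mode unchanged."""
    handler = GESTURE_MODES.get(detected_label)
    return handler() if handler else current_mode
```

In a real loop you would also debounce (require the same gesture over several consecutive frames) before switching modes, so a single misdetection cannot trigger an emergency power-down.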
Object recognition with YOLO Darknet has never been easier: with a ready-made YOLO-Python wrapper, you can use your favourite (well, at least my favourite) programming language, Python, together with OpenCV and friends for further processing.
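One way to consume YOLO output from Python is shown below. The network loading (commented out) uses OpenCV's DNN module rather than the exact wrapper from the video, and the post-processing is plain NumPy so it can run without model files; thresholds and file names are assumptions:

```python
import numpy as np

def postprocess(outputs, img_w, img_h, class_names, conf_threshold=0.5):
    """Turn raw YOLO output rows (cx, cy, w, h, objectness, class scores...)
    into (label, confidence, [x, y, w, h]) detections above the threshold.
    Coordinates in the rows are normalised to [0, 1]."""
    detections = []
    for output in outputs:
        for row in output:
            scores = row[5:]
            class_id = int(np.argmax(scores))
            confidence = float(row[4] * scores[class_id])
            if confidence >= conf_threshold:
                cx, cy = row[0] * img_w, row[1] * img_h
                bw, bh = row[2] * img_w, row[3] * img_h
                box = [int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)]
                detections.append((class_names[class_id], confidence, box))
    return detections

# Loading and running the network (placeholder file names -- substitute
# your own trained cfg/weights, e.g. the hand-gesture model):
# import cv2
# net = cv2.dnn.readNetFromDarknet("yolo.cfg", "yolo.weights")
# blob = cv2.dnn.blobFromImage(frame, 1/255.0, (416, 416), swapRB=True, crop=False)
# net.setInput(blob)
# outputs = net.forward(net.getUnconnectedOutLayersNames())
# detections = postprocess(outputs, frame.shape[1], frame.shape[0], class_names)
```

For overlapping boxes of the same class you would normally follow this with non-max suppression (e.g. `cv2.dnn.NMSBoxes`) before acting on the detections.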
3:00 New classes to be trained and their purposes
3:48 Getting the right training dataset
5:26 Create Python program to slice video into images
8:55 Evaluate trained machine learning model via Darknet
9:12 Indoor field testing
10:12 Outside field testing
Top Level Design
docs.google.com/drawings/d/1W...
Install, Training and Use YOLO the easy way
• Easy Installation, Tra...
Source code for simple DJI Tello SDK
bitbucket.org/RobotAndCode/te...
Source code for simple object recognition using YOLO Darknet +
script to slice video into images
bitbucket.org/RobotAndCode/te...
Pascal VOC annotation to YOLO converter
github.com/hai-h-nguyen/Yolo2...
Hardware & Software:
► DJI Tello: invol.co/clo03j
► Ubuntu 18.04: bit.ly/2zLMJGV
► Intel core i3-9100F @ 3.60GHz: invol.co/clo03f
► ZOTAC GTX-1050 @ 2GB: invol.co/clo03d
Recording gear:
► GoPro Hero 5: invol.co/clo03r
► BOYA BY-M1 - Lavalier microphone: invol.co/clo03t
Facebook: / robotandcode
Twitter: / robotandcode
LinkedIn: / murtadha-bazli-tukimat