Image-Centric Indoor Navigation Solution for Visually Impaired People
Navigating indoor environments is highly challenging for visually impaired people, particularly in unfamiliar surroundings.
To address this, we propose an image-centric indoor navigation solution built on state-of-the-art computer vision techniques.
In this project, we employ YOLOv8 as the object detection model for real-time processing of visual data to assist visually impaired persons during navigation. The system analyzes images from a forward-facing camera, identifies objects and obstacles in the environment, and produces intuitive navigational cues, for example, directional guidance to move "left" or "right" relative to the center of the image.
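As an illustration of this pipeline, the following is a minimal sketch of how camera frames could be turned into left/right cues with the ultralytics YOLOv8 Python API; the model variant, camera index, and the simple center-based rule are assumptions for illustration rather than the project's exact configuration.

```python
# Minimal sketch: run YOLOv8 on a camera frame and turn detections into
# "left"/"right" cues relative to the image center. Assumes the
# `ultralytics` and `opencv-python` packages; values are illustrative.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained COCO weights (assumed variant)

def directional_cues(frame):
    """Return a list of (label, cue) pairs for detected obstacles."""
    height, width = frame.shape[:2]
    center_x = width / 2
    cues = []
    results = model(frame, verbose=False)[0]
    for box in results.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        label = model.names[int(box.cls[0])]
        box_center = (x1 + x2) / 2
        # Obstacle left of the image center -> suggest moving right, and vice versa.
        cue = "move right" if box_center < center_x else "move left"
        cues.append((label, cue))
    return cues

cap = cv2.VideoCapture(0)  # forward-facing camera (assumed device index)
ok, frame = cap.read()
if ok:
    for label, cue in directional_cues(frame):
        print(f"{label}: {cue}")
cap.release()
```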
We use the COCO dataset (Common Objects in Context) for training and fine-tuning YOLOv8, enabling it to robustly detect a wide range of objects found in indoor environments. This ensures that the system can identify furniture, doors, pathways, and other features that are important for safe navigation. Detected objects are categorized and positioned on a spatial map, allowing the system to infer optimal paths for movement. Navigation instructions are dynamically generated based on the relative positions of objects and communicated to the user through audio feedback or haptic devices.
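A minimal sketch of the fine-tuning step is shown below, assuming the ultralytics training API; the dataset YAML name, epoch count, and image size are illustrative values, not the project's actual training setup.

```python
# Minimal sketch: fine-tune YOLOv8 on COCO with the ultralytics API.
# Hyperparameters here are placeholders, not the project's configuration.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")      # start from pretrained weights
model.train(
    data="coco.yaml",           # COCO config shipped with ultralytics (large download)
    epochs=50,
    imgsz=640,
    batch=16,
)
metrics = model.val()           # evaluate on the COCO validation split
print(metrics.box.map)          # mAP50-95 of the fine-tuned detector
```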
Key innovations of this solution are its lightweight design for real-time processing, high object detection accuracy, and adaptability to different indoor settings. With YOLOv8, the system achieves a good balance between computational efficiency and detection performance, making it suitable for deployment on portable devices such as smartphones or wearable gadgets. In addition, integration with spatial analysis algorithms provides users with precise and context-aware guidance.
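To illustrate portable deployment, the sketch below exports the detector to mobile-friendly runtimes using the ultralytics export API; the weight path and export formats are assumptions, and the actual target devices may require a different runtime.

```python
# Minimal sketch: export the fine-tuned YOLOv8 model for portable devices.
# Paths and formats are illustrative assumptions.
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # assumed path to fine-tuned weights

# TFLite for Android / embedded deployment, CoreML for iOS devices.
model.export(format="tflite", int8=True)  # int8 quantization shrinks the model further
model.export(format="coreml")
```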
Thorough testing was performed in simulated and real-world indoor environments to assess the usability of the system. The results show that the proposed solution can substantially enhance the mobility and independence of visually impaired users, offering timely and reliable navigation support. The next stages of the work include extending the model with depth perception, semantic segmentation, and support for personalized indoor maps, which will make the system even more versatile and accurate.
In conclusion, this project is a step forward in assistive technology, harnessing the power of deep learning and computer vision to help visually impaired individuals navigate complex indoor environments with confidence and safety.
Keywords: Indoor navigation, visually impaired, YOLOv8, object detection, COCO dataset,
real-time guidance, assistive technology, spatial mapping, mobility aid, computer vision.
