YOLOv8n-Based Lightweight Object Detection Model for People with Visual Impairment
Keywords:
object detection, visually impaired, You Only Look Once (YOLO), annotation, bounding box

Abstract
Vision is a crucial sense for living beings, yet a vast number of individuals worldwide live with vision impairment. These individuals face challenges in moving autonomously and securely, including difficulty in obtaining information and communicating. In this study, we address the challenge of object detection for visually impaired individuals by employing a YOLOv8n model to identify ten specific objects: Suitcase-Luggage, Switch-Box, Bottle, Dustbin, Mirror, Person, Staircase, Stove, Toilet, and Toothbrush. To achieve this, we developed a custom dataset using the Roboflow platform, ensuring comprehensive annotation for accurate detection. The dataset was divided into training, validation, and testing subsets to enable robust model evaluation. Following training, we analyzed the model's performance using confusion matrices, reporting the precision, recall, and overall accuracy for each object category. The results demonstrate the model's effectiveness in detecting and distinguishing between the specified objects, underscoring its potential in assistive technologies for the blind. This research contributes to ongoing efforts to enhance accessibility and independence for visually impaired individuals through advanced machine learning techniques.
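As an illustration of the evaluation described above, per-class precision and recall can be read directly off a confusion matrix, with rows taken as true classes and columns as predictions. The sketch below uses hypothetical counts for three classes for brevity; it is not the study's actual matrix or results:

```python
def per_class_metrics(cm):
    """Compute per-class precision and recall from a square confusion
    matrix (list of lists): rows = true class, columns = predicted class."""
    n = len(cm)
    precision, recall = [], []
    for k in range(n):
        tp = cm[k][k]
        pred_k = sum(cm[i][k] for i in range(n))  # column sum: everything predicted as class k
        true_k = sum(cm[k])                       # row sum: everything actually of class k
        precision.append(tp / pred_k if pred_k else 0.0)
        recall.append(tp / true_k if true_k else 0.0)
    return precision, recall

# Toy 3-class matrix with invented counts (illustrative only)
cm = [[8, 1, 1],
      [2, 7, 1],
      [0, 1, 9]]
prec, rec = per_class_metrics(cm)
# e.g. class 0: precision = 8/10 = 0.8, recall = 8/10 = 0.8
```

The same column-sum/row-sum reading extends to the ten-class matrices produced for this model's evaluation.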