Blindness affects millions of people worldwide, causing difficulties in daily travel and a loss of independence due to the lack of spatial information. This article proposes a new navigation aid to help people with severe blindness reach their destination. The user is guided by a short 3D spatialised sound that indicates the target point to follow, combined with sonified information on potential obstacles in the vicinity. The proposed system relies on inertial sensors, GPS data, and cartographic knowledge of pedestrian paths to define the trajectory. In addition, visual cues from a camera are used to refine the trajectory with ground-surface and obstacle information, providing 3D spatial detail. The method is based on a deep learning approach, and the neural networks involved are evaluated on datasets gathering navigation sequences recorded from a pedestrian's point of view. The system achieves low-latency, real-time processing without relying on a remote connection, using a low-power embedded GPU and a multithreaded approach for video processing, sound generation, and acquisition. It could significantly improve the quality of life and autonomy of blind people, allowing them to navigate their environment reliably and efficiently.
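The abstract describes rendering the target point as a spatialised sound derived from GPS position, inertial heading, and the planned path. As a minimal illustrative sketch (not the authors' implementation), the core geometry can be shown as: compute the bearing from the user to the next waypoint, subtract the user's heading to get a relative azimuth, and map that azimuth to a simple stereo pan. All function names here are hypothetical.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def relative_azimuth(user_heading_deg, target_bearing_deg):
    """Azimuth of the target relative to the user's facing direction, in (-180, 180]."""
    return (target_bearing_deg - user_heading_deg + 180.0) % 360.0 - 180.0

def stereo_pan(azimuth_deg):
    """Naive constant-power pan: returns (left_gain, right_gain).

    A real system would use HRTF-based 3D rendering; this clamps the azimuth
    to the frontal hemisphere purely for illustration.
    """
    az = max(-90.0, min(90.0, azimuth_deg))
    theta = math.radians(az + 90.0) / 2.0  # map [-90, 90] -> [0, pi/2]
    return math.cos(theta), math.sin(theta)
```

For example, a waypoint due east of a user facing north yields a relative azimuth of +90 degrees, so nearly all the cue's energy goes to the right channel, pulling the user toward the path.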
Publication
Year of publication: 2024
Type:
Journal article
Authors:
Scalvini, F.
Bordeau, C.
Ambard, M.
Migniot, C.
& Dubois, J.
Journal title:
Sensors
Keywords:
navigation aid, wearable assistive device, obstacle avoidance, sensory substitution, visual impairment, deep learning