

360° Camera Scene Understanding for the Visually Impaired (VI)

Project description

Despite the popular misconception that deep learning has solved the scene recognition problem, the quality of machine-generated captions for real-world scenes tends to be quite poor. In the context of live captioning for visually impaired (VI) users, this problem is exacerbated by the limited field of view of conventional smartphone cameras and camera-equipped eyewear, which yields a narrow view of the world that often misses details important to understanding the surrounding environment.

This project will make use of a head-mounted 360° camera and a combination of human (crowdsourced) labeling with deep learning to train systems that provide more relevant and accurate scene descriptions, in particular for navigation in indoor environments and for guidance in intersection crossing, improving upon a system recently developed by our lab for this purpose. Results will be compared to those obtained using camera input from a smartphone worn on a neck lanyard.

The motivation is to help VI pedestrians cross intersections safely: when crossing an intersection, veering outside the white lines of the crosswalk can cause accidents and result in injury or death. Professor Jeremy Cooperstock's lab recently developed a method that uses a conventional smartphone camera to detect the white lines of a crosswalk. Building on this, machine learning and computer vision algorithms will be applied to the collected data to predict the correct crossing direction and to estimate the distance needed to cross the intersection; a sketch of this kind of crosswalk-line detection follows below.
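The description above does not specify the lab's detection pipeline, so the following is only a minimal sketch of one conventional approach to crosswalk-stripe detection and direction estimation: isolate bright, low-saturation pixels, then fit line segments with a probabilistic Hough transform. All thresholds (the HSV white range, minimum line length, and so on) are illustrative assumptions, not values from the project.

```python
import cv2
import numpy as np

def estimate_crossing_direction(frame_bgr):
    """Estimate crosswalk stripe orientation from a single BGR frame.

    Returns the median angle (degrees) of detected stripe edges, or
    None if no candidate lines are found. The walking direction is
    roughly perpendicular to the stripes.
    """
    # Candidate white paint: bright, low-saturation pixels in HSV space.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    white = cv2.inRange(hsv, (0, 0, 180), (179, 60, 255))

    # Edge map of the white regions feeds the line detector.
    edges = cv2.Canny(white, 50, 150)

    # Probabilistic Hough transform picks out the long stripe borders.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=60, minLineLength=80, maxLineGap=20)
    if lines is None:
        return None

    # Crosswalk stripes are roughly parallel, so the median segment
    # angle is a robust estimate of their orientation.
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1))
              for x1, y1, x2, y2 in lines[:, 0]]
    return float(np.median(angles))
```

In practice, a single-frame estimate like this would be smoothed over time and combined with the phone's orientation sensors before being used to warn a pedestrian about veering.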

In recent years, with the development of depth cameras and scene detection algorithms, a wide variety of electronic travel aids for visually impaired people have been proposed. However, it is still challenging to convey scene information to visually impaired people efficiently. In this paper, we propose three different auditory-based interaction methods, i.e., depth image sonification, obstacle sonification, and path sonification, which convey raw depth images, obstacle information, and path information, respectively, to visually impaired people. The three sonification methods are compared comprehensively through a field experiment attended by twelve visually impaired participants. The results show that the sonification of high-level scene information, such as the direction of a pathway, is easier to learn and adapt to, and is more suitable for point-to-point navigation. In contrast, through the sonification of low-level scene information, such as raw depth images, visually impaired people can understand the surrounding environment more comprehensively. Furthermore, no single interaction method is best suited to all participants, and visually impaired individuals need a period of time to find the most suitable interaction method. Our findings highlight the features and differences of the three scene detection algorithms and the corresponding sonification methods. The results provide insights into the design of electronic travel aids, and the conclusions can also be applied in other fields, such as the sound feedback of virtual reality applications.

Keywords: electronic travel aid; scene detection; sonification; visually impaired people
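The abstract does not give the exact audio mappings used, so the following is a minimal sketch of one plausible reading of raw depth-image sonification: sweep the image left to right, mapping obstacle nearness to loudness and pitch, and column position to stereo pan. The 5 m maximum range, base frequency, and sweep duration are assumptions chosen for illustration.

```python
import numpy as np

SAMPLE_RATE = 44100  # Hz

def sonify_depth(depth, sweep_seconds=1.5, base_freq=440.0, max_range=5.0):
    """Render a depth image as a stereo left-to-right audio sweep.

    depth: 2D array of distances in metres (smaller = nearer).
    Each image column becomes a short tone: nearer obstacles sound
    louder and higher-pitched, and the tone is panned to the column's
    horizontal position. Returns an (n_samples, 2) float array in [-1, 1].
    """
    n_cols = depth.shape[1]
    samples_per_col = int(SAMPLE_RATE * sweep_seconds / n_cols)
    t = np.arange(samples_per_col) / SAMPLE_RATE

    chunks = []
    for col in range(n_cols):
        d = float(np.nanmin(depth[:, col]))  # nearest point in this column
        nearness = np.clip(1.0 - d / max_range, 0.0, 1.0)  # silent beyond max_range
        # Pitch rises up to one octave as obstacles get closer.
        tone = nearness * np.sin(2 * np.pi * base_freq * (1.0 + nearness) * t)
        pan = col / max(n_cols - 1, 1)  # 0 = hard left, 1 = hard right
        chunks.append(np.column_stack(((1.0 - pan) * tone, pan * tone)))
    return np.concatenate(chunks)
```

The returned buffer can be written to disk for playback with, e.g., scipy.io.wavfile.write("sweep.wav", SAMPLE_RATE, audio.astype(np.float32)). Obstacle and path sonification would replace the per-column depth minima with the outputs of higher-level detectors.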
