A neural-network virtual-reality mobility aid for the severely visually impaired
M Everingham, T Troscianko, B Thomas, N Karia, D Easty. European Conference on Visual Perception 1998, Perception Vol 27 Supplement, ISSN 0301-0066, p. 14, August 1998. No electronic version available.
Many people who are registered as blind nevertheless retain some residual vision and are said to have "low vision". Conditions resulting in low vision include cataracts, diabetic retinopathy, age-related maculopathy, and retinal detachment. In recent years, a growing body of research has attempted to apply principles from computer vision to the needs of people with low vision. A variety of conventional image-processing techniques have been used to enhance the visual appearance of a scene, and devices from the field of virtual reality, such as head-mounted displays, have been investigated as low-vision aids. However, a fundamental limitation of conventional image-processing techniques is that they are applied to an entire image with no knowledge of scene content, resulting in unwanted emphasis of noise and unimportant detail. Our aim is to produce a portable system, comprising a processing unit with a head-mounted camera and display, which will allow a person with low vision to be self-sufficient and mobile in a typical urban environment. Our approach differs from previous research in that it uses a neural-network object classifier, allowing images to be enhanced in a way that takes account of the identity of objects in the scene. The system transforms the original image into a classified image in which each type of object in the scene is identified by an object outline filled with a particular high-saturation colour, chosen by the user, according to the object type. Through classification, the system allows the user to identify important objects in a scene simply by their colour, requiring no perception of shape or high spatial frequencies, and only minimal contrast sensitivity. The resulting images are very simple and uncluttered, and we expect that users would adapt quickly to the system. Results obtained to date suggest that the system is capable of providing registered-blind users with useful visual information.
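The final rendering step described above can be sketched as a simple lookup: given a per-pixel class-label map (here assumed to be the output of the neural-network classifier, which the abstract does not specify), each object type is filled with a single high-saturation colour. The class names and palette below are illustrative assumptions, not taken from the original system.

```python
import numpy as np

# Hypothetical palette: one high-saturation RGB colour per object class.
# The actual classes and colours (user-chosen in the original system)
# are not specified in the abstract; class 0 is treated as background.
PALETTE = np.array([
    [0, 0, 0],        # 0: background/unclassified -> black
    [255, 0, 0],      # 1: e.g. obstacle -> red
    [0, 255, 0],      # 2: e.g. walkable surface -> green
    [0, 0, 255],      # 3: e.g. person -> blue
], dtype=np.uint8)

def colourise(label_map: np.ndarray, palette: np.ndarray = PALETTE) -> np.ndarray:
    """Map an H x W integer class-label image to an H x W x 3 RGB image
    in which every object of a given type appears as one flat colour."""
    # NumPy integer-array indexing: each label selects a palette row.
    return palette[label_map]

# Usage: a toy 2 x 3 label map standing in for the classifier's output.
labels = np.array([[0, 1, 1],
                   [2, 2, 3]])
rgb = colourise(labels)
assert rgb.shape == (2, 3, 3)
assert (rgb[0, 1] == [255, 0, 0]).all()
```

The flat-colour output is what makes the display legible at very low contrast sensitivity: identity is carried entirely by hue, not by texture or edges.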
We are now working on improving the speed and classification accuracy of the system, and on investigating the applicability of our techniques to specific visual conditions.