Virtual Reality for the Nearly Blind

This project uses the neural network classifier developed at Bristol University to provide navigation cues for people with low vision.

Many people who are registered as blind nevertheless have some residual vision. Often, the person experiences an extremely blurred representation of the visual scene and, without some form of assistance, is unable to move around safely. Assistance can come in the form of a guide dog, which has an independent visual system. Alternatively, or in addition, the visual information available to the person can be enhanced so that the important features of the scene become clearly visible.

Our neural network classifier recognises common objects in outdoor scenes and can assign over 90% of the objects in an image to the correct object class. The new system will use a small camera to pass images to a small computer, which will then display a highly stylised image of the scene on a pair of virtual reality spectacles. Important objects such as cars, roads and pavements will be presented in vivid, highly contrasting colours for easy identification.
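The display stage described above can be sketched as follows. This is a minimal illustration only: the class names and colours are assumptions chosen for the example, not the actual classes or palette of the Bristol classifier.

```python
# Illustrative sketch of the stylised-display stage: given a per-pixel
# class-label map from a scene classifier, recolour the scene with
# vivid, highly contrasting colours so important objects stand out.
# The labels and RGB values below are hypothetical.

PALETTE = {
    "road": (80, 80, 80),        # dark grey
    "pavement": (255, 255, 0),   # bright yellow
    "car": (255, 0, 0),          # bright red
    "other": (0, 0, 255),        # bright blue fallback for unknown classes
}

def stylise(label_map):
    """Map each per-pixel class label to its high-contrast RGB colour."""
    return [[PALETTE.get(label, PALETTE["other"]) for label in row]
            for row in label_map]

# Example: a tiny 2x2 "image" of class labels.
labels = [["road", "car"],
          ["pavement", "sky"]]   # "sky" is not in the palette -> fallback
image = stylise(labels)
```

In a real system the label map would come from the classifier and the coloured image would be rendered to the virtual reality spectacles; here the point is simply that the mapping from object class to contrasting colour is a fixed, easily tuned lookup.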

This project started in 1997 with funding for a 3-year PhD studentship provided by the National Eye Research Centre. Additional funding for a further PhD student and a 3-year research assistant has now been provided by EPSRC and Quintek plc. Subjects for testing the system will be provided by the Bristol Eye Hospital. Initial tests of the system on a low-vision subject from the Bristol National Institute for the Blind have demonstrated that such a system can considerably improve the subject's ability to interpret a scene.

Staff and Students

Barry Thomas, Mark Everingham, Angus Clark.



Collaborators

Tom Troscianko (Psychology Dept.)


Sponsors

National Eye Research Centre, Bristol Eye Hospital, EPSRC, Quintek plc.