Vision occupies a prime position in how we understand and respond to our immediate environment, and it is for this reason that advanced wearable devices should incorporate some form of visual competence. The importance of wearable visual sensors lies not just in the wide spectrum of information they can recover, such as world structure, object properties or human activities, but also in their unique position on the body. The computational power required to process visual signals is also becoming compatible with wearable CPUs, and a number of applications are therefore becoming feasible.

Wearables have mostly viewed the world through conventional, narrow-view, passive cameras designed for non-wearable applications. Although ideal for recovering ambient information such as lighting levels and colour histograms, for more specific tasks the resulting imagery is too dependent on the wearer's exact posture and motions. The aim of this thesis has been the advancement of wearable vision, and to do so it marries techniques from active vision with wearable computing to produce a novel human-computer interaction interface. The result is the ability to decouple the wearer's motion and attention from those of the visual sensor, and our experience is that this is not only feasible but desirable.

By gaining a degree of independence from the wearer, the visual sensor can work in one of three sets of reference frames. The first set relates to the wearer's body, for example sensing the manipulative space in front of the wearer's chest, or pointing wherever the wearer's head is attending. The second set comprises frames aligned with the static surroundings, for example keeping the camera aligned with the horizon. The third is the set of frames attached to independently moving objects, which is particularly relevant to object tracking.
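The surroundings-aligned set of frames can be made concrete with a small sketch: given the wearer's head orientation, an active pan-tilt camera is commanded to cancel that rotation so its optical axis stays pointed along a fixed world direction (for example, level with the horizon). The function name, axis conventions and angle ordering below are illustrative assumptions, not taken from the thesis hardware.

```python
import math

# Minimal 3x3 rotation helpers (row-major lists of lists).
def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(R):
    return [[R[j][i] for j in range(3)] for i in range(3)]

def apply(R, v):
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]

def pan_tilt_for_world_direction(yaw, pitch, roll, d_world):
    """Pan/tilt commands (radians) that point a head-mounted active camera
    along the fixed world direction d_world, cancelling the wearer's head
    rotation. Convention (assumed here): x forward, y left, z up, with head
    orientation composed Z-Y-X as yaw, then pitch, then roll."""
    R_world_head = matmul(rot_z(yaw), matmul(rot_y(pitch), rot_x(roll)))
    # Express the desired world direction in the head frame: d_head = R^T d_world.
    d_head = apply(transpose(R_world_head), d_world)
    pan = math.atan2(d_head[1], d_head[0])
    tilt = math.atan2(d_head[2], math.hypot(d_head[0], d_head[1]))
    return pan, tilt
```

For instance, if the wearer's head yaws 30 degrees to the left while the camera should keep facing the original forward direction, the computed pan command is -30 degrees: the camera counter-rotates, exactly the decoupling of wearer motion from sensor gaze described above.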
The thesis addresses a number of issues important for visual wearables, such as how to select the sensor's placement around the person and what kind of sensor to use. The design and implementation are developed from objective methodologies, and the hardware is presented in repeatable detail. Several core wearable visual tasks, such as camera stabilisation, object tracking, 3D localisation and mapping, and gesture identification, are considered, alongside discussions of potential applications and future work.