Recent advances in wearable sensing allow active control of the orientation of a body-mounted camera worn by a remote user. In this paper we consider controlling the active camera from head movements. In the context of teleoperation, these may be the head movements of a remote operator, perhaps acting as the wearer's assistant. These movements are likely to be larger than those in video-conferencing applications, so frontal facial features alone are insufficient. The paper presents a model which incrementally combines a fixed 3D shape model with specific features found on the observed head. Robust methods, including the incorporation of a colour model, are used to mitigate the effect of feature mismatches; the main contribution is the use of both interest-point and colour features within a single random-sampling framework.
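
As a rough illustration of the kind of random-sampling framework described above (a minimal sketch, not the paper's actual algorithm), the code below scores head-pose hypotheses using two cues jointly: an interest-point reprojection test and a simple colour-consistency test. The function names, camera parameters, tolerances, and the crude translation-only hypothesis step are all illustrative assumptions; a real system would solve the pose from the sampled correspondences (e.g. with a P3P solver) and use a richer colour model.

```python
# Minimal RANSAC-style sketch: hypothesise a head pose from a random sample of
# correspondences, then count inliers that agree with the hypothesis under BOTH
# an interest-point (reprojection) cue and a colour cue. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

FOCAL = 500.0                 # assumed pinhole focal length (pixels)
CENTRE = np.array([320.0, 240.0])  # assumed principal point


def project(points_3d, rotation, translation):
    """Project 3D head-model points into the image with a pinhole camera."""
    cam = points_3d @ rotation.T + translation      # model frame -> camera frame
    return FOCAL * cam[:, :2] / cam[:, 2:3] + CENTRE


def hypothesise_pose(model_sample, image_sample, depth=1000.0):
    """Crude pose hypothesis: identity rotation, fixed depth, translation solved
    from the sampled correspondences (a stand-in for a proper minimal solver)."""
    z = model_sample[:, 2] + depth
    tx = np.mean((image_sample[:, 0] - CENTRE[0]) * z / FOCAL - model_sample[:, 0])
    ty = np.mean((image_sample[:, 1] - CENTRE[1]) * z / FOCAL - model_sample[:, 1])
    return np.eye(3), np.array([tx, ty, depth])


def count_inliers(model_pts, image_pts, image_colours, model_colours,
                  rotation, translation, px_tol=4.0, colour_tol=30.0):
    """A correspondence supports the pose only if it passes both cues."""
    reproj = project(model_pts, rotation, translation)
    geom_ok = np.linalg.norm(reproj - image_pts, axis=1) < px_tol          # interest-point cue
    colr_ok = np.linalg.norm(image_colours - model_colours, axis=1) < colour_tol  # colour cue
    return np.count_nonzero(geom_ok & colr_ok)


def ransac_pose(model_pts, image_pts, image_colours, model_colours, iters=200):
    """Randomly sample minimal subsets, hypothesise a pose, keep the best-supported one."""
    best_support, best_pose = -1, None
    n = len(model_pts)
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)
        rotation, translation = hypothesise_pose(model_pts[idx], image_pts[idx])
        support = count_inliers(model_pts, image_pts, image_colours,
                                model_colours, rotation, translation)
        if support > best_support:
            best_support, best_pose = support, (rotation, translation)
    return best_pose, best_support


if __name__ == "__main__":
    # Synthetic demo: a small rigid "head" point cloud observed with noise.
    model_pts = rng.uniform(-80, 80, size=(40, 3))
    true_t = np.array([15.0, -10.0, 1000.0])
    image_pts = project(model_pts, np.eye(3), true_t) + rng.normal(0, 1.0, (40, 2))
    model_colours = rng.uniform(0, 255, size=(40, 3))
    image_colours = model_colours + rng.normal(0, 5.0, (40, 3))  # mostly consistent colours
    (_, t_est), support = ransac_pose(model_pts, image_pts, image_colours, model_colours)
    print("estimated translation:", t_est, "supporting correspondences:", support)
```

The point of the sketch is simply that geometric and colour evidence are evaluated inside the same sampling loop, so a hypothesis survives only if both cues agree; the specific solvers and thresholds above are placeholders.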