Generic motion-based object segmentation for assisted navigation
In the CASBliP project, a robust approach is proposed for annotating independently moving objects (IMOs) captured by head-mounted stereo cameras worn by an ambulatory (and visually impaired) user. First, sparse optical flow is extracted from a single image stream, in tandem with dense depth maps. Then, on the assumption that the apparent motion generated by camera egomotion is dominant, flow corresponding to IMOs is robustly segmented using MLESAC. Next, the mode depth of the feature points defining this flow (the foreground) is obtained by aligning them with the depth maps. Finally, a bounding box is scaled in proportion to this mode depth and robustly fitted to the foreground points so that the number of inliers is maximised. The system runs at around 8 fps and has been tested by visually impaired volunteers.
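The segmentation and mode-depth steps above can be sketched roughly as follows. This is a minimal NumPy illustration, not the CASBliP implementation: the global affine egomotion model, the mixture parameters (`sigma`, `nu`, `gamma`), and the 2.5-sigma labelling threshold are all illustrative assumptions.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine motion, dst ~ src @ A.T + t, from >= 3 matches."""
    n = len(src)
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src; M[0::2, 2] = 1.0
    M[1::2, 3:5] = src; M[1::2, 5] = 1.0
    x, *_ = np.linalg.lstsq(M, dst.reshape(-1), rcond=None)
    A = np.array([[x[0], x[1]], [x[3], x[4]]])
    t = np.array([x[2], x[5]])
    return A, t

def mlesac_segment(src, dst, iters=200, sigma=1.0, nu=100.0, gamma=0.5, seed=0):
    """Label flow vectors that disagree with the dominant (egomotion) model.

    MLESAC scores each sampled hypothesis by the negative log-likelihood of a
    Gaussian-inlier / uniform-outlier mixture rather than by an inlier count.
    Returns a boolean mask: True = candidate independently moving object.
    """
    rng = np.random.default_rng(seed)
    best_cost, best_model = np.inf, None
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)  # minimal sample
        A, t = fit_affine(src[idx], dst[idx])
        r = np.linalg.norm(dst - (src @ A.T + t), axis=1)  # residual flow
        lik = (gamma * np.exp(-r**2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)
               + (1 - gamma) / nu)
        cost = -np.log(lik).sum()
        if cost < best_cost:
            best_cost, best_model = cost, (A, t)
    A, t = best_model
    r = np.linalg.norm(dst - (src @ A.T + t), axis=1)
    return r > 2.5 * sigma  # residuals far from the egomotion model

def mode_depth(depths, bin_width=0.25):
    """Histogram mode of the foreground points' depths (bin width assumed in metres)."""
    edges = np.arange(depths.min(), depths.max() + bin_width, bin_width)
    hist, edges = np.histogram(depths, bins=edges)
    i = int(np.argmax(hist))
    return 0.5 * (edges[i] + edges[i + 1])
```

Given the mask, the depths of the flagged feature points are looked up in the depth map and `mode_depth` gives the representative foreground depth used to scale the bounding box.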
For more information, see CASBliP - Computer Aided System for the Blind.
Computer Vision Group
Dept of Computer Science,
University of Bristol
For more information about our work or opportunities to join or visit the group, email vision at cs.bris.ac.uk