As the performance of commodity computer processors and camera technology has steadily increased, we have reached the point where real-time 3D computer vision has become a reality. In particular, we have recently seen great progress in vision systems able to localise moving cameras and construct representations of the surrounding scene whilst operating at rates well above 25 Hz. Much of this has involved the transfer of Simultaneous Localisation and Mapping (SLAM) techniques from mobile robotics to camera-based systems, which have advantages in terms of sensor simplicity, cost, size and ubiquity. Algorithms for localisation and mapping using a single agile camera now offer the possibility of new applications in augmented reality, wearable computing and flexible robotics.
This tutorial will concentrate on the essential theoretical and practical aspects of high frame-rate monocular SLAM, with an emphasis on systems based around the extended Kalman filter. It will also provide a summary of recent advances and future challenges. The tutorial will be divided into four main sections:
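The extended Kalman filter mentioned above is the estimation backbone of many early monocular SLAM systems: a predict step propagates the state through a motion model, and an update step fuses each image measurement. The following is a minimal, illustrative sketch of these two steps, not any particular system's implementation; the toy one-dimensional example at the end is purely hypothetical (a real monocular SLAM filter stacks the camera pose and all feature positions into one large joint state vector).

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate state x and covariance P through motion model f
    with Jacobian F and process-noise covariance Q."""
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update(x, P, z, h, H, R):
    """Fuse measurement z via measurement model h with Jacobian H
    and measurement-noise covariance R."""
    y = z - h(x)                    # innovation
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy 1D example (hypothetical): static state, directly observed.
x = np.array([0.0]); P = np.array([[1.0]])
F = np.array([[1.0]]); Q = np.array([[0.01]])
H = np.array([[1.0]]); R = np.array([[0.1]])
x, P = ekf_predict(x, P, lambda s: s, F, Q)
x, P = ekf_update(x, P, np.array([1.0]), lambda s: s, H, R)
print(x[0], P[0, 0])  # estimate pulled toward the measurement, uncertainty reduced
```

The quadratic cost of maintaining the full joint covariance is exactly why EKF-based monocular SLAM is limited to modest map sizes at high frame rates, one of the practical trade-offs such a tutorial addresses.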
The content is aimed at researchers and students wanting to gain an insight into the details of visual SLAM. It will include live demonstrations and pointers to free software and other resources. Slides from the tutorial are available below: