
Andrew Gee


Page last updated: 4th January 2013




Biographical Information

I was a research associate in the Visual Information Laboratory at the University of Bristol until January 2013. From June 2010 until January 2013, I worked on the EU-FP7 Cognito project with Andrew Calway, Walterio Mayol-Cuevas and Dima Damen. Prior to this, I spent two years as a research assistant on the UK TSB ViewNet project and studied for a PhD under the supervision of Walterio Mayol-Cuevas, funded by the UK EPSRC Equator IRC.

Research Interests

My research interest is in real-time visual simultaneous localisation and mapping (SLAM) for augmented reality (AR). In particular, my PhD thesis, Incorporating Higher Level Structure in Visual SLAM, investigates the incorporation of higher level structures, such as planes, into a Kalman filter SLAM system as a way of creating physically meaningful maps and collapsing the state size by removing redundancy.
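
To make the augmentation step concrete, the following is a minimal sketch, assuming a generic EKF-style SLAM back end with NumPy; the function names, the plane parameterisation and the initialisation noise model are illustrative assumptions, not the thesis implementation.

    # Minimal sketch of augmenting a newly discovered plane into an EKF
    # SLAM state while keeping the joint covariance consistent.
    import numpy as np

    def augment_plane(x, P, g, G_x, Q):
        """x, P : current state vector and covariance (camera + features)
        g    : function mapping the state to initial plane parameters
        G_x  : Jacobian of g with respect to x, evaluated at x
        Q    : covariance of the independent initialisation noise"""
        p = g(x)                              # e.g. unit normal and offset
        x_aug = np.concatenate([x, p])
        # Correlate the new block with the existing state so that later
        # plane updates feed back into the camera and feature estimates.
        P_aug = np.block([
            [P,       P @ G_x.T],
            [G_x @ P, G_x @ P @ G_x.T + Q],
        ])
        return x_aug, P_aug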

Teaching

Augmented Reality Tutorial

Augmented reality (AR) is an active area of research that provides a meeting point for many interesting technologies and is predicted to see rapid commercial growth this decade, thanks to the proliferation of smartphones and tablet computers. These tutorial sessions will introduce the topic of augmented reality through a mixture of lectures, discussions and demos. As a rough outline, the sessions will cover the following areas:

PhD Thesis

Incorporating Higher Level Structure in Visual SLAM
Andrew P. Gee
University of Bristol, May 2010

Incorporating higher level structure into the map enhances its value for tasks involving interaction with the real world and provides a simplified representation that enforces implicit constraints between the features. The main aim of this thesis is to address the problem of discovering and incorporating higher level structure concurrently with normal SLAM operation, in a way that preserves the statistical consistency and accuracy of the system and advances the possibilities of meaningful interaction with the map.
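
The state-size saving can be made concrete with a back-of-envelope count; the parameter counts below (a plane as 3 parameters, a 3D point as 3, an in-plane point as 2) are illustrative assumptions, not figures from the thesis.

    # Representing n coplanar points directly costs 3n state entries;
    # representing them as one plane plus 2D in-plane coordinates costs
    # 3 + 2n, a saving that grows with the number of points.
    def state_size(n_points, use_plane):
        return 3 + 2 * n_points if use_plane else 3 * n_points

    for n in (10, 50, 100):
        print(n, state_size(n, False), state_size(n, True))
        # 10 -> 30 vs 23, 50 -> 150 vs 103, 100 -> 300 vs 203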


Selected Publications

6D Relocalisation for RGBD Cameras Using Synthetic View Regression
Andrew P. Gee and Walterio Mayol-Cuevas
In: British Machine Vision Conference (BMVC), September 2012 (Poster)

This work evaluates relocalisation methods specifically for RGBD cameras, tested in small workspace scenes that contain moving objects and/or minimal texture. Two existing local feature-based methods are considered, and a new method is proposed that uses synthetically generated views within a regression framework and is capable of estimating 6D camera pose at frame rate. We also show some results of novelty detection for objects that were not in the original map, and some results towards the goal of constant relocalisation as an alternative to conventional camera tracking.

Update: We have added results from a new dataset to the videos and poster. The new dataset is recorded using a head-mounted camera during a tea-making task in a kitchen environment. It contains rapid rotational movements, motion blur and occlusions. We also use a median rather than a mean in the regression kernel to provide improved occlusion robustness.
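
As a rough sketch of the regression idea, assuming a precomputed set of synthetic views with known poses and some whole-image descriptor (the descriptor, the kernel bandwidth and the naive quaternion blending are all illustrative assumptions, not the paper's formulation):

    # Kernel-weighted pose regression over synthetic views.
    import numpy as np

    def regress_pose(query_desc, view_descs, view_poses, sigma=0.1):
        """query_desc : (d,) descriptor of the live frame
        view_descs : (n, d) descriptors of the synthetic views
        view_poses : (n, 7) poses as [tx, ty, tz, qw, qx, qy, qz],
                     quaternions assumed sign-aligned to one hemisphere"""
        d2 = np.sum((view_descs - query_desc) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))   # similarity kernel
        w /= w.sum()
        t = w @ view_poses[:, :3]              # weighted translation
        q = w @ view_poses[:, 3:]              # naive quaternion blend,
        q /= np.linalg.norm(q)                 # renormalised to unit length
        return np.concatenate([t, q])

The median variant mentioned in the update would correspond to replacing the weighted means here with weighted medians, so that a few occluded or mismatched views cannot drag the estimate away.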

Videos | Extended Abstract | Poster


A Topometric System for Wide Area Augmented Reality
Andrew P. Gee, P.J. Escamilla-Ambrosio, Matthew Webb, Walterio Mayol-Cuevas and Andrew Calway
In: Computers & Graphics, vol.35, no.4, pp.854-868, August 2011

This work describes the ViewNet system, which is designed to facilitate efficient communication of information relating to the physical world using Augmented Reality (AR). It combines a range of technologies to create a system capable of operating in real time, over wide areas, both indoors and outdoors. The central concept is to integrate localised mapping and tracking based on real-time visual SLAM with global positioning from both GPS and indoor ultra-wide band (UWB) technology. The key elements are: robust and efficient vision-based tracking and mapping using a Kalman filter framework; rapid and reliable vision-based relocalisation of users within local maps; user interaction mechanisms for effective annotation insertion; and an integrated framework for managing and fusing mapping and positioning data. We present the results of experiments conducted over a wide area, with indoor and outdoor operation.
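
As an illustrative sketch of the fusion idea (not the ViewNet implementation), a global position fix from GPS or UWB can be folded into a locally estimated position with a standard Kalman update; the state layout and noise models here are assumptions.

    # Fusing a global position fix into a SLAM-derived position estimate.
    import numpy as np

    def fuse_global_fix(x, P, z, R):
        """x, P : 3D position estimate and covariance from local SLAM
        z, R : global position fix (GPS/UWB) and its covariance"""
        H = np.eye(3)                       # the fix observes position directly
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x_new = x + K @ (z - H @ x)
        P_new = (np.eye(3) - K @ H) @ P
        return x_new, P_new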


Discovering Higher Level Structure in Visual SLAM
Andrew P. Gee, Denis Chekhlov, Andrew Calway and Walterio Mayol-Cuevas
In: IEEE Transactions on Robotics, vol.24, no.5, pp.980-990, October 2008

This work extends the results presented in our previous paper and presents a visual SLAM system in which planes and lines are embedded within the state to represent structure in the scene. This collapses the state size, reducing computation and improving scalability, as well as giving a higher level scene description. Critically, the structure parameters are augmented into the SLAM state in a principled fashion, maintaining inherent uncertainties via a full covariance representation.
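
To illustrate the redundancy removal, here is a minimal sketch of collapsing a 3D point onto a discovered plane, replacing three state entries with two; the plane representation (unit normal n with offset d, so n·x = d) and the basis construction are illustrative assumptions, not the paper's parameterisation.

    # Project a point onto a plane and express it in 2D in-plane coordinates.
    import numpy as np

    def collapse_to_plane(point, n, d):
        n = n / np.linalg.norm(n)
        proj = point - (np.dot(n, point) - d) * n    # orthogonal projection
        # Build an orthonormal basis (u, v) spanning the plane.
        u = np.cross(n, [0.0, 0.0, 1.0])
        if np.linalg.norm(u) < 1e-6:                 # n parallel to the z-axis
            u = np.cross(n, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(n, u)
        anchor = d * n                               # plane point closest to origin
        return np.array([np.dot(proj - anchor, u), np.dot(proj - anchor, v)])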

Video: Plane Simulation (small map) | Video: Line Simulation (small map) | Video: Plane Simulation (room with four walls) | Video: Real Planes


Ninja on a Plane: Automatic Discovery of Physical Planes for Augmented Reality Using Visual SLAM
Denis Chekhlov, Andrew P. Gee, Andrew Calway and Walterio Mayol-Cuevas
In: International Symposium on Mixed and Augmented Reality (ISMAR), November 2007 (Short Paper, Oral Presentation)

This work presents a game in which real objects with planar surfaces are added to an AR environment in real time, enabling an AR agent to navigate through the scene. See this paper for full details of the plane discovery techniques used.

Video: Demo



Real-Time Model-Based SLAM Using Line Segments
Andrew P. Gee and Walterio Mayol-Cuevas
In: International Symposium on Visual Computing (ISVC), November 2006 (Oral Presentation)

This work develops a monocular real-time SLAM system that uses line segments extracted on the fly and that builds a wire-frame model of the scene to help tracking. The use of line segments provides viewpoint invariance and robustness to partial occlusion, whilst the model-based tracking is fast and efficient, reducing problems associated with feature matching and extraction.
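
As a rough sketch of the measurement front end, line segments can be pulled from each frame with an off-the-shelf detector; OpenCV's Canny edges plus probabilistic Hough transform stand in here for the paper's own extraction method, and the thresholds are illustrative.

    # Extract candidate line segments from a grayscale frame.
    import cv2
    import numpy as np

    def extract_segments(frame_gray):
        edges = cv2.Canny(frame_gray, 50, 150)
        segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                   threshold=60, minLineLength=30, maxLineGap=5)
        # Each row is (x1, y1, x2, y2) in pixel coordinates.
        return [] if segments is None else segments[:, 0, :]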

Video: Planar scene | Video: 3D scene