Conference Programme

Decisions Available (CMT)
List of Accepted Papers

Conference Registration (early rate until 20 July 2013; author deadline 25 July 2013)

Camera-ready Guidelines (submission by 25 July 2013)

BMVC 2013 STATISTICS:
  439 submissions
  30% accept rate
  7% oral accept rate

BMVC 2013 - Keynote Speakers

Opening Keynote by Andrew Zisserman:
Towards On-the-fly Large Scale Video Search


Abstract. We would like to be able to find anything in an image or video dataset. The talk will describe our progress on visual search for finding people, specific objects and categories in large scale video datasets. The novelty is that the item of interest can be specified at run time by a text query, and a discriminative classifier for that item is then learnt on-the-fly using images downloaded from Google Image search.

We will compare state-of-the-art encoding methods for the problem, and discuss the choices involved in achieving the best trade-off between three important performance measures for a real-time system of this kind, namely: (i) accuracy, (ii) memory footprint, and (iii) speed. We will also describe steps towards achieving 'total recall'.
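The on-the-fly pipeline described in the abstract (text query, downloaded exemplars, discriminative classifier, ranked dataset) can be sketched in a few lines. This is a toy illustration, not the authors' system: synthetic Gaussian vectors stand in for encoded image descriptors, and a simple logistic-regression scorer stands in for the linear classifier learnt at query time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for encoded image descriptors (e.g. Fisher vectors).
# "positives" play the role of images downloaded from Google Image search
# for the text query; "negative_pool" is a fixed pool of background frames.
D = 64
positives = rng.normal(loc=1.0, scale=1.0, size=(20, D))
negative_pool = rng.normal(loc=-1.0, scale=1.0, size=(200, D))

def train_on_the_fly(pos, neg, epochs=200, lr=0.1):
    """Learn a linear scorer w by logistic regression (gradient descent)."""
    X = np.vstack([pos, neg])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30.0, 30.0)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

w = train_on_the_fly(positives, negative_pool)

# Rank "video frames" by classifier score: frames 0-4 are relevant to the
# query, the remaining 95 are background; the relevant ones should rank first.
frames = np.vstack([rng.normal(1.0, 1.0, size=(5, D)),
                    rng.normal(-1.0, 1.0, size=(95, D))])
ranking = np.argsort(-(frames @ w))
print(ranking[:5])  # indices of the five top-scoring frames
```

Memory footprint and speed, two of the trade-offs mentioned above, would in practice be governed by the choice of descriptor encoding and by how compactly the dataset descriptors are stored, which this sketch ignores.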

There will be demonstrations on a large scale video dataset of BBC broadcasts.

This is joint work with Relja Arandjelovic, Ken Chatfield and Omkar Parkhi.
Recordings of the talk are at VideoLectures.net.

Bio. Professor Andrew Zisserman leads the Visual Geometry Group at the University of Oxford, UK. Andrew's research interests include visual recognition, image retrieval, multi-view geometry, and other aspects of computer vision. Some of Andrew's papers are amongst the most highly cited works in the field. His contributions have received multiple awards at the top computer vision conferences, including three Marr Prizes at the International Conference on Computer Vision. He has published several books, including "Visual Reconstruction" (with Andrew Blake) and "Multiple View Geometry in Computer Vision" (with Richard Hartley). He is a Fellow of the Royal Society.

 



Keynote by Frank Dellaert:
Factor Graphs for Fast and Scalable 3D Reconstruction and Mapping

Abstract. Simultaneous Localization and Mapping (SLAM) and Structure from Motion (SFM) are important and closely related problems in robotics and vision. I will show how both SLAM and SFM instances can be posed in terms of a graphical model, a factor graph, and that inference in these graphs can be understood as variable elimination. The overarching theme of the talk will be to emphasize the advantages and intuition that come with seeing these problems in terms of graphical models. For example, common computational tricks, such as the Schur complement trick in SFM, are simply choices about the order in which to eliminate the graph. In addition, while the graphical model perspective is completely general, linearizing the non-linear factors and assuming Gaussian noise yields the familiar direct linear solvers, such as Cholesky and QR factorization.

Based on these insights, we have developed both batch and incremental algorithms defined on graphs in the SLAM/SFM domain. In addition to direct methods, we have recently worked on efficient iterative methods that use subgraphs of these factor graphs as preconditioners in a conjugate gradient scheme. Finally, we are now looking into how optimal control can be seamlessly integrated with the estimation algorithms for use in autonomous vehicles.
Recordings of the talk are at VideoLectures.net.
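The point that eliminating a linearized Gaussian factor graph amounts to Cholesky factorization of the information matrix can be made concrete with a minimal sketch. This is a toy example under stated assumptions (1-D poses, unit-noise factors), using dense NumPy in place of GTSAM's sparse elimination.

```python
import numpy as np

# A minimal linear(ized) factor graph: three 1-D poses x0, x1, x2 with
# a prior on x0 and odometry factors between consecutive poses.
# Each Gaussian factor contributes one row of the whitened Jacobian A
# and one entry of the residual vector b, so MAP inference is the
# least-squares problem  min ||A x - b||^2.
A = np.array([
    [ 1.0,  0.0, 0.0],   # prior:    x0      = 0
    [-1.0,  1.0, 0.0],   # odometry: x1 - x0 = 1
    [ 0.0, -1.0, 1.0],   # odometry: x2 - x1 = 1
])
b = np.array([0.0, 1.0, 1.0])

# Eliminating the graph variable-by-variable corresponds to a (sparse)
# Cholesky factorization of the information matrix A^T A.
info = A.T @ A
L = np.linalg.cholesky(info)      # info = L L^T
y = np.linalg.solve(L, A.T @ b)   # forward substitution
x = np.linalg.solve(L.T, y)       # back substitution
print(x)  # MAP estimate of the three poses
```

In a real SLAM/SFM instance the elimination (column) ordering chosen here is what the abstract's Schur complement remark refers to: eliminating all points before all cameras reproduces the Schur complement trick.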


Bio. Frank Dellaert is a Professor in the School of Interactive Computing at the Georgia Institute of Technology. His research is in the areas of robotics and computer vision. He is particularly interested in graphical model techniques for solving large-scale problems in mapping and 3D reconstruction. You can find out about his research and publications at http://www.cc.gatech.edu/~dellaert. The GTSAM toolbox, which embodies many of the ideas his group has worked on in the past few years, is available for download at http://tinyurl.com/gtsam

BMVC2013 Homepage