AUTOARC: Automated Film Analysis for Indexing, Archiving and Editing
Dealing effectively with very large quantities of film and video material is an increasing problem in the broadcast industry, particularly where archive material is concerned. The industry recognises the growing need to reuse its film assets across many productions, yet the process of cataloguing and reusing footage is highly labour intensive. The archive classifier has the laborious task of viewing many hours of film on a frame-by-frame basis in order to locate or catalogue material. For example, BBC Wildvision employs 18 skilled classifiers to watch hundreds of hours of film footage and produce detailed, hand-written, time-stamped annotations to enable later retrieval. Retrieval by companies wishing to reuse footage in new productions is done by posted tapes and does not take advantage of on-line access. Retrieval and manipulation of film footage is also an integral part of the editing process for new productions. There is therefore an urgent need for automated approaches to these tasks in order to exploit valuable material that is currently underused.
The aim of the project is to develop an automated indexing system that provides an integrated environment for the complete process of indexing, editing and archiving. It will use computer vision and motion analysis techniques to automatically extract a hierarchical, content-based description of video sequences, and use this as a common basis for a suite of editing and archiving tools. It will be designed to be compatible with the existing archive database and will use the existing archive data to inform the new descriptions. The hierarchical description will be extracted using a combination of scene partitioning, object recognition and motion analysis techniques, and will provide the user with a rapid visualisation of film content in the form of a story board indicating major scene breaks, representative key frames and camera motions (zooming and panning). At its highest level the description will consist of representative frames from component scenes, which act as links to further content and structural detail at lower levels of the description.
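To illustrate the scene-partitioning step, the sketch below detects hard cuts by differencing intensity histograms of consecutive frames; this is one plausible technique, not necessarily the project's actual method, and the function names and threshold are illustrative assumptions.

```python
# Minimal sketch of shot-boundary detection via frame-to-frame
# histogram differencing. Frames are given as flat lists of
# pixel intensities in [0, 256); real footage would be decoded
# with a video library first.

def histogram(frame, bins=16):
    """Normalised intensity histogram of one frame."""
    hist = [0] * bins
    for pixel in frame:
        hist[pixel * bins // 256] += 1
    total = len(frame)
    return [count / total for count in hist]

def find_cuts(frames, threshold=0.5):
    """Return frame indices where a hard cut is likely: the L1
    distance between consecutive histograms exceeds the threshold."""
    cuts = []
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        curr = histogram(frames[i])
        if sum(abs(a - b) for a, b in zip(prev, curr)) > threshold:
            cuts.append(i)
        prev = curr
    return cuts

# Synthetic example: three dark frames followed by three bright ones
# simulate a single abrupt scene change.
frames = [[20] * 100] * 3 + [[220] * 100] * 3
print(find_cuts(frames))  # -> [3]
```

The frame index of each detected cut, together with a representative frame chosen from each resulting shot, would give the top level of the story-board description.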
The description will also form the basis for semi-automating the indexing and archiving process. It will identify and link similar scenes, reducing the bulk of the labelling task to describing unique content. It will produce a meta-data description of the footage in terms of the film's structure, extracted image features, textual annotations, closed-caption information and time codes, which can then be stored along with the footage to facilitate retrieval.
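A per-shot meta-data record of the kind described above might look like the following sketch; the field names and timecode format are assumptions for illustration, not the project's actual schema.

```python
# Illustrative per-shot meta-data record combining structural,
# visual and textual information for later retrieval.
from dataclasses import dataclass, field, asdict

@dataclass
class ShotRecord:
    start_timecode: str          # e.g. "HH:MM:SS:FF"
    end_timecode: str
    key_frame: int               # frame number of the representative frame
    camera_motion: str           # e.g. "static", "pan-left", "zoom-in"
    annotations: list = field(default_factory=list)  # textual labels
    closed_captions: str = ""    # dialogue taken from closed captions

shot = ShotRecord("00:01:10:00", "00:01:14:12", 1702, "pan-left",
                  ["lion", "savannah"], "A lioness stalks her prey.")
print(asdict(shot)["camera_motion"])  # -> pan-left
```

Storing such records alongside the footage lets retrieval queries match on structure (time codes, camera motion) as well as on textual annotations.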
By addressing the issues in both film editing and archiving simultaneously, the system will provide a seamless interface between the raw and archived material, assisting in the entire film production process.
Staff and Students
Barry Thomas, Andrew Calway, Neill Campbell, Sarah Porter

