I am a PhD student in the Visual Information Laboratory at the University of Bristol's Merchant Venturers School of Engineering, supervised by Professor Majid Mirmehdi. I am presently engaged in research on learning visual semantic attributes, such as colours and textures for which we have names, without using hand-labelled training data (see below). Before beginning my PhD, I did a Masters degree in Computer Science. My first degree was an MSci in Mathematics, also at the University of Bristol.
My research interests include computer vision and machine learning.
As part of my research into weakly supervised learning of visual semantic attributes, I required a clustering algorithm with certain properties. I propose to first cluster the features extracted from images, before performing weakly supervised learning using the clusters. This way the volume of data will be greatly reduced, allowing many more images to be used for learning.
To this end, we developed QUAC (Quick Unsupervised Anisotropic Clustering). The algorithm works by finding elliptical (or hyper-elliptical) clusters one at a time, removing the data corresponding to each cluster after it is found. It has several advantages over other clustering algorithms.
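A minimal sketch of the one-cluster-at-a-time idea is below. This is an illustrative simplification, not the actual QUAC implementation: here each ellipse is simply seeded from the mean and covariance of the remaining data, whereas a real algorithm would locate each cluster much more carefully.

```python
import numpy as np

def elliptical_clusters(X, n_clusters, mahal_thresh=2.0):
    """Toy one-at-a-time elliptical clustering (NOT the QUAC algorithm).

    Repeatedly: fit an ellipse (mean + covariance) to the remaining
    points, claim every point within a Mahalanobis-distance threshold
    of its centre, remove them, and continue with what is left.
    """
    remaining = X.copy()
    clusters = []
    for _ in range(n_clusters):
        # need more points than dimensions for a usable covariance
        if len(remaining) <= X.shape[1]:
            break
        mu = remaining.mean(axis=0)
        cov = np.cov(remaining, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        inv = np.linalg.inv(cov)
        diff = remaining - mu
        # squared Mahalanobis distance of each point to the ellipse centre
        d2 = np.einsum('ij,jk,ik->i', diff, inv, diff)
        inside = d2 <= mahal_thresh ** 2
        clusters.append(remaining[inside])
        remaining = remaining[~inside]
    return clusters, remaining
```

Removing each cluster's points before searching for the next one is what keeps the data volume shrinking as the algorithm proceeds.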
Source code for QUAC is available on the downloads page.
There are already billions of images available on the internet, with many more being added every day. Services such as Google Images use the text which appears near images in webpages to allow them to be searched. They say an image is worth a thousand words, but many images have only a handful of terms associated with them, and usually only in a single language. My work is focussed on using Google image search to train models of visual semantic attributes such as colours ("red", "ruby", "vermilion"), patterns ("stripy", "chequered"), and materials ("leopard skin", "wood"), using only the results of Google Images-style searches, with a minimum of human interaction. This will give unbiased models that can be trained without the time and expense of having people select, segment, and annotate images. However, when searching for images corresponding to a visual attribute term, it cannot be guaranteed that every image will contain the attribute, or that those which do will contain only that attribute. The challenge is to find what a set of images has in common, while ignoring the multitude of other visual information.
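To make the "what do these images have in common" challenge concrete, here is one toy strategy (purely hypothetical, not the method used in this work): compare how often quantised visual features occur in the search results for an attribute term against their frequency in a generic background pool of images.

```python
import numpy as np

def attribute_feature_scores(pos_counts, bg_counts, eps=1e-9):
    """Toy log-ratio scoring of quantised visual features.

    pos_counts: how often each feature (e.g. a colour/texture codeword)
                appears in images returned for an attribute term.
    bg_counts:  the same counts over a generic background image pool.

    Features over-represented in the search results score high;
    ordinary background clutter scores near zero or below. This is an
    illustrative sketch only, not the model actually trained.
    """
    pos = pos_counts / pos_counts.sum()
    bg = bg_counts / bg_counts.sum()
    return np.log((pos + eps) / (bg + eps))
```

For example, a codeword ten times more frequent in results for "red" than in general images would get a strongly positive score, even though each individual search result also contains plenty of unrelated visual content.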
Studies have shown that many accidents on motorways are caused by drivers falling asleep at the wheel. The aim of this project was to develop a system capable of detecting and tracking the lanes of motorways and A roads, using video data from a camera mounted on the front of a vehicle. This would then allow early warning of accidental lane departure, and thus help to improve road safety.
See example videos of the system working here.