ADEPT: Adaptive Dynamic Ensemble Prediction Techniques

The ADEPT project is funded by EPSRC and run jointly in Bristol (EP/F023871/1) and Manchester (EP/F023855/1). It started in August 2008 and investigates the connections between two sub-fields of machine learning: ensembles and the genetics-based systems known as learning classifier systems. It aims to hybridise the two fields and put them on a common theoretical foundation.
Abstract

Predicting unknown quantities is a fundamental part of science and engineering. For example, in medicine one might wish to predict whether or not a person has a cancerous tumour based on a scan; or in manufacturing, whether an industrial machine is producing faulty devices. The field of Artificial Intelligence has studied many techniques to produce good predictors. The last decade of research has seen the development of "population-based" techniques. Instead of using a single predictor, these build teams of predictors and combine the decisions of the individuals through a voting or averaging process. Both theory and experiments show this reliably improves upon using a single predictor -- as they say, two heads are better than one. A nice feature is that these methods are predictor-independent, meaning they can combine any kind of predictors (e.g. neural networks, decision trees) into a team. This project aims to unify two sub-fields of Artificial Intelligence that deal with these population-based, predictor-independent techniques: Ensemble Methods and Learning Classifier Systems.
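To make the idea of predictor-independent combination concrete, here is a minimal sketch of majority voting over an arbitrary team of predictors. The class and predictor names are illustrative only and are not taken from the ADEPT codebase; any function mapping an input to a yes/no decision can join the team.

```java
import java.util.List;
import java.util.function.Predicate;

// Illustrative sketch: an ensemble is just a list of arbitrary predictors
// whose individual yes/no decisions are combined by counting votes.
public class MajorityVote {
    // Returns true if strictly more than half of the predictors vote true.
    public static <T> boolean predict(List<Predicate<T>> predictors, T input) {
        long votesFor = predictors.stream().filter(p -> p.test(input)).count();
        return votesFor * 2 > predictors.size();
    }

    public static void main(String[] args) {
        // Three toy "predictors" of whether x > 0: two reasonable, one always wrong.
        List<Predicate<Double>> team = List.of(
            x -> x > 0,    // accurate
            x -> x > -1,   // slightly biased
            x -> false     // always votes no
        );
        System.out.println(predict(team, 0.5));  // the two good voters outvote the bad one
    }
}
```

Because the combination rule only looks at the votes, the team members could equally be neural networks or decision trees wrapped behind the same interface.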
Ensemble Methods have produced some of the most powerful predictors of the last decade; the most well-known is called "AdaBoost", and has been dubbed "the best off-the-shelf predictor in the world" (Professor Leo Breiman, University of California at Berkeley). These methods have been widely applied in many areas; however, one important area not yet investigated is "multi-step" problems. These are problems where decisions in the past and present can affect what the best decisions in the future will be --- for example choosing to play a certain opening strategy in chess means certain moves are less favourable later on in the game. Our most difficult multi-step problem will be optimising elevator scheduling to minimise the amount of time between pressing an elevator call button and the arrival of the elevator. It is surprisingly difficult to optimise the movement of elevators in a large building. For one thing, a building with 5 elevators and 30 floors has more possible configurations than there are grains of sand on all the beaches in the world. Most ensemble methods cannot be directly applied to this kind of problem.
Learning Classifier Systems are a class of nature-inspired algorithms that can dynamically generate and adjust sets of predictors, and are capable of tackling these multi-step problems. Traditional ensemble methods have not considered the multi-step domain, but have strong theoretical foundations to build upon. Learning Classifier Systems lack such a strong theoretical base, but have been intensively studied on multi-step problems.
This project will create hybrid methods using theory and practice from these two quite disparate fields. We will advance the state-of-the-art in both fields and increase research capacity for tackling several problem classes, focusing in particular on multi-step problems.
Members in Manchester
- Gavin Brown (Principal Investigator)
- Mingjie Zhao (Research Associate)
- Arjun Chandra (Former Research Associate)
- Richard Stapenhurst (Research Student)
Members in Bristol
- Tim Kovacs (Principal Investigator)
- James Marshall (Former Co-Investigator)
- Nara Edakunni (Research Associate)
External Scientific Auditor
- Jeremy Wyatt, University of Birmingham
Publications
- Narayanan Edakunni, Tim Kovacs, Gavin Brown & James Marshall (2009). Modeling UCS as a Mixture of Experts. Proceedings of the 2009 Genetic and Evolutionary Computation Conference (GECCO'09), pp. 1187-1194, ACM. ISBN 978-1-60558-325-9.
- Narayanan Edakunni & Tim Kovacs (2009). Probabilistic Modeling of UCS: a theoretical study. Technical report CSTR-09-002, University of Bristol.
- James Marshall, Gavin Brown & Tim Kovacs (2007). Bayesian Estimation of Rule Accuracy in UCS. Proceedings of the 2007 GECCO Conference Companion on Genetic and Evolutionary Computation, pp. 2831-2834, ACM. ISBN 978-1-59593-697-4.
- Gavin Brown, Tim Kovacs & James Marshall (2007). UCSpv: principled voting in UCS rule populations. Proceedings of the 2007 Genetic and Evolutionary Computation Conference (GECCO 2007), pp. 1774-1781, ACM. ISBN 978-1-59593-697-4.
- From Heuristics to Statistics: an Overview of the ADEPT Project. Keynote talk at CIDUE 2011 by Gavin Brown.
Software

We have written two implementations of UCS, as described in the original UCS paper, "Accuracy-based learning classifier systems: Models, analysis and applications to classification tasks" by E. Bernado-Mansilla and J. M. Garrell-Guiu, with further details of the algorithm as specified in "UCSpv: principled voting in UCS rule populations" in the list of publications above. Neither of our versions implements subsumption or fitness-sharing.
- UCS in Matlab.
- UCS in Java. This does not implement macroclassifiers as a speed-up but does generate some statistics on them.
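As background on what these implementations compute, the sketch below shows the accuracy-based rule evaluation at the heart of UCS: each rule tracks how often it matched a training example and how often its predicted class was correct, and its fitness is accuracy raised to a power nu. This is a hedged illustration of the published algorithm, not code from either implementation above; the class name and the value of nu are assumptions (nu = 10 is a commonly cited default).

```java
// Illustrative sketch of UCS-style accuracy-based fitness (not taken from
// our Matlab or Java implementations). Each rule counts its experience and
// correct predictions; fitness = accuracy^nu sharpens the gap between
// strong and weak rules when they compete in the population.
public class UcsRuleSketch {
    int experience = 0;  // times this rule matched a training example
    int correct = 0;     // times its predicted class was the true class
    static final double NU = 10.0;  // assumed exponent; 10 is a common default

    // Called once per matched training example.
    void update(boolean predictionWasCorrect) {
        experience++;
        if (predictionWasCorrect) correct++;
    }

    double accuracy() {
        return experience == 0 ? 0.0 : (double) correct / experience;
    }

    double fitness() {
        return Math.pow(accuracy(), NU);
    }
}
```

The full algorithm adds rule discovery via a genetic algorithm and (optionally) subsumption and fitness-sharing, which, as noted above, our versions omit.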