Efficient first-order probabilistic models for inference and learning

A probabilistic model is any formalism for specifying a complex probability distribution. Such formalisms facilitate the handling of uncertainty and evidential reasoning in artificial intelligence. Most current probabilistic models restrict their variables to simple Boolean propositions, discrete attributes, or numbers. The goal of this project was to enhance these models with the power of first-order logic, which enables the variables to range over complex structured objects such as molecules or websites. The project has proposed several new methods for specifying such models, reasoning with them, and learning them from data. The approach uses the individual-centred representations that are a central topic of study in recent work on machine learning and inductive logic programming. Possible application domains include molecular biology, drug design, information retrieval on the web, and user modelling. The utility of first-order probabilistic models has been experimentally validated in several of these domains.
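To make the individual-centred idea concrete, the following is a minimal sketch (not the project's actual formalism) of a probabilistic model whose evidence variables range over structured individuals, here molecules represented as bags of atom types rather than fixed-length attribute vectors. All names and numbers in it are hypothetical.

```python
from collections import Counter

# Hypothetical training data: each individual is a (class, bag-of-atoms) pair.
data = [
    ("active",   ["c", "c", "o", "n"]),
    ("active",   ["c", "o", "o"]),
    ("inactive", ["c", "c", "c", "h"]),
    ("inactive", ["c", "h", "h"]),
]

def fit(data, alpha=1.0):
    """Estimate P(class) and P(atom | class) with Laplace smoothing."""
    class_counts = Counter(cls for cls, _ in data)
    atom_counts = {cls: Counter() for cls in class_counts}
    for cls, atoms in data:
        atom_counts[cls].update(atoms)
    vocab = {a for _, atoms in data for a in atoms}
    priors = {cls: n / len(data) for cls, n in class_counts.items()}
    likelihoods = {
        cls: {a: (atom_counts[cls][a] + alpha) /
                 (sum(atom_counts[cls].values()) + alpha * len(vocab))
              for a in vocab}
        for cls in class_counts
    }
    return priors, likelihoods

def predict(priors, likelihoods, atoms):
    """Return the most probable class for a new structured individual."""
    scores = {}
    for cls, prior in priors.items():
        p = prior
        for a in atoms:
            p *= likelihoods[cls].get(a, 1e-6)  # small default for unseen atoms
        scores[cls] = p
    return max(scores, key=scores.get)

priors, likelihoods = fit(data)
print(predict(priors, likelihoods, ["c", "o", "n"]))  # e.g. "active"
```

The point of the sketch is only that the model is defined over individuals of variable size and structure; the methods developed in the project go well beyond this naive-Bayes-style treatment.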

Staff and Students

Peter Flach, Elias Gyftodimos.

Publications

Support

This research was supported by EPSRC research grant GR/N07394.