This page contains pointers to material that is believed to be relevant for the topic of the workshop. This material is partly selected by the organizers, and partly suggested by people addressed by the organizers. Please contact the organizers if you feel any relevant material is missing.
Inductive Logic Programming (ILP) is often situated as a research area emerging at the intersection of Machine Learning and Logic Programming (LP). This paper clarifies the link between ILP and LP, in particular between ILP and Abductive Logic Programming (ALP), i.e., LP extended with abductive reasoning. We formulate a generic framework for handling incomplete knowledge that can be instantiated to both ALP and ILP approaches, thereby shedding more light on the relationship between abduction and induction. As an example we consider the abductive procedure SLDNFA and modify it into an inductive procedure, which we call SLDNFAI.
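The abductive side of this setting can be illustrated with a minimal sketch (the rules and atoms below are illustrative examples, not taken from the paper): given a background theory of definite clauses and a set of abducible atoms, abduction searches for minimal sets of assumptions that entail an observation.

```python
from itertools import chain, combinations

# Background theory as definite clauses: (head, body), body a tuple of atoms.
# These example rules are hypothetical, chosen only to illustrate abduction.
rules = [
    ("wet_grass", ("rained",)),
    ("wet_grass", ("sprinkler_on",)),
]
abducibles = {"rained", "sprinkler_on"}  # atoms we are allowed to assume

def derives(facts):
    """Forward-chain the rules from a set of assumed facts."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return known

def explanations(obs):
    """Subset-minimal sets of abducibles that entail the observation."""
    found = []
    subsets = chain.from_iterable(
        combinations(sorted(abducibles), r) for r in range(len(abducibles) + 1)
    )
    for subset in subsets:  # enumerated smallest-first, so minimality is easy
        if obs in derives(subset):
            if not any(set(e) <= set(subset) for e in found):
                found.append(subset)
    return found

print(explanations("wet_grass"))  # [('rained',), ('sprinkler_on',)]
```

Here either assumption alone explains the observation; procedures such as SLDNFA compute analogous explanations by resolution rather than by the brute-force enumeration used in this sketch.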
Many abductive understanding systems generate explanations by a backwards chaining process that is neutral both to the explainer's previous experience in similar situations and to why the explainer is attempting to explain. This article examines the relationship of such models to an approach that uses case-based reasoning to generate explanations. In this case-based model, the generation of abductive explanations is focused by prior experience and by goal-based criteria reflecting current information needs. The article analyzes the commitments and contributions of this case-based model as applied to the task of building good explanations of anomalous events in everyday understanding. The article identifies six central issues for abductive explanation, compares how these issues are addressed in traditional and case-based explanation models, and discusses benefits of the case-based approach for facilitating generation of plausible and useful explanations in domains that are complex and imperfectly understood.
Abduction is inference to the best explanation, a pattern of reasoning that occurs in such diverse places as medical diagnosis, scientific theory formation, accident investigation, language understanding, and jury deliberation. This book breaks new ground in the scientific, philosophical, and technological study of abduction. It presents new ideas about the inferential and information-processing foundations of knowledge and certainty. It argues that knowledge arises from experience by processes of abductive inference, in contrast with the view that knowledge arises noninferentially, or that deduction and inductive generalization are sufficient to account for knowledge.
This book reports key discoveries about abduction that were made as a result of designing, building, testing, and analyzing knowledge-based systems for medical diagnosis and other abductive tasks. These systems demonstrate that abductive inference can be described precisely enough to achieve good performance, even though this description lies largely outside the classical formal frameworks of mathematical logic and probability theory.
The book tells the story of six generations of increasingly sophisticated generic abduction machines and the discovery of reasoning strategies that make it computationally feasible to form well-justified composite explanatory hypotheses despite the threat of combinatorial explosion. Finally, the book argues that perception is logically abductive and presents a layered-abduction computational model of perceptual information processing.
In this paper we study the problem of integrating abduction and learning as they appear in Artificial Intelligence. A general comparison of abduction and induction as separate inferences and fields in AI is given. Based on this analysis we study their possible interaction and integration. We introduce the notion of abductive concept learning as a framework for learning with incomplete background theories. Together with this we propose a general methodology for incorporating abduction in inductive concept learning that allows us to exploit more fully the rich knowledge in the background theories for learning. This basic methodology extends in a natural way to the more general context of theory revision.
In this paper we study the problem of learning abductive theories, with particular interest in theories for attribute-based classification as studied in the area of machine learning. The paper proposes a new, alternative formulation of this class of learning problems in which abduction plays an integral part in the formulation of the appropriate theories and, more importantly, in the definition of the learning problem itself. We present a general algorithm that learns abductive theories for classification and examine its main features. We show how, within our abductive approach, cases of the problem with incomplete information can be formulated and handled in a natural way. We also study the relation of our approach to other existing approaches for these learning problems, notably that of decision trees, and argue that our approach could provide a novel and useful link between abduction and machine learning.
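The role of abduction in classification with incomplete information can be sketched roughly as follows (the rules, attribute names, and consistency handling below are hypothetical illustrations, not the paper's algorithm): when an example lacks an attribute value, the classifier may abduce the value that would let a classification rule fire, and report that assumption alongside the predicted class.

```python
# Hypothetical classification rules: class -> required attribute values.
rules = {
    "play":    {"outlook": "sunny", "windy": False},
    "no_play": {"outlook": "rainy"},
}

def classify(example):
    """Return (class, assumptions) pairs consistent with the known attributes.

    Attributes with value None are treated as unknown and abduced to the
    value required by the rule; a known value that contradicts the rule
    rules that class out.
    """
    results = []
    for label, conditions in rules.items():
        assumptions = {}
        consistent = True
        for attr, required in conditions.items():
            value = example.get(attr)
            if value is None:          # missing: abduce the required value
                assumptions[attr] = required
            elif value != required:    # known value contradicts the rule
                consistent = False
                break
        if consistent:
            results.append((label, assumptions))
    return results

# 'windy' is unknown, so classifying as "play" requires assuming windy=False.
print(classify({"outlook": "sunny", "windy": None}))
# [('play', {'windy': False})]
```

A decision tree would have to commit to some treatment of the missing value up front; the abductive formulation instead makes the assumption explicit as part of the answer.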
Abduction and induction are reasoning forms for drawing conclusions from incomplete information. Induction, i.e. inferring properties of sets of individuals from properties of individuals, was already distinguished by Aristotle, while the term 'abduction' was introduced much later by Peirce for inference of explanations for observed phenomena. Both reasoning forms are presently being studied and applied by researchers in artificial intelligence and logic programming. However, the current views of abduction and induction and their interrelation are problematic, which is mainly caused by the fact that Peirce developed two perspectives on abduction, one based on syllogisms, the other on the underlying inferential pattern. In this paper I argue that both perspectives have their merits but need further formalisation. Furthermore, I propose a formalisation of the inferential perspective.