This paper presents a method for realising abduction in artificial neural networks (ANNs) by generalising existing neuro-symbolic approaches from normal logic programs to abductive logic programs (ALPs), a more expressive formalism for representing and reasoning about partial knowledge and integrity constraints. The aim is to develop a massively parallel technique for abduction that can also be integrated with standard connectionist learning methods to offer finer control over which assumptions can and cannot be made during learning. Existing methods for abduction in neural networks are not well suited to this task: either they apply only to a restricted class of abduction problems, or they do not adequately address the computation of multiple solutions. By contrast, this paper proposes an approach for translating ALPs into ANNs that imposes no restrictions on the underlying programs and, if required, makes the network systematically compute all abductive explanations or guarantee that none exist. Moreover, since the topology of the network mirrors the structure of the program, the network can be acquired and revised by standard neuro-symbolic training techniques, and its structure can be exploited to impose a preference on the order in which solutions are found.
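To make the abductive task concrete, the following is a minimal sketch (not the paper's neural encoding) of abduction over a hypothetical propositional ALP: given rules, a set of abducible atoms, and an integrity constraint, it enumerates the subset-minimal sets of assumptions that entail a goal without violating the constraint. All rule and atom names are illustrative.

```python
from itertools import chain, combinations

# Hypothetical toy abductive logic program (propositional, definite rules).
# Each head maps to a list of alternative bodies: head <- body.
rules = {
    "wet_grass": [["rain"], ["sprinkler"]],
    "slippery": [["wet_grass"]],
}
abducibles = {"rain", "sprinkler"}
# Integrity constraint: rain and sprinkler may not be assumed together.
constraints = [{"rain", "sprinkler"}]

def holds(atom, assumed):
    """True iff atom is derivable from the rules plus the assumed abducibles."""
    if atom in assumed:
        return True
    return any(all(holds(b, assumed) for b in body)
               for body in rules.get(atom, []))

def explanations(goal):
    """All subset-minimal sets of abducibles that entail the goal
    and violate no integrity constraint."""
    sols = []
    subsets = chain.from_iterable(
        combinations(sorted(abducibles), r)
        for r in range(len(abducibles) + 1))
    for delta in map(set, subsets):
        if any(c <= delta for c in constraints):
            continue  # assumption set violates an integrity constraint
        # Subsets are generated smallest-first, so any superset of an
        # existing solution is non-minimal and can be skipped.
        if holds(goal, delta) and not any(s < delta for s in sols):
            sols.append(delta)
    return sols

print(explanations("slippery"))  # → [{'rain'}, {'sprinkler'}]
```

Here the goal `slippery` has two minimal explanations, while the constraint rules out assuming both causes at once; the paper's contribution is to have a neural network, rather than such a symbolic search, compute these explanations and order them by preference.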