Meta-learning is concerned with selecting a suitable learning tool for a given application. Landmarking is a novel approach to meta-learning: it uses simple, fast and efficient learners to describe tasks and thereby locate each problem within the space of expertise of the learners under consideration. It relies on the performance of a set of selected learning algorithms to uncover the sort of learning tool that the task requires. The paper presents landmarking and reports its performance in experiments involving both artificial and real-world databases. The experiments follow a supervised learning scenario in which each task is labelled with the most suitable learner from a pool; meta-learning hypotheses are constructed from some tasks and tested on others. The experiments contrast the new technique with an information-theoretic approach to meta-learning. Results show that landmarking outperforms its competitor and satisfactorily selects suitable learning tools in all cases examined.
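
The core idea, describing a task by the accuracy of a few cheap learners, can be illustrated with a minimal, self-contained Python sketch. The particular landmarkers (majority class, decision stump, leave-one-out 1-NN) and the two synthetic tasks below are illustrative assumptions, not the paper's exact experimental setup:

```python
import random

def majority_acc(ys):
    """Landmarker 1: accuracy of always predicting the majority class."""
    ones = sum(ys)
    return max(ones, len(ys) - ones) / len(ys)

def stump_acc(xs, ys):
    """Landmarker 2: best single-threshold split on the first feature."""
    best = 0.0
    for t in sorted(set(x[0] for x in xs)):
        preds = [1 if x[0] > t else 0 for x in xs]
        acc = sum(p == y for p, y in zip(preds, ys)) / len(ys)
        best = max(best, acc, 1 - acc)  # either side of the split may win
    return best

def one_nn_acc(xs, ys):
    """Landmarker 3: leave-one-out accuracy of 1-nearest-neighbour."""
    correct = 0
    for i, x in enumerate(xs):
        dists = [(sum((a - b) ** 2 for a, b in zip(x, z)), ys[j])
                 for j, z in enumerate(xs) if j != i]
        correct += min(dists)[1] == ys[i]
    return correct / len(xs)

def landmarks(xs, ys):
    """Describe a task by the performance of the cheap learners above."""
    return (majority_acc(ys), stump_acc(xs, ys), one_nn_acc(xs, ys))

random.seed(0)
# Task A: linearly separable on the first feature (easy for a stump).
xs_a = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(40)]
ys_a = [1 if x[0] > 0 else 0 for x in xs_a]
# Task B: XOR-like labels (hard for a stump, locally smooth for 1-NN).
xs_b = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(40)]
ys_b = [1 if (x[0] > 0) != (x[1] > 0) else 0 for x in xs_b]

print("Task A landmarks:", landmarks(xs_a, ys_a))
print("Task B landmarks:", landmarks(xs_b, ys_b))
```

Each task is mapped to a vector of landmark accuracies; in the full approach such vectors become the meta-features on which a meta-learner is trained to predict the most suitable learner for an unseen task.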