While a binary classifier aims to distinguish positives from negatives, a ranker orders instances from high to low expectation that the instance is positive. Most classification models in machine learning output some score of 'positiveness', and hence can be used as rankers. Conversely, any ranker can be turned into a classifier if we have some instance-independent means of splitting the ranking into positive and negative segments: a fixed score threshold, a point obtained by fixing the slope on the ROC curve, or the break-even point between the true positive and true negative rates, to mention just a few possibilities. These connections between ranking and classification notwithstanding, there are considerable differences as well. Classification performance on n examples is measured by accuracy, an O(n) operation; ranking performance, on the other hand, is measured by the area under the ROC curve (AUC), an O(n log n) operation. The model with the highest AUC does not necessarily dominate all other models, and thus it is possible that another model achieves a higher accuracy under certain operating conditions, even if its AUC is lower. However, within certain model classes good ranking performance and good classification performance are more closely related than these remarks suggest. For instance, there is evidence that certain classification models, while designed to optimise accuracy, in effect optimise an AUC-based loss function. It has also been known for some time that decision trees yield convex training-set ROC curves by construction, and thus optimising training-set accuracy is likely to lead to good training-set AUC. In this talk I will investigate the relation between ranking and classification more closely. I will also consider the connection between ranking and probability estimation. The quality of probability estimates can be measured by, e.g., the mean squared error of the probability estimates (the Brier score).
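To make the complexity contrast concrete, here is a minimal sketch (my own illustrative code, with made-up data, not taken from the talk): accuracy needs a single pass over the examples, whereas AUC requires a sort of the scores, here via the rank-sum (Mann-Whitney) formulation.

```python
def accuracy(labels, scores, threshold=0.5):
    """O(n): one pass, comparing thresholded scores to 0/1 labels."""
    return sum((s >= threshold) == bool(y)
               for y, s in zip(labels, scores)) / len(labels)

def auc(labels, scores):
    """O(n log n): sort once, then apply the rank-sum (Mann-Whitney)
    formulation of AUC; tied scores receive their average rank."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        # find the run of tied scores starting at position i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    pos_ranks = [r for r, y in zip(ranks, labels) if y]
    n_pos = len(pos_ranks)
    n_neg = len(labels) - n_pos
    return (sum(pos_ranks) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
# accuracy(labels, scores) == 0.75 and auc(labels, scores) == 0.75
```

The example data illustrates the point about operating conditions: both measures happen to agree here, but accuracy depends on the chosen threshold while AUC depends only on the ordering.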
However, like accuracy, this is an O(n) operation that doesn't fully take ranking performance into account. I will show how a novel decomposition of the Brier score into calibration loss and refinement loss sheds light on both ranking and probability estimation performance. While previous decompositions are approximate, ours is an exact one based on the ROC convex hull. (The connection between the ROC convex hull and calibration was also noted independently.) In the case of decision trees, the analysis explains the empirical evidence that probability estimation trees produce well-calibrated probabilities.
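One way to operationalise such a decomposition (an illustrative sketch under my own assumptions, not necessarily the exact definitions used in the talk): calibrate the scores with the pool-adjacent-violators (PAV) algorithm, whose fitted step function corresponds to the segments of the ROC convex hull; take refinement loss to be the Brier score of the calibrated scores, and calibration loss to be the remainder.

```python
def pav_calibrate(labels, scores):
    """Pool Adjacent Violators: isotonic regression of 0/1 labels on the
    score order. The fitted step values are the class proportions along
    the segments of the ROC convex hull."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    blocks = []  # each block: [sum of labels, count]
    for i in order:
        blocks.append([labels[i], 1])
        # pool while adjacent block means violate monotonicity
        while len(blocks) > 1 and \
                blocks[-2][0] * blocks[-1][1] >= blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    calibrated = [0.0] * len(scores)
    k = 0
    for s, c in blocks:
        for j in range(k, k + c):
            calibrated[order[j]] = s / c
        k += c
    return calibrated

def brier(labels, probs):
    """Mean squared error of probability estimates."""
    return sum((p - y) ** 2 for y, p in zip(labels, probs)) / len(labels)

# Illustrative data: refinement loss is the Brier score that remains after
# ROCCH-calibration; calibration loss is what the raw scores lose on top.
labels = [0, 1, 0, 1, 1]
scores = [0.2, 0.3, 0.5, 0.7, 0.9]
cal = pav_calibrate(labels, scores)               # [0.0, 0.5, 0.5, 1.0, 1.0]
refinement = brier(labels, cal)                   # 0.1
calibration = brier(labels, scores) - refinement  # 0.076
```

Note how the calibrated values are non-decreasing along the score order, so calibration preserves the ranking; the refinement loss is the part of the Brier score that no monotonic recalibration of the scores can remove.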