A coherent interpretation of AUC as a measure of aggregated classification performance
Peter Flach, José Hernández-Orallo, Cèsar Ferri. International Conference on Machine Learning (ICML), June 2011.
The area under the ROC curve (AUC), a well-known measure of ranking performance, is also often used as a measure of classification performance, aggregating over decision thresholds as well as class and cost skews. However, David Hand has recently argued that AUC is fundamentally incoherent as a measure of aggregated classifier performance and has proposed an alternative measure. Specifically, Hand derives a linear relationship between AUC and expected minimum loss, where the expectation is taken over a distribution of the misclassification cost parameter that depends on the model under consideration. Replacing this distribution with a Beta(2,2) distribution, Hand derives his alternative measure H. In this paper we offer an alternative, coherent interpretation of AUC as linearly related to expected loss. We use a distribution over the cost parameter and a distribution over data points, both uniform and hence model-independent. Should one wish to consider only optimal thresholds, we demonstrate that a simple and more intuitive alternative to Hand's H measure is already available in the form of the area under the cost curve.
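The dual reading of AUC that the abstract refers to can be checked numerically: AUC computed as a ranking statistic (the probability that a randomly drawn positive is scored above a randomly drawn negative, the Mann-Whitney form) coincides with the trapezoidal area under the ROC curve obtained by sweeping the decision threshold over the data points. The sketch below is illustrative only and does not implement the paper's expected-loss derivation or Hand's H measure; the function names and example scores are invented for the demonstration.

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    # AUC as ranking performance: fraction of positive/negative pairs
    # ranked correctly, with ties counting half.
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def auc_roc_trapezoid(scores_pos, scores_neg):
    # AUC as aggregated classification performance: area under the ROC
    # curve traced by sweeping the threshold over the observed scores.
    scores_pos = np.asarray(scores_pos)
    scores_neg = np.asarray(scores_neg)
    thresholds = np.unique(np.concatenate([scores_pos, scores_neg]))[::-1]
    tpr = [0.0] + [np.mean(scores_pos >= t) for t in thresholds]
    fpr = [0.0] + [np.mean(scores_neg >= t) for t in thresholds]
    # Trapezoidal rule over the ROC points.
    area = 0.0
    for i in range(1, len(fpr)):
        area += (fpr[i] - fpr[i - 1]) * (tpr[i] + tpr[i - 1]) / 2
    return area

# Toy scores (hypothetical), including one tie across classes.
pos = [0.9, 0.8, 0.4]
neg = [0.7, 0.5, 0.4]
print(auc_mann_whitney(pos, neg))   # ranking view
print(auc_roc_trapezoid(pos, neg))  # thresholded-classification view
```

Both views give the same number (13/18 for the toy scores above), which is the equivalence the paper's coherent expected-loss interpretation builds on.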