An Improved Model Selection Heuristic for AUC

Shaomin Wu, Peter Flach, Cesar Ferri. An Improved Model Selection Heuristic for AUC. In: Joost N. Kok, Jacek Koronacki, Ramon Lopez de Mantaras, Stan Matwin, Dunja Mladenic, Andrzej Skowron (eds.), Proceedings of the 18th European Conference on Machine Learning (ECML 2007), pp. 478–489, September 2007. ISBN 978-3-540-74975-2.

Abstract

The area under the ROC curve (AUC) has been widely used to measure ranking performance for binary classification tasks. AUC employs only the classifier's scores to rank the test instances; it thus ignores other valuable information conveyed by the scores and is insensitive to small differences in the score values. However, as such differences are inevitable across samples, ignoring them may lead to overfitting the validation set when selecting models with high AUC. This problem is tackled in this paper. On the basis of ranks as well as scores, we introduce a new metric called scored AUC (sAUC), which is the area under the sROC curve. The latter measures how quickly AUC deteriorates if positive scores are decreased. We study the interpretation and statistical properties of sAUC. Experimental results on UCI data sets convincingly demonstrate the effectiveness of the new metric for classifier evaluation and selection in the case of limited validation data.
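To make the distinction concrete, below is a minimal Python sketch contrasting standard AUC, computed as the Mann-Whitney statistic over positive-negative pairs, with an illustrative margin-weighted variant in the spirit of the abstract's description. The `scored_auc` function here is an assumption for illustration only (it weights each correctly ranked pair by its score margin, assuming scores in [0, 1]); the paper's actual sAUC is defined via the sROC curve.

```python
import numpy as np

def auc(pos_scores, neg_scores):
    """Standard AUC as the Mann-Whitney statistic: the fraction of
    positive-negative pairs ranked correctly (ties count as 1/2)."""
    pos = np.asarray(pos_scores, dtype=float)
    neg = np.asarray(neg_scores, dtype=float)
    diff = pos[:, None] - neg[None, :]      # m x n pairwise score differences
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def scored_auc(pos_scores, neg_scores):
    """Illustrative margin-weighted variant (an assumption, not the
    paper's sROC-based definition): each correctly ranked pair
    contributes its score margin rather than a count of 1.
    Assumes scores lie in [0, 1]."""
    pos = np.asarray(pos_scores, dtype=float)
    neg = np.asarray(neg_scores, dtype=float)
    diff = pos[:, None] - neg[None, :]
    return (diff * (diff > 0)).mean()

# Two classifiers with identical rankings (hence identical AUC)
# but very different score margins, which the weighted variant separates.
pos_a, neg_a = [0.9, 0.8], [0.4, 0.1]
pos_b, neg_b = [0.51, 0.50], [0.49, 0.48]
print(auc(pos_a, neg_a), auc(pos_b, neg_b))                # 1.0 and 1.0
print(scored_auc(pos_a, neg_a), scored_auc(pos_b, neg_b))  # 0.6 vs. 0.02
```

Classifier B ranks every validation pair correctly only by tiny margins, so its perfect AUC is more likely an artefact of the particular validation sample; a margin-sensitive metric flags this, which is the intuition behind preferring sAUC for model selection with limited validation data.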

