Multiclass classifiers are often designed and evaluated only on a sample from the classes on which they will eventually be applied. Hence, their final accuracy remains unknown. In this work we study how a classifier's performance over the initial class sample can be used to extrapolate its expected accuracy on a larger, unobserved set of classes. For this, we define a measure of separation between correct and incorrect classes that is independent of the number of classes: the reversed ROC (rROC), which is obtained by replacing the roles of classes and data-points in the common ROC. We show that the classification accuracy is a function of the rROC in multiclass classifiers, for which the learned representation of data from the initial class sample remains unchanged when new classes are added. Using these results we formulate a robust neural-network-based algorithm, CleaneX, which learns to estimate the accuracy of such classifiers on arbitrarily large sets of classes. Unlike previous methods, our method uses both the observed accuracies of the classifier and densities of classification scores, and therefore achieves remarkably better predictions than current state-of-the-art methods on both simulations and real datasets of object detection, face recognition, and brain decoding.
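The abstract's central object is the reversed ROC (rROC), obtained by swapping the roles of classes and data points in the ordinary ROC: for each data point, the score of its correct class is treated as a positive and the scores of all other classes as negatives, so separation is measured across classes rather than across samples. The following sketch illustrates one plausible reading of that construction; the function name and the pooled-AUC formulation are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def reversed_roc_auc(scores, labels):
    """Illustrative 'reversed ROC' AUC: pool correct-class scores as
    positives and incorrect-class scores as negatives, then compute
    AUC = P(random positive > random negative), counting ties as half.
    (Hypothetical reading of the paper's rROC; names are illustrative.)"""
    scores = np.asarray(scores, dtype=float)  # shape (n_samples, n_classes)
    labels = np.asarray(labels)
    n = scores.shape[0]
    pos = scores[np.arange(n), labels]        # correct-class scores
    mask = np.ones_like(scores, dtype=bool)
    mask[np.arange(n), labels] = False
    neg = scores[mask]                        # incorrect-class scores
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Toy check: a classifier that always scores the true class highest
# separates correct from incorrect classes perfectly (rROC AUC = 1.0).
s = np.array([[0.9, 0.05, 0.05],
              [0.1, 0.80, 0.10],
              [0.2, 0.10, 0.70]])
y = np.array([0, 1, 2])
print(reversed_roc_auc(s, y))  # 1.0
```

Because this quantity is a property of the score distributions rather than of the number of classes, it is the kind of class-count-independent measure the abstract says can be extrapolated to larger, unobserved class sets.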
Original language: American English
State: Published - 2021
Event: 9th International Conference on Learning Representations, ICLR 2021 - Virtual, Online
Duration: 3 May 2021 → 7 May 2021
Bibliographical note (Funding Information):
We thank Etam Benger for many fruitful discussions, and Itamar Faran and Charles Zheng for commenting on the manuscript. YS is supported by the Israeli Council For Higher Education Data-Science fellowship and the CIDR center at the Hebrew University of Jerusalem.
© 2021 ICLR 2021 - 9th International Conference on Learning Representations. All rights reserved.