Confident Feature Ranking

Bitya Neuhof, Yuval Benjamini

Research output: Contribution to journal › Conference article › peer-review

Abstract

Machine learning models are widely applied in various fields. Stakeholders often use post-hoc feature importance methods to better understand the input features’ contribution to the models’ predictions. The interpretation of the importance values provided by these methods is frequently based on the relative order of the features (their ranking) rather than the importance values themselves. Since the order may be unstable, we present a framework for quantifying the uncertainty in global importance values. We propose a novel method for the post-hoc interpretation of feature importance values that is based on the framework and pairwise comparisons of the feature importance values. This method produces simultaneous confidence intervals for the features’ ranks, which include the “true” (infinite sample) ranks with high probability, and enables the selection of the set of the top-k important features.
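The idea described in the abstract — turning pairwise comparisons of estimated importance values into simultaneous confidence intervals for the features' ranks — can be illustrated with a short sketch. This is not the authors' exact procedure: it is a minimal illustration assuming repeated (e.g. bootstrap) estimates of the importance values are available, using a normal approximation for each pairwise difference and a Bonferroni correction over all pairs. A feature's rank lower bound is one plus the number of features confidently more important than it; its upper bound is the number of features minus those it confidently beats.

```python
import numpy as np
from scipy.stats import norm

def rank_confidence_intervals(imp_samples, alpha=0.05):
    """Sketch of simultaneous rank CIs from pairwise comparisons.

    imp_samples: (B, d) array of B repeated estimates of d global
    feature-importance values. Rank 1 = most important feature.
    Illustrative only; not the paper's exact method.
    """
    B, d = imp_samples.shape
    means = imp_samples.mean(axis=0)
    # Bonferroni-adjusted critical value over all ordered pairs.
    z = norm.ppf(1 - alpha / (d * (d - 1)))
    lower = np.ones(d, dtype=int)
    upper = np.full(d, d, dtype=int)
    for i in range(d):
        above = below = 0  # features confidently above / below feature i
        for j in range(d):
            if j == i:
                continue
            diff = imp_samples[:, i] - imp_samples[:, j]
            se = diff.std(ddof=1) / np.sqrt(B)
            if means[j] - means[i] > z * se:
                above += 1  # j is confidently more important than i
            elif means[i] - means[j] > z * se:
                below += 1  # i is confidently more important than j
        lower[i] = 1 + above
        upper[i] = d - below
    return lower, upper
```

Features whose rank upper bound is at most k form a set that, with high probability, consists only of truly top-k features, which is how such intervals support top-k selection.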

Original language: American English
Pages (from-to): 1468-1476
Number of pages: 9
Journal: Proceedings of Machine Learning Research
Volume: 238
State: Published - 2024
Event: 27th International Conference on Artificial Intelligence and Statistics, AISTATS 2024 - Valencia, Spain
Duration: 2 May 2024 - 4 May 2024

Bibliographical note

Publisher Copyright:
Copyright 2024 by the author(s).
