Ranking categorical features using generalization properties

Sivan Sabato*, Shai Shalev-Shwartz

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Feature ranking is a fundamental machine learning task with various applications, including feature selection and decision tree learning. We describe and analyze a new feature ranking method that supports categorical features with a large number of possible values. We show that existing ranking criteria rank a feature according to the training error of a predictor based on that feature. This approach can fail when ranking categorical features with many values. We propose the Ginger ranking criterion, which estimates the generalization error of the predictor associated with the Gini index. We show that for almost all training sets, the Ginger criterion produces an accurate estimate of the true generalization error, regardless of the number of values in a categorical feature. We also address the question of finding the optimal predictor that is based on a single categorical feature. It is shown that the predictor associated with the misclassification error criterion has the minimal expected generalization error. We bound the bias of this predictor with respect to the generalization error of the Bayes optimal predictor, and analyze its concentration properties. We demonstrate the efficiency of our approach for feature selection and for learning decision trees in a series of experiments with synthetic and natural data sets.
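The failure mode described in the abstract is easy to reproduce: a categorical feature with many rare values can achieve zero training error (and zero Gini impurity) while carrying no information about the label. The sketch below illustrates this, assuming a binary-label setting and a single categorical feature. The helper functions and the holdout-based "generalization estimate" are illustrative stand-ins chosen for this example; the paper's Ginger criterion is a closed-form estimate of the generalization error of the Gini-based predictor and is not reproduced here.

```python
# Illustrative sketch only: training-error-style ranking criteria for a single
# categorical feature with binary labels, plus a toy holdout stand-in for a
# generalization-aware criterion. This is NOT the paper's Ginger formula.
from collections import defaultdict
import random

def per_value_counts(xs, ys):
    """Count positives and totals for each value of the categorical feature."""
    pos, tot = defaultdict(int), defaultdict(int)
    for x, y in zip(xs, ys):
        tot[x] += 1
        pos[x] += y  # y is 0 or 1
    return pos, tot

def misclassification_training_error(xs, ys):
    """Training error of the predictor assigning each value its majority label."""
    pos, tot = per_value_counts(xs, ys)
    errors = sum(min(pos[v], tot[v] - pos[v]) for v in tot)
    return errors / len(xs)

def gini_training_score(xs, ys):
    """Weighted Gini impurity 2p(1-p) of labels within each feature value
    (lower looks more informative on the training set)."""
    pos, tot = per_value_counts(xs, ys)
    n = len(xs)
    return sum((tot[v] / n) * 2.0 * (pos[v] / tot[v]) * (1.0 - pos[v] / tot[v])
               for v in tot)

def holdout_generalization_estimate(xs, ys, seed=0):
    """Toy stand-in: fit the majority-label predictor on half of the data and
    measure its error on the other half; unseen values fall back to the
    global majority label."""
    rng = random.Random(seed)
    idx = list(range(len(xs)))
    rng.shuffle(idx)
    half = len(idx) // 2
    train, test = idx[:half], idx[half:]
    pos, tot = per_value_counts([xs[i] for i in train], [ys[i] for i in train])
    global_majority = int(2 * sum(ys[i] for i in train) >= len(train))
    predict = {v: int(2 * pos[v] >= tot[v]) for v in tot}
    errs = sum(predict.get(xs[i], global_majority) != ys[i] for i in test)
    return errs / len(test)

# A feature with many rare values and labels independent of the feature:
xs = [f"id{i}" for i in range(200)]                         # 200 distinct values
ys = [random.Random(i).randint(0, 1) for i in range(200)]   # labels ignore xs
print(misclassification_training_error(xs, ys))   # 0.0  -- looks perfect
print(gini_training_score(xs, ys))                # 0.0  -- looks perfect
print(holdout_generalization_estimate(xs, ys))    # ~0.5 -- in fact useless
```

As the last three lines show, training-error-based criteria rank such a feature as ideal, whereas any estimate of the generalization error exposes it as uninformative; this is the gap the Ginger criterion is designed to close.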

Original language: English
Pages (from-to): 1083-1114
Number of pages: 32
Journal: Journal of Machine Learning Research
Volume: 9
State: Published - Jun 2008
Externally published: Yes

Keywords

  • Categorical features
  • Decision trees
  • Feature ranking
  • Generalization bounds
  • Gini index

