Spectral Self-supervised Feature Selection

Daniel Segal, Ofir Lindenbaum, Ariel Jaffe

Research output: Contribution to journal › Article › peer-review

Abstract

Choosing a meaningful subset of features from high-dimensional observations in unsupervised settings can greatly enhance the accuracy of downstream analysis tasks, such as clustering or dimensionality reduction, and provide valuable insights into the sources of heterogeneity in a given dataset. In this paper, we propose a self-supervised graph-based approach for unsupervised feature selection. Our method’s core involves computing robust pseudo-labels by applying simple processing steps to the graph Laplacian’s eigenvectors. The subset of eigenvectors used for computing pseudo-labels is chosen based on a model stability criterion. We then measure the importance of each feature by training a surrogate model to predict the pseudo-labels from the observations. Our approach is shown to be robust to challenging scenarios, such as the presence of outliers and complex substructures. We demonstrate the effectiveness of our method through experiments on real-world datasets from multiple domains, with a particular emphasis on biological datasets.
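The abstract describes a pipeline of graph construction, pseudo-labels derived from Laplacian eigenvectors, and a surrogate model whose importances score the features. The sketch below illustrates that flow under simplifying assumptions: a kNN connectivity graph, median-thresholding of a single eigenvector as the pseudo-labels (the paper instead selects eigenvectors via a stability criterion and applies its own processing steps), and a random forest as the surrogate. Dataset, neighborhood size, and model choices are illustrative, not the authors' exact configuration.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import kneighbors_graph
from sklearn.ensemble import RandomForestClassifier
from scipy.sparse.csgraph import laplacian

# Toy data: a few informative features padded with pure-noise dimensions.
rng = np.random.default_rng(0)
X_info, _ = make_blobs(n_samples=300, centers=3, n_features=5, random_state=0)
X = np.hstack([X_info, rng.normal(size=(300, 20))])

# 1) kNN affinity graph and its symmetric normalized Laplacian.
W = kneighbors_graph(X, n_neighbors=10, mode="connectivity", include_self=False)
W = 0.5 * (W + W.T)                       # symmetrize the adjacency
L = laplacian(W, normed=True)

# 2) Eigenvectors of the Laplacian (ascending eigenvalues;
#    the first one is the trivial near-constant vector).
vals, vecs = np.linalg.eigh(L.toarray())

# 3) Pseudo-labels: here, simply a median split of one leading eigenvector.
#    This is a stand-in for the paper's processing and stability-based
#    eigenvector selection, used purely for illustration.
pseudo_labels = (vecs[:, 1] > np.median(vecs[:, 1])).astype(int)

# 4) Surrogate model predicts the pseudo-labels from the raw features;
#    its feature importances serve as the feature-selection scores.
surrogate = RandomForestClassifier(n_estimators=200, random_state=0)
surrogate.fit(X, pseudo_labels)
ranking = np.argsort(surrogate.feature_importances_)[::-1]
print("Top-ranked features:", ranking[:5])
```

On this toy example the informative dimensions (indices 0 through 4) should dominate the ranking; in practice one would repeat steps 2 and 3 over several eigenvectors and aggregate the resulting importance scores.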

Original language: English
Journal: Transactions on Machine Learning Research
Volume: 2024
State: Published - 2024

Bibliographical note

Publisher Copyright:
© 2024, Transactions on Machine Learning Research. All rights reserved.
