Distilling Datasets Into Less Than One Image

Asaf Shul, Eliahu Horwitz, Yedid Hoshen

Research output: Contribution to journal › Article › peer-review

Abstract

Dataset distillation aims to compress a dataset into a much smaller one so that a model trained on the distilled dataset achieves high accuracy. Current methods frame this as maximizing the distilled classification accuracy for a budget of K distilled images-per-class, where K is a positive integer. In this paper, we push the boundaries of dataset distillation, compressing the dataset into less than an image-per-class. It is important to realize that the meaningful quantity is not the number of distilled images-per-class but the number of distilled pixels-per-dataset. We therefore propose Poster Dataset Distillation (PoDD), a new approach that distills the entire original dataset into a single poster. The poster approach motivates new technical solutions for creating training images and learnable labels. Our method can achieve comparable or better performance with less than an image-per-class compared to existing methods that use one image-per-class. Specifically, our method establishes a new state-of-the-art performance on CIFAR-10, CIFAR-100, and CUB200 on the well-established 1 IPC benchmark, while using as little as 0.3 images-per-class.
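As a hedged illustration of the pixels-per-dataset view, the sketch below (plain PyTorch, not the authors' released code) shows how a single learnable poster together with learnable soft labels could yield overlapping per-class training crops, and how a 0.3 images-per-class budget follows from a simple pixel count. The poster dimensions, crop layout, and label parameterization are assumptions made for illustration only.

```python
# Illustrative sketch only: a single learnable "poster" plus learnable soft labels,
# with per-class training images taken as overlapping crops of the poster.
# Poster size and crop spacing below are hypothetical, not the paper's exact setup.
import torch

num_classes = 100          # e.g., CIFAR-100
img_hw = 32                # native image resolution (32x32)
poster_hw = (32, 32 * 30)  # hypothetical poster: one row, 30 image-widths wide

# Effective images-per-class = poster pixels / (one native image per class).
poster_pixels = poster_hw[0] * poster_hw[1]                    # 30,720
per_class_budget = img_hw * img_hw * num_classes               # 102,400
print(f"effective IPC: {poster_pixels / per_class_budget:.2f}")  # 0.30

# The poster and soft labels are the only learnable distilled quantities.
poster = torch.randn(3, *poster_hw, requires_grad=True)
labels = torch.randn(num_classes, num_classes, requires_grad=True)

# One crop window per class, spaced evenly along the poster width; because the
# stride (~9 px) is smaller than the crop width (32 px), neighboring crops overlap.
starts = torch.linspace(0, poster_hw[1] - img_hw, num_classes).long().tolist()
crops = torch.stack([poster[:, :, s:s + img_hw] for s in starts])  # (100, 3, 32, 32)
soft_targets = labels.softmax(dim=-1)                              # (100, 100)
```

In such a setup, the crops and soft targets would be fed to a student model and the poster and labels optimized with any standard dataset-distillation objective; the point of the sketch is only the accounting, i.e., that a 32×960 poster amounts to roughly 0.3 images-per-class for 100 classes of 32×32 images.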

Original language: English
Journal: Transactions on Machine Learning Research
Volume: 2025-March
State: Published - 2025

Bibliographical note

Publisher Copyright:
© 2025, Transactions on Machine Learning Research. All rights reserved.

