Designing a Conditional Prior Distribution for Flow-Based Generative Models

Research output: Contribution to journal › Article › peer-review

Abstract

Flow-based generative models have recently shown impressive performance on conditional generation tasks, such as text-to-image generation. However, current methods transform a general unimodal noise distribution into a specific mode of the target data distribution. As a result, every point in the initial source distribution can be mapped to every point in the target distribution, producing long average paths. To address this, we tap into an under-utilized property of conditional flow-based models: the ability to design a non-trivial prior distribution. Given an input condition, such as a text prompt, we first map it to a point in data space representing an “average” data point with the minimal average distance to all data points of the same conditional mode (e.g., class). We then use the flow matching formulation to map samples from a parametric distribution centered around this point to the conditional target distribution. Experimentally, our method significantly improves training time and generation quality (FID, KID, and CLIP alignment scores) compared to baselines, producing high-quality samples using fewer sampling steps. Code is available at https://github.com/MoSalama98/conditional-prior-flow-matching.
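The core idea in the abstract can be sketched in a few lines: compute a per-condition “average” point (the mean minimizes the average squared distance to all points of that mode), center the prior there, and form the usual flow-matching pairs along the straight path between prior and data samples. The following is a minimal toy sketch, not the paper's implementation; the dataset, the Gaussian prior with width `sigma`, and the helper names `sample_pair` / `fm_target` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled dataset: two conditional modes (classes) in 2-D.
data = {
    0: rng.normal(loc=[-3.0, 0.0], scale=0.5, size=(100, 2)),
    1: rng.normal(loc=[+3.0, 0.0], scale=0.5, size=(100, 2)),
}

# "Average" point per condition: the mean minimizes the average
# squared distance to all points of that mode.
mu = {c: pts.mean(axis=0) for c, pts in data.items()}

def sample_pair(c, sigma=1.0):
    """Draw (x0, x1): x0 from the conditional prior N(mu_c, sigma^2 I),
    x1 from the data of mode c. (Illustrative Gaussian prior.)"""
    x1 = data[c][rng.integers(len(data[c]))]
    x0 = mu[c] + sigma * rng.normal(size=2)
    return x0, x1

def fm_target(x0, x1, t):
    """Flow-matching regression pair: a point on the straight path
    from x0 to x1 and the constant target velocity along it."""
    x_t = (1.0 - t) * x0 + t * x1
    u_t = x1 - x0
    return x_t, u_t
```

A velocity network would then be trained to regress `u_t` from `(x_t, t, c)`; because `x0` starts near the conditional mode rather than at a shared unimodal noise source, the average transport path is shorter.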

Original language: English
Journal: Transactions on Machine Learning Research
Volume: December-2025
State: Published - 2025

Bibliographical note

Publisher Copyright:
© 2025, Transactions on Machine Learning Research. All rights reserved.
