Blended Latent Diffusion

Omri Avrahami, Ohad Fried, Dani Lischinski

Research output: Contribution to journal › Article › peer-review

20 Scopus citations


The tremendous progress in neural image generation, coupled with the emergence of seemingly omnipotent vision-language models, has finally enabled text-based interfaces for creating and editing images. Handling generic images requires a diverse underlying generative model, hence the latest works utilize diffusion models, which were shown to surpass GANs in terms of diversity. One major drawback of diffusion models, however, is their relatively slow inference time. In this paper, we present an accelerated solution to the task of local text-driven editing of generic images, where the desired edits are confined to a user-provided mask. Our solution leverages a text-to-image Latent Diffusion Model (LDM), which speeds up diffusion by operating in a lower-dimensional latent space and eliminating the need for resource-intensive CLIP gradient calculations at each diffusion step. We first enable LDM to perform local image edits by blending the latents at each step, similarly to Blended Diffusion. Next, we propose an optimization-based solution for the inherent inability of LDM to accurately reconstruct images. Finally, we address the scenario of performing local edits using thin masks. We evaluate our method against the available baselines, both qualitatively and quantitatively, and demonstrate that in addition to being faster, our method produces more precise results.
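The per-step latent blending described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function and variable names (`blend_latents`, `z_edit`, `z_source_noised`) are assumptions, and NumPy stands in for the actual tensor framework. Inside the mask, the denoised edit latent is kept; outside it, the source latent (noised to the same diffusion timestep) is restored.

```python
# Hedged sketch of mask-based latent blending at a single diffusion step.
# All names are illustrative; the real method operates on LDM latents.
import numpy as np

def blend_latents(z_edit: np.ndarray, z_source_noised: np.ndarray,
                  mask: np.ndarray) -> np.ndarray:
    """Keep the edited latent inside the mask; outside it, restore the
    source latent noised to the current timestep."""
    return mask * z_edit + (1.0 - mask) * z_source_noised

# Toy example: 1x4x8x8 latent tensors, mask covering the left half.
z_edit = np.ones((1, 4, 8, 8))           # stands in for the denoised edit
z_src = np.zeros((1, 4, 8, 8))           # stands in for the noised source
mask = np.zeros((1, 1, 8, 8))
mask[..., :4] = 1.0                      # edit region: left half of the latent
blended = blend_latents(z_edit, z_src, mask)  # broadcast over channels
```

At each step of the reverse diffusion process, this blend keeps the background consistent with the input image while the masked region is progressively edited toward the text prompt.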

Original language: American English
Article number: 3592450
Pages (from-to): 149:1-149:11
Number of pages: 11
Journal: ACM Transactions on Graphics
Issue number: 4
State: Published - 1 Aug 2023

Bibliographical note

Publisher Copyright:
© 2023 Owner/Author(s).


  • zero-shot text-driven local image editing


