AUDIOTOKEN: Adaptation of Text-Conditioned Diffusion Models for Audio-to-Image Generation

Guy Yariv, Itai Gat, Lior Wolf, Yossi Adi, Idan Schwartz

Research output: Contribution to journal › Conference article › peer-review

1 Scopus citation

Abstract

In recent years, image generation has shown a great leap in performance, with diffusion models playing a central role. Although such models generate high-quality images, they are mainly conditioned on textual descriptions. This begs the question: how can we adapt such models to be conditioned on other modalities? In this paper, we propose a novel method that utilizes latent diffusion models trained for text-to-image generation to generate images conditioned on audio recordings. Using a pre-trained audio encoding model, the proposed method encodes audio into a new token, which can be considered as an adaptation layer between the audio and text representations. Such a modeling paradigm requires a small number of trainable parameters, making the proposed approach appealing for lightweight optimization. Results suggest the proposed method is superior to the evaluated baseline methods, considering objective and subjective metrics. Code and samples are available at: https://pages.cs.huji.ac.il/adiyoss-lab/AudioToken.
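The mechanism described in the abstract can be illustrated with a minimal sketch (not the authors' released code): a small trainable embedder maps features from a frozen, pre-trained audio encoder into a single pseudo-token in the text-embedding space of a frozen text-to-image latent diffusion model, and that token is spliced into the prompt embedding sequence consumed by the model's cross-attention. All module names, dimensions, and the pooling choice below are illustrative assumptions.

```python
# Minimal sketch of the "audio token" adaptation idea (assumed shapes and names,
# not the authors' implementation). Only the embedder is trainable; the audio
# encoder, text encoder, and diffusion backbone are assumed frozen.

import torch
import torch.nn as nn


class AudioTokenEmbedder(nn.Module):
    """Projects pooled audio-encoder features to one token in text-embedding space."""

    def __init__(self, audio_dim: int = 768, text_dim: int = 768, hidden_dim: int = 1024):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(audio_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, text_dim),
        )

    def forward(self, audio_features: torch.Tensor) -> torch.Tensor:
        # audio_features: (batch, frames, audio_dim) from a frozen audio encoder
        pooled = audio_features.mean(dim=1)      # temporal average pooling (assumption)
        return self.proj(pooled).unsqueeze(1)    # (batch, 1, text_dim)


def build_conditioning(prompt_embeds: torch.Tensor,
                       audio_token: torch.Tensor,
                       insert_pos: int) -> torch.Tensor:
    """Splice the audio token into a frozen text-encoder embedding sequence,
    e.g. in place of a placeholder word in a template prompt."""
    return torch.cat(
        [prompt_embeds[:, :insert_pos], audio_token, prompt_embeds[:, insert_pos + 1:]],
        dim=1,
    )


# Usage with random stand-ins for the frozen encoders (shapes are assumptions):
audio_feats = torch.randn(2, 500, 768)      # frozen audio-encoder output
prompt_embeds = torch.randn(2, 77, 768)     # frozen text-encoder output
embedder = AudioTokenEmbedder()
cond = build_conditioning(prompt_embeds, embedder(audio_feats), insert_pos=5)
# `cond` would then replace the usual text conditioning fed to the frozen
# diffusion U-Net's cross-attention layers.
```

In this sketch only the embedder's parameters are updated during training, which reflects the lightweight-optimization claim in the abstract; everything else stays frozen.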

Original language: English
Pages (from-to): 5446-5450
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2023-August
DOIs
State: Published - 2023
Event: 24th Annual Conference of the International Speech Communication Association, Interspeech 2023 - Dublin, Ireland
Duration: 20 Aug 2023 - 24 Aug 2023

Bibliographical note

Publisher Copyright:
© 2023 International Speech Communication Association. All rights reserved.

Keywords

  • Audio-to-image
  • Diffusion models
