AUDIO CONDITIONING FOR MUSIC GENERATION VIA DISCRETE BOTTLENECK FEATURES

Simon Rouard, Yossi Adi, Jade Copet, Axel Roebel, Alexandre Défossez

Research output: Contribution to journal › Article › peer-review

Abstract

While most music generation models use textual or parametric conditioning (e.g. tempo, harmony, musical genre), we propose to condition a language-model-based music generation system with audio input. Our exploration involves two distinct strategies. The first strategy, termed textual inversion, leverages a pre-trained text-to-music model to map audio input to corresponding "pseudowords" in the textual embedding space. For the second model, we train a music language model from scratch jointly with a text conditioner and a quantized audio feature extractor. At inference time, we can mix textual and audio conditioning and balance them thanks to a novel double classifier-free guidance method. We conduct automatic and human studies that validate our approach. We will release the code, and we provide music samples on musicgenstyle.github.io to demonstrate the quality of our model.
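The abstract's double classifier-free guidance balances a text condition against an audio condition at inference time. The paper's exact formulation is not reproduced here; the following is a minimal sketch of one plausible two-condition variant, assuming three forward passes of the model per decoding step (unconditional, text-only, and text-plus-audio). The function name `double_cfg_logits` and the weights `alpha` and `beta` are illustrative, not taken from the paper.

```python
import torch

def double_cfg_logits(l_uncond, l_text, l_text_audio, alpha=3.0, beta=0.5):
    """Blend three conditioning passes of a language model.

    Sketch of two-condition classifier-free guidance: `beta` balances the
    text-only vs. text+audio conditional logits, and `alpha` scales the
    overall guidance strength, as in standard CFG. All inputs are logit
    tensors of identical shape; the default weights are arbitrary.
    """
    # Interpolate between the two conditional passes, then push the result
    # away from the unconditional logits.
    l_cond = (1.0 - beta) * l_text + beta * l_text_audio
    return l_uncond + alpha * (l_cond - l_uncond)

# Usage: at each autoregressive step, run the model with the conditions
# dropped in turn and sample from the blended distribution.
l_u = torch.randn(1, 2048)    # unconditional logits (vocab size is arbitrary here)
l_t = torch.randn(1, 2048)    # text-conditioned logits
l_ta = torch.randn(1, 2048)   # text+audio-conditioned logits
probs = torch.softmax(double_cfg_logits(l_u, l_t, l_ta), dim=-1)
```

Setting `beta` to 0 recovers ordinary text-only classifier-free guidance under this sketch, while larger values weight the audio condition more heavily.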

Original language: English
Pages (from-to): 146-153
Number of pages: 8
Journal: Proceedings of the International Society for Music Information Retrieval Conference
Volume: 2024
State: Published - 2024

Bibliographical note

Publisher Copyright:
© S. Rouard, Y. Adi, J. Copet, A. Roebel, A. Défossez.
