Generative Spoken Dialogue Language Modeling

Tu Anh Nguyen, Eugene Kharitonov, Jade Copet, Yossi Adi, Wei-Ning Hsu, Ali Elkahky, Paden Tomasello, Robin Algayres, Benoît Sagot, Abdelrahman Mohamed, Emmanuel Dupoux

Research output: Contribution to journal › Article › peer-review

20 Scopus citations

Abstract

We introduce dGSLM, the first "textless" model able to generate audio samples of naturalistic spoken dialogues. It uses recent work on unsupervised spoken unit discovery coupled with a dual-tower transformer architecture with cross-attention, trained on 2,000 hours of two-channel raw conversational audio (Fisher dataset) without any text or labels. We show that our model is able to generate speech, laughter, and other paralinguistic signals in the two channels simultaneously, and that it reproduces more naturalistic and fluid turn-taking compared to a text-based cascaded model.
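The abstract's central architectural idea, two transformer towers that each model one channel of the conversation while exchanging information through cross-attention, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' released implementation: the module names, model dimensions, discrete-unit vocabulary size, and weight sharing between the towers are assumptions, and causal masking is omitted for brevity.

```python
# Minimal sketch of a dual-tower transformer with cross-attention between
# channels, in the spirit of the dGSLM abstract. All names and sizes here
# are illustrative assumptions, not the paper's released code.
import torch
import torch.nn as nn


class DualTowerLayer(nn.Module):
    """One layer per tower: self-attention over this channel's stream,
    cross-attention to the other tower's states, then a feed-forward block
    (pre-norm residual connections; causal masks omitted for brevity)."""

    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, x, other):
        # Self-attention within this channel.
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        # Cross-attention to the other channel, so each tower can condition
        # on what the other speaker is doing (overlaps, backchannels, ...).
        h, o = self.norm2(x), self.norm2(other)
        x = x + self.cross_attn(h, o, o, need_weights=False)[0]
        return x + self.ffn(self.norm3(x))


class DualTowerLM(nn.Module):
    """Two towers over discrete speech units, one per channel; here the
    towers share weights (an assumption of this sketch)."""

    def __init__(self, vocab_size=500, d_model=512, n_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.layers = nn.ModuleList(DualTowerLayer(d_model) for _ in range(n_layers))
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, units_a, units_b):
        a, b = self.embed(units_a), self.embed(units_b)
        for layer in self.layers:
            # Both towers are updated from the previous layer's states.
            a, b = layer(a, b), layer(b, a)
        return self.head(a), self.head(b)  # next-unit logits per channel


if __name__ == "__main__":
    model = DualTowerLM()
    a = torch.randint(0, 500, (1, 64))  # channel-1 unit sequence
    b = torch.randint(0, 500, (1, 64))  # channel-2 unit sequence
    logits_a, logits_b = model(a, b)
    print(logits_a.shape, logits_b.shape)  # torch.Size([1, 64, 500]) each
```

In the setting the abstract describes, the two input streams would be discrete units obtained by unsupervised spoken unit discovery from the raw two-channel audio, one stream per speaker, rather than the random tokens used in this toy example.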

Original language: English
Pages (from-to): 250-266
Number of pages: 17
Journal: Transactions of the Association for Computational Linguistics
Volume: 11
State: Published - 14 Mar 2023
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2023 Association for Computational Linguistics.
