NAST: Noise Aware Speech Tokenization for Speech Language Models

Shoval Messica, Yossi Adi

Research output: Contribution to journal › Conference article › peer-review

Abstract

Speech tokenization is the task of representing speech signals as a sequence of discrete units. Such representations can later be used for various downstream tasks, including automatic speech recognition, text-to-speech, etc. More relevant to this study, such representations serve as the basis of Speech Language Models. In this work, we tackle the task of speech tokenization under the noisy setup and present NAST: Noise Aware Speech Tokenization for Speech Language Models. NAST is composed of three main components: (i) a predictor; (ii) a residual encoder; and (iii) a decoder. We evaluate the efficacy of NAST on several spoken language modeling tasks and show that NAST is superior to the evaluated baselines across all setups. Lastly, we analyze NAST and show its disentanglement properties and robustness to signal variations in the form of noise, reverberation, pitch-shift, and time-stretch. Code and pre-trained models are available at https://github.com/ShovalMessica/NAST.
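To illustrate what "representing speech signals as a sequence of discrete units" means in practice, here is a minimal, hypothetical sketch of the common nearest-centroid (k-means-style) discretization step underlying most speech tokenizers. This is not the NAST implementation (its predictor, residual encoder, and decoder are in the linked repository); the function names, codebook size, and feature dimensions below are illustrative assumptions only.

```python
# Hypothetical sketch of a speech tokenizer's discretization step,
# NOT the NAST code: continuous frame features are mapped to discrete
# unit IDs by nearest-centroid lookup, the common basis for
# unit-based speech language models.
import numpy as np

def tokenize(features: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Assign each feature frame (T, D) to its nearest codebook entry (K, D)."""
    # Squared Euclidean distance between every frame and every centroid.
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)  # (T,) sequence of discrete unit IDs

def detokenize(units: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Crude stand-in for a decoder: map unit IDs back to centroid vectors."""
    return codebook[units]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(100, 16))   # 100 discrete units, 16-dim space
features = rng.normal(size=(50, 16))    # 50 frames of stand-in speech features
units = tokenize(features, codebook)
recon = detokenize(units, codebook)
```

A noise-aware tokenizer such as NAST additionally aims to keep the unit sequence stable when the input is corrupted by noise, reverberation, pitch-shift, or time-stretch, rather than letting those variations change the assigned units.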

Original language: English
Pages (from-to): 4169-4173
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
DOIs
State: Published - 2024
Event: 25th Interspeech Conference 2024 - Kos Island, Greece
Duration: 1 Sep 2024 - 5 Sep 2024

Bibliographical note

Publisher Copyright:
© 2024 International Speech Communication Association. All rights reserved.

Keywords

  • speech language modeling
  • speech tokenization
