We present a textless speech-to-speech translation (S2ST) system that can translate speech from one language into another and can be built without the need for any text data. Unlike existing work in the literature, we tackle the challenge of modeling multi-speaker target speech and train the systems with real-world S2ST data. The key to our approach is a self-supervised unit-based speech normalization technique, which finetunes a pre-trained speech encoder with paired audio from multiple speakers and a single reference speaker to reduce variations due to accents, while preserving the lexical content. With only 10 minutes of paired data for speech normalization, we obtain on average a 3.2 BLEU gain when training the S2ST model on the VoxPopuli S2ST dataset, compared to a baseline trained on un-normalized target speech. We also incorporate automatically mined S2ST data and show an additional 2.0 BLEU gain. To our knowledge, we are the first to establish a textless S2ST technique that can be trained with real-world data and works for multiple language pairs.
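Unit-based pipelines like the one described above typically encode speech into discrete units (e.g., cluster IDs of features from a pre-trained speech encoder) and collapse consecutive duplicate units before training. The following is a minimal sketch of that reduction step only; the unit values are toy examples, and the paper's actual encoder, clustering, and normalization finetuning are not reproduced here.

```python
# Toy illustration of reduced discrete-unit sequences, as commonly used in
# textless speech-to-speech translation pipelines. The frame-level cluster
# IDs below are hypothetical, not outputs of the paper's actual encoder.

def collapse_repeats(units):
    """Collapse consecutive duplicate units, e.g. [5, 5, 9, 9, 5] -> [5, 9, 5]."""
    reduced = []
    for u in units:
        if not reduced or reduced[-1] != u:
            reduced.append(u)
    return reduced

# Hypothetical frame-level unit sequence for a short utterance.
frame_units = [12, 12, 12, 7, 7, 41, 41, 41, 41, 7]
print(collapse_repeats(frame_units))  # [12, 7, 41, 7]
```

Collapsing repeats shortens the target sequence the translation model must predict, since adjacent frames of the same sound usually map to the same unit.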
|Original language|American English|
|Title of host publication|NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics|
|Subtitle of host publication|Human Language Technologies, Proceedings of the Conference|
|Publisher|Association for Computational Linguistics (ACL)|
|Number of pages|13|
|State|Published - 2022|
|Event|2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022 - Seattle, United States|
|Duration|10 Jul 2022 → 15 Jul 2022|
|Period|10/07/22 → 15/07/22|
Bibliographical note
Funding Information: The authors would like to thank Adam Polyak and Felix Kreuk for initial discussions on accent normalization.
© 2022 Association for Computational Linguistics.