Abstract
End-to-end spoken language understanding (SLU) predicts intent directly from audio using a single model. It promises to improve the performance of assistant systems by leveraging acoustic information that is lost in the intermediate textual representation and by preventing cascading errors from Automatic Speech Recognition (ASR). Further, having one unified model has efficiency advantages when deploying assistant systems on-device. However, the limited number of public audio datasets with semantic parse labels hinders research progress in this area. In this paper, we release the Spoken Task-Oriented semantic Parsing (STOP) dataset, the largest and most complex SLU dataset publicly available. Additionally, we define low-resource splits to establish a benchmark for improving SLU when limited labeled data is available. Finally, in addition to the human-recorded audio, we release TTS-generated versions to benchmark the performance of end-to-end SLU systems in low-resource and domain-adaptation settings.
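A minimal sketch (not from the paper) of the distinction the abstract draws: a cascaded pipeline feeds an ASR hypothesis into a text-based NLU model, so recognition errors propagate into intent prediction, whereas an end-to-end SLU model maps audio features directly to an intent. All module names, dimensions, and the placeholder ASR/NLU functions below are illustrative assumptions, not the authors' architecture.

```python
# Illustrative sketch only: end-to-end SLU vs. a cascaded ASR -> NLU pipeline.
import torch
import torch.nn as nn

NUM_INTENTS = 8     # hypothetical intent inventory size
FEATURE_DIM = 80    # e.g. log-mel filterbank features
HIDDEN_DIM = 256

class EndToEndSLU(nn.Module):
    """Maps an audio feature sequence directly to intent logits (one model)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(FEATURE_DIM, HIDDEN_DIM, batch_first=True)
        self.classifier = nn.Linear(HIDDEN_DIM, NUM_INTENTS)

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, time, FEATURE_DIM)
        _, last_hidden = self.encoder(audio_feats)
        return self.classifier(last_hidden[-1])  # (batch, NUM_INTENTS)

def fake_asr(audio_feats: torch.Tensor) -> str:
    """Stand-in ASR: returns a (possibly erroneous) transcript."""
    return "set an alarm for 7 am"

def fake_text_nlu(transcript: str) -> torch.Tensor:
    """Stand-in text-based NLU: returns intent logits from text alone."""
    return torch.zeros(NUM_INTENTS)

def cascaded_slu(audio_feats: torch.Tensor) -> torch.Tensor:
    """Cascaded baseline: ASR errors propagate into the NLU step."""
    return fake_text_nlu(fake_asr(audio_feats))

if __name__ == "__main__":
    model = EndToEndSLU()
    dummy_audio = torch.randn(2, 100, FEATURE_DIM)  # batch of 2 utterances
    print(model(dummy_audio).shape)  # torch.Size([2, NUM_INTENTS])
```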
Original language | English
---|---
Title of host publication | 2022 IEEE Spoken Language Technology Workshop, SLT 2022 - Proceedings
Publisher | Institute of Electrical and Electronics Engineers Inc.
Pages | 991-998
Number of pages | 8
ISBN (Electronic) | 9798350396904
DOIs |
State | Published - 2023
Externally published | Yes
Event | 2022 IEEE Spoken Language Technology Workshop, SLT 2022 - Doha, Qatar, 9 Jan 2023 → 12 Jan 2023
Publication series
Name | 2022 IEEE Spoken Language Technology Workshop, SLT 2022 - Proceedings
---|---
Conference
Conference | 2022 IEEE Spoken Language Technology Workshop, SLT 2022
---|---
Country/Territory | Qatar
City | Doha
Period | 9/01/23 → 12/01/23
Bibliographical note
Publisher Copyright: © 2023 IEEE.
Keywords
- assistant
- domain adaptation
- spoken language understanding