Abstract
Phoneme boundary detection is an essential first step for a variety of speech processing applications such as speaker diarization, speech science, and keyword spotting. In this work, we propose a neural architecture coupled with a parameterized structured loss function to learn segmental representations for the task of phoneme boundary detection. First, we evaluated our model when the spoken phonemes were not given as input. Results on the TIMIT and Buckeye corpora suggest that the proposed model is superior to the baseline models and reaches state-of-the-art performance in terms of F1 and R-value. We further explore the use of phonetic transcription as additional supervision and show this yields minor improvements in performance but substantially better convergence rates. We additionally evaluate the model on a Hebrew corpus and demonstrate that such phonetic supervision can be beneficial in a multi-lingual setting.
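The R-value reported alongside F1 is the standard boundary-segmentation metric of Räsänen et al. (2009), which penalizes over-segmentation in addition to missed boundaries. A minimal sketch of how it is computed from a hit rate and an over-segmentation rate (function and argument names are illustrative, not from the paper):

```python
import math

def r_value(hit_rate: float, over_seg: float) -> float:
    """R-value segmentation metric (Rasanen et al., 2009).

    hit_rate: fraction of reference boundaries matched within a tolerance window
    over_seg: over-segmentation rate, (#hypothesized boundaries / #reference boundaries) - 1
    """
    # Distance from the ideal operating point (HR = 1, OS = 0)
    r1 = math.sqrt((1.0 - hit_rate) ** 2 + over_seg ** 2)
    # Distance from the line HR - OS = 1 (no over-segmentation bias)
    r2 = (-over_seg + hit_rate - 1.0) / math.sqrt(2.0)
    return 1.0 - (abs(r1) + abs(r2)) / 2.0
```

A perfect segmenter (hit rate 1.0, no over-segmentation) attains R = 1; unlike F1, R-value cannot be inflated by simply hypothesizing many boundaries.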
Original language | English |
---|---|
Title of host publication | 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 8089-8093 |
Number of pages | 5 |
ISBN (Electronic) | 9781509066315 |
DOIs | |
State | Published - May 2020 |
Externally published | Yes |
Event | 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Barcelona, Spain. Duration: 4 May 2020 → 8 May 2020 |
Publication series
Name | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings |
---|---|
Volume | 2020-May |
ISSN (Print) | 1520-6149 |
Conference
Conference | 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 |
---|---|
Country/Territory | Spain |
City | Barcelona |
Period | 4/05/20 → 8/05/20 |
Bibliographical note
Publisher Copyright: © 2020 IEEE.
Keywords
- Sequence segmentation
- phoneme boundary detection
- recurrent neural networks (RNNs)
- structured prediction