Phoneme boundary detection is an essential first step for a variety of speech processing applications, such as speaker diarization, speech science, and keyword spotting. In this work, we propose a neural architecture coupled with a parameterized structured loss function to learn segmental representations for the task of phoneme boundary detection. First, we evaluate our model when the spoken phonemes are not given as input. Results on the TIMIT and Buckeye corpora suggest that the proposed model is superior to the baseline models and reaches state-of-the-art performance in terms of F1 and R-value. We further explore the use of phonetic transcription as additional supervision and show that it yields minor improvements in performance but substantially better convergence rates. We additionally evaluate the model on a Hebrew corpus and demonstrate that such phonetic supervision can be beneficial in a multi-lingual setting.
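For context, the R-value reported above is a standard quality measure for phoneme segmentation (Räsänen et al., 2009) that combines the boundary hit rate (HR) and over-segmentation (OS), both in percent, into a single score that equals 1 for perfect segmentation. A minimal sketch (the function name is ours, not from the paper):

```python
import math

def r_value(hit_rate: float, over_seg: float) -> float:
    """Segmentation R-value (Rasanen et al., 2009).

    hit_rate: percentage of reference boundaries detected (HR).
    over_seg: over-segmentation percentage (OS).
    Returns 1.0 for perfect segmentation (HR=100, OS=0).
    """
    # Distance from the ideal operating point (HR=100, OS=0)
    r1 = math.sqrt((100.0 - hit_rate) ** 2 + over_seg ** 2)
    # Signed distance from the OS = HR - 100 diagonal
    r2 = (-over_seg + hit_rate - 100.0) / math.sqrt(2.0)
    return 1.0 - (abs(r1) + abs(r2)) / 200.0

print(r_value(100.0, 0.0))  # perfect segmentation → 1.0
```

Unlike F1, the R-value penalizes over-segmentation explicitly, which is why both metrics are typically reported together for boundary detection.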
Title of host publication: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Published: May 2020
Conference: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Barcelona, Spain
Duration: 4 May 2020 → 8 May 2020
Series: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Publisher Copyright: © 2020 IEEE.
Keywords:
- sequence segmentation
- phoneme boundary detection
- recurrent neural networks (RNNs)
- structured prediction