Abstract
We propose a new paradigm for aligning the phoneme sequence of a speech utterance with its acoustic signal counterpart. In contrast to common HMM-based approaches, our method employs a discriminative learning procedure in which the learning phase is tightly coupled with the alignment task at hand. The alignment function we devise is based on mapping the input acoustic-symbolic representation of the speech utterance, along with the target alignment, into an abstract vector space. We suggest a specific mapping into this vector space that utilizes standard speech features (e.g., spectral distances) as well as the confidence outputs of a framewise phoneme classifier. Building on large-margin techniques for predicting whole sequences, our alignment function reduces to a classifier in the abstract vector space that separates correct alignments from incorrect ones. We describe a simple iterative algorithm for learning the alignment function and discuss its formal properties. Experiments on the TIMIT corpus show that our method outperforms current state-of-the-art approaches.
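To make the abstract's iterative large-margin scheme concrete, the following is a minimal illustrative sketch of one way such a learning loop could look, using a passive-aggressive style update over whole alignments. The names `phi` (the mapping into the abstract vector space), `best_alignment` (inference over candidate alignments), and `align_loss` (the alignment cost) are hypothetical placeholders standing in for the paper's actual definitions, which are not given in this entry.

```python
import numpy as np

def pa_update(w, phi_correct, phi_predicted, loss, C=1.0):
    """One large-margin (passive-aggressive style) update: push the weights so
    the correct alignment outscores the predicted one by a loss-scaled margin."""
    delta = phi_correct - phi_predicted           # feature difference in the abstract vector space
    hinge = loss - np.dot(w, delta)               # margin violation (hinge loss)
    if hinge <= 0.0:
        return w                                  # margin already satisfied; no change
    tau = min(C, hinge / (np.dot(delta, delta) + 1e-12))
    return w + tau * delta

def train(utterances, phi, best_alignment, align_loss, dim, epochs=5, C=1.0):
    """Iterative learning loop over (utterance, true alignment) pairs: predict the
    current best-scoring alignment, then correct the weights on margin violations."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for x, y_true in utterances:
            y_pred = best_alignment(w, x)         # inference over candidate alignments
            loss = align_loss(y_true, y_pred)     # e.g., summed boundary deviation
            w = pa_update(w, phi(x, y_true), phi(x, y_pred), loss, C)
    return w
```

The separating-hyperplane view in the abstract corresponds here to `w`: after training, an alignment is scored by the inner product of `w` with its mapped representation, and correct alignments are intended to score higher than incorrect ones.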
| Original language | English |
| --- | --- |
| Pages | 2961-2964 |
| Number of pages | 4 |
| State | Published - 2005 |
| Externally published | Yes |
| Event | 9th European Conference on Speech Communication and Technology, Lisbon, Portugal; Duration: 4 Sep 2005 → 8 Sep 2005 |
Conference
| Conference | 9th European Conference on Speech Communication and Technology |
| --- | --- |
| Country/Territory | Portugal |
| City | Lisbon |
| Period | 4/09/05 → 8/09/05 |