Abstract
We show that the state-of-the-art Transformer MT model is not biased towards monotonic reordering (unlike previous recurrent neural network models), but that nevertheless, long-distance dependencies remain a challenge for the model. Since most dependencies are short-distance, common evaluation metrics will be little influenced by how well systems perform on them. We therefore propose an automatic approach for extracting challenge sets replete with long-distance dependencies, and argue that evaluation using this methodology provides a complementary perspective on system performance. To support our claim, we compile challenge sets for English-German and German-English, which are much larger than any previously released challenge set for MT. The extracted sets are large enough to allow reliable automatic evaluation, which makes the proposed approach a scalable and practical solution for evaluating MT performance on the long tail of syntactic phenomena.
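The abstract does not spell out the extraction procedure, and the sketch below is not the paper's pipeline. It only illustrates the general idea of selecting sentences that contain long-distance dependencies by filtering on maximum dependency-arc length with an off-the-shelf parser (spaCy here). The function names and the threshold value are illustrative assumptions.

```python
# Minimal sketch (not the paper's method): keep sentences whose parse contains
# at least one long dependency arc, using spaCy's dependency parser.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed


def max_arc_length(doc):
    """Longest head-dependent distance (in tokens) in the sentence's parse."""
    return max(
        (abs(tok.i - tok.head.i) for tok in doc if tok.head is not tok),
        default=0,
    )


def extract_challenge_sentences(sentences, min_arc_length=8):
    """Yield sentences with at least one arc of length >= min_arc_length.

    The threshold is an illustrative assumption, not a value from the paper.
    """
    for sent in sentences:
        doc = nlp(sent)
        if max_arc_length(doc) >= min_arc_length:
            yield sent


# Toy usage: only the second sentence contains a long head-dependent arc.
corpus = [
    "The cat sat on the mat.",
    "The report that the committee had commissioned last year was finally published.",
]
print(list(extract_challenge_sentences(corpus, min_arc_length=6)))
```

Because such a filter is fully automatic, it can be run over large parallel corpora, which is consistent with the abstract's claim that the resulting challenge sets are large enough for reliable automatic evaluation.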
Original language | English |
---|---|
Title of host publication | CoNLL 2019 - 23rd Conference on Computational Natural Language Learning, Proceedings of the Conference |
Publisher | Association for Computational Linguistics |
Pages | 291-303 |
Number of pages | 13 |
ISBN (Electronic) | 9781950737727 |
State | Published - 2019 |
Event | 23rd Conference on Computational Natural Language Learning, CoNLL 2019 - Hong Kong, China. Duration: 3 Nov 2019 → 4 Nov 2019 |
Publication series
Name | CoNLL 2019 - 23rd Conference on Computational Natural Language Learning, Proceedings of the Conference |
---|---|
Conference
Conference | 23rd Conference on Computational Natural Language Learning, CoNLL 2019 |
---|---|
Country/Territory | China |
City | Hong Kong |
Period | 3/11/19 → 4/11/19 |
Bibliographical note
Publisher Copyright: © 2019 Association for Computational Linguistics.