Abstract
Human evaluation of machine translation normally uses sentence-level measures such as relative ranking or adequacy scales. However, these provide no insight into possible errors, and do not scale well with sentence length. We argue for a semantics-based evaluation, which captures what meaning components are retained in the MT output, thus providing a more fine-grained analysis of translation quality, and enabling the construction and tuning of semantics-based MT. We present a novel human semantic evaluation measure, Human UCCA-based MT Evaluation (HUME), building on the UCCA semantic representation scheme. HUME covers a wider range of semantic phenomena than previous methods and does not rely on semantic annotation of the potentially garbled MT output. We experiment with four language pairs, demonstrating HUME's broad applicability, and report good inter-annotator agreement rates and correlation with human adequacy scores.
Original language | English |
---|---|
Title of host publication | EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 1264-1274 |
Number of pages | 11 |
ISBN (Electronic) | 9781945626258 |
DOIs | |
State | Published - 2016 |
Event | 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016 - Austin, United States; Duration: 1 Nov 2016 → 5 Nov 2016 |
Publication series
Name | EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings |
---|---|
Conference
Conference | 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016 |
---|---|
Country/Territory | United States |
City | Austin |
Period | 1/11/16 → 5/11/16 |
Bibliographical note
Publisher Copyright: © 2016 Association for Computational Linguistics