BLEU is widely considered an informative metric for text-to-text generation, including Text Simplification (TS). TS involves both lexical and structural aspects. In this paper we show that BLEU is not suitable for evaluating sentence splitting, the major structural simplification operation. We manually compiled a sentence splitting gold standard corpus containing multiple structural paraphrases, and performed a correlation analysis with human judgments. We find low or no correlation between BLEU and the grammaticality and meaning preservation parameters where sentence splitting is involved. Moreover, BLEU often correlates negatively with simplicity, essentially penalizing simpler sentences.
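The effect described in the abstract can be illustrated with a minimal sketch of standard sentence-level BLEU (clipped n-gram precision, brevity penalty, add-one smoothing). The example sentences below are illustrative assumptions, not drawn from the paper's corpus: a split paraphrase that preserves meaning still scores well below the identity transformation against a single unsplit reference.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU with one reference: geometric mean of clipped
    n-gram precisions (add-one smoothed) times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        ref_counts = Counter(ngrams(reference, n))
        hyp_counts = Counter(ngrams(hypothesis, n))
        # Clip each hypothesis n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        # Add-one smoothing so a single empty n-gram order does not zero the score.
        precisions.append((overlap + 1) / (total + 1))
    # Brevity penalty: penalize hypotheses shorter than the reference.
    if len(hypothesis) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(hypothesis), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

# Hypothetical example: one complex sentence vs. a meaning-preserving split.
original = "John , who was the CEO of the company , played golf .".split()
split    = "John played golf . John was the CEO of the company .".split()

identity_score = bleu(original, original)   # copying the input scores 1.0
split_score = bleu(original, split)         # the simpler split is penalized
```

The split output reorders and duplicates material, so its higher-order n-gram overlap with the unsplit reference drops sharply, which is exactly why n-gram metrics tend to punish structural simplification.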
|Original language||American English|
|Title of host publication||Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018|
|Editors||Ellen Riloff, David Chiang, Julia Hockenmaier, Jun'ichi Tsujii|
|Publisher||Association for Computational Linguistics|
|Number of pages||7|
|State||Published - 2018|
|Event||2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018 - Brussels, Belgium|
|Duration||31 Oct 2018 → 4 Nov 2018|
Bibliographical note (Funding Information):
We would like to thank the annotators for participating in our generation and evaluation experiments. We also thank the anonymous reviewers for their helpful advice. This work was partially supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI) and by the Israel Science Foundation (grant No. 929/17), as well as by the HUJI Cyber Security Research Center in conjunction with the Israel National Cyber Bureau in the Prime Minister's Office.
© 2018 Association for Computational Linguistics