Despite the impressive growth in the abilities of multilingual language models, such as XLM-R and mT5, it has been shown that they still face difficulties when tackling typologically distant languages, particularly in the low-resource setting. One obstacle to effective cross-lingual transfer is variability in word-order patterns. This variability can potentially be mitigated via source- or target-side word reordering, and numerous approaches to reordering have been proposed. However, they rely on language-specific rules, operate at the level of POS tags, or target only the main clause, leaving subordinate clauses intact. To address these limitations, we present a powerful new reordering method, defined in terms of Universal Dependencies, that is able to learn fine-grained word-order patterns conditioned on the syntactic context from a small amount of annotated data and can be applied at all levels of the syntactic tree. We conduct experiments on a diverse set of tasks and show that our method consistently outperforms strong baselines across different language pairs and model architectures. This performance advantage holds in both zero-shot and few-shot scenarios.
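As a rough illustration of what UD-based reordering involves, the sketch below permutes the dependents of each head according to relation-level ordering scores and applies this recursively, so subordinate clauses are reordered as well, not just the main clause. Everything here (Token, TOY_ORDER, linearize, and the scores themselves) is a hypothetical stand-in, not the authors' implementation: the paper's method learns fine-grained, context-conditioned word-order patterns from a small annotated sample, whereas this toy table simply forces an SOV-like order.

```python
from dataclasses import dataclass

@dataclass
class Token:
    idx: int      # 1-based position in the original sentence
    form: str     # surface form
    head: int     # index of the syntactic head (0 = root)
    deprel: str   # Universal Dependencies relation label

# Hypothetical learned scores: lower score = placed further to the left
# among a head's dependents; "HEAD" marks the head word itself. These
# toy values stand in for preferences estimated from annotated data.
TOY_ORDER = {"nsubj": 0, "obj": 1, "HEAD": 2, "advmod": 3}

def linearize(head_idx, children, tokens):
    """Recursively order a head together with all of its dependents."""
    items = [(TOY_ORDER["HEAD"], [tokens[head_idx].form])]
    for dep in children.get(head_idx, []):
        items.append((TOY_ORDER.get(dep.deprel, TOY_ORDER["HEAD"]),
                      linearize(dep.idx, children, tokens)))
    items.sort(key=lambda pair: pair[0])   # stable sort keeps ties in place
    return [form for _, subtree in items for form in subtree]

def reorder(sentence):
    """Reorder a UD-parsed sentence at every level of the tree."""
    tokens = {t.idx: t for t in sentence}
    children = {}
    for t in sentence:
        children.setdefault(t.head, []).append(t)
    root = children[0][0]                  # assume a single root
    return linearize(root.idx, children, tokens)

# "She reads books quickly" rearranged into an SOV-like target order.
sent = [Token(1, "She", 2, "nsubj"), Token(2, "reads", 0, "root"),
        Token(3, "books", 2, "obj"), Token(4, "quickly", 2, "advmod")]
print(" ".join(reorder(sent)))  # -> "She books reads quickly"
```

Because the recursion descends into every subtree, a dependent that is itself a clausal head gets its own arguments reordered by the same scores, which is what "applied at all levels of the syntactic tree" amounts to in this sketch.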
|Title of host publication: Findings of the Association for Computational Linguistics: EMNLP 2023
|Publisher: Association for Computational Linguistics (ACL)
|Published: 2023
|Event: Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, Singapore
|Duration: 6 Dec 2023 → 10 Dec 2023
Bibliographical note: Publisher Copyright © 2023 Association for Computational Linguistics.