Abstract
The patterns in which the syntax of different languages converges and diverges are often used to inform work on cross-lingual transfer. Nevertheless, little empirical work has been done on quantifying the prevalence of different syntactic divergences across language pairs. We propose a framework for extracting divergence patterns for any language pair from a parallel corpus, building on Universal Dependencies (UD; Nivre et al., 2016). We show that our framework provides a detailed picture of cross-language divergences, generalizes previous approaches, and lends itself to full automation. We further present a novel dataset, a manually word-aligned subset of the Parallel UD corpus in five languages, and use it to perform a detailed corpus study. We demonstrate the usefulness of the resulting analysis by showing that it can help account for performance patterns of a cross-lingual parser.
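The core idea the abstract describes, comparing dependency structure across languages through a word alignment, can be illustrated with a small sketch. The snippet below is not the authors' framework; it is a hypothetical, minimal illustration of the general technique: given two dependency-parsed sentences (here as lightweight tuples rather than real UD/CoNLL-U objects) and a word alignment, it counts how often an aligned dependency edge keeps or changes its relation label, which is one simple way to surface divergence patterns.

```python
# Minimal illustrative sketch (not the paper's actual framework): given two
# dependency-parsed sentences and a word alignment between them, count how
# often an aligned dependency edge keeps or changes its UD relation label.
from collections import Counter

# A token is (index, head_index, deprel); head 0 denotes the sentence root.
# This tuple representation is a simplification chosen for the example.

def edge_label_pairs(src_tokens, tgt_tokens, alignment):
    """Yield (src_deprel, tgt_deprel) for dependency edges whose endpoints
    are both aligned: a source edge i -> h maps onto a target edge j -> g
    whenever (i, j) and (h, g) are both in the word alignment."""
    align = set(alignment)
    tgt_head = {idx: head for idx, head, _ in tgt_tokens}
    tgt_rel = {idx: rel for idx, _, rel in tgt_tokens}
    for i, h, src_rel in src_tokens:
        for j in (j for (a, j) in align if a == i):
            g = tgt_head.get(j)
            if g is not None and (h, g) in align:
                yield (src_rel, tgt_rel[j])

# Toy, hand-made example: a source "det" edge whose aligned counterpart is
# labeled "nmod" in the target parse (hypothetical data, not from the corpus).
src = [(1, 2, "det"), (2, 0, "root")]
tgt = [(1, 2, "nmod"), (2, 0, "root")]
alignment = [(1, 1), (2, 2)]

counts = Counter(edge_label_pairs(src, tgt, alignment))
print(counts)  # Counter({('det', 'nmod'): 1}) -- one det->nmod divergence
```

Aggregating such label pairs over a word-aligned parallel corpus gives a frequency table of convergent (same label) versus divergent (different label) edges per language pair; the paper's framework is considerably more fine-grained, but this is the basic flavor of the comparison.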
Original language | English |
---|---|
Title of host publication | ACL 2020 - 58th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 1159-1176 |
Number of pages | 18 |
ISBN (Electronic) | 9781952148255 |
State | Published - 2020 |
Event | 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020 - Virtual, Online, United States. Duration: 5 Jul 2020 → 10 Jul 2020 |
Publication series
Name | Proceedings of the Annual Meeting of the Association for Computational Linguistics |
---|---|
ISSN (Print) | 0736-587X |
Conference
Conference | 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020 |
---|---|
Country/Territory | United States |
City | Virtual, Online |
Period | 5/07/20 → 10/07/20 |
Bibliographical note
Publisher Copyright: © 2020 Association for Computational Linguistics