Abstract
We present the first challenge set and evaluation protocol for the analysis of gender bias in machine translation (MT). Our approach uses two recent coreference resolution datasets composed of English sentences which cast participants into non-stereotypical gender roles (e.g., “The doctor asked the nurse to help her in the operation”). We devise an automatic gender bias evaluation method for eight target languages with grammatical gender, based on morphological analysis (e.g., the use of female inflection for the word “doctor”). Our analyses show that four popular industrial MT systems and two recent state-of-the-art academic MT models are significantly prone to gender-biased translation errors for all tested target languages. Our data and code are publicly available at https://github.com/gabrielStanovsky/mt_gender.
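The evaluation idea described above can be illustrated with a minimal sketch: given the gold gender of a participant in the English source (e.g., the doctor referred to by "her") and the word an MT system produced for that participant, a morphological check on the target-language form decides whether gender was preserved. The Spanish suffix heuristic and function names below are illustrative assumptions, not the paper's actual pipeline (which is available at the repository linked above).

```python
def detect_gender_es(word: str) -> str:
    """Very rough Spanish morphological gender heuristic (an assumption
    for illustration; real morphological analyzers are far more robust)."""
    if word.endswith(("a", "ora", "triz")):
        return "female"
    return "male"

def is_biased(translated_entity: str, gold_gender: str) -> bool:
    """Flag a translation whose morphological gender disagrees with the
    gold gender of the English antecedent."""
    return detect_gender_es(translated_entity) != gold_gender

# Gold says the doctor is female ("...to help her..."), but a biased MT
# output may use the male form "doctor" instead of "doctora".
print(is_biased("doctor", "female"))   # mismatch: biased translation
print(is_biased("doctora", "female"))  # female inflection: not biased
```

Aggregating this per-sentence check over a challenge set of non-stereotypical sentences yields the bias measurement the abstract describes.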
| Original language | English |
| --- | --- |
| Title of host publication | ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 1679-1684 |
| Number of pages | 6 |
| ISBN (Electronic) | 9781950737482 |
| State | Published - 2020 |
| Externally published | Yes |
| Event | 57th Annual Meeting of the Association for Computational Linguistics, ACL 2019 - Florence, Italy. Duration: 28 Jul 2019 → 2 Aug 2019 |
Publication series
| Name | ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference |
| --- | --- |
Conference
| Conference | 57th Annual Meeting of the Association for Computational Linguistics, ACL 2019 |
| --- | --- |
| Country/Territory | Italy |
| City | Florence |
| Period | 28/07/19 → 2/08/19 |
Bibliographical note
Publisher Copyright: © 2019 Association for Computational Linguistics