Recent works have found evidence of gender bias in models of machine translation and coreference resolution, using mostly synthetic diagnostic datasets. While these quantify bias in a controlled experiment, they often do so on a small scale and consist mostly of artificial, out-of-distribution sentences. In this work, we find grammatical patterns indicating stereotypical and non-stereotypical gender-role assignments (e.g., female nurses versus male dancers) in corpora from three domains, resulting in a first large-scale gender bias dataset of 108K diverse real-world English sentences. We manually verify the quality of our corpus and use it to evaluate gender bias in various coreference resolution and machine translation models. We find that all tested models tend to over-rely on gender stereotypes when presented with natural inputs, which may be especially harmful when deployed in commercial systems. Finally, we show that our dataset lends itself to finetuning a coreference resolution model, finding that this mitigates bias on a held-out set. Our dataset and models are publicly available at github.com/SLAB-NLP/BUG. We hope they will spur future research into gender bias evaluation and mitigation techniques in realistic settings.
|Original language||American English|
|Title of host publication||Findings of the Association for Computational Linguistics, Findings of ACL|
|Subtitle of host publication||EMNLP 2021|
|Editors||Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-Tau Yih|
|Publisher||Association for Computational Linguistics (ACL)|
|Number of pages||11|
|State||Published - 2021|
|Event||2021 Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021 - Punta Cana, Dominican Republic|
Duration: 7 Nov 2021 → 11 Nov 2021
|Name||Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021|
|Conference||2021 Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021|
|Period||7/11/21 → 11/11/21|
Bibliographical note
Funding Information:
We thank Micah Shlain, Hillel Taub-Tabib, Shoval Sadde, and Yoav Goldberg for their help with SPIKE during our experiments, for fruitful discussions and their comments on earlier drafts of the paper, and the anonymous reviewers for their helpful comments and feedback. This work was supported in part by a research gift from the Allen Institute for AI.
© 2021 Association for Computational Linguistics.