Fewer Errors, but More Stereotypes? The Effect of Model Size on Gender Bias

Yarden Tal, Inbal Magar, Roy Schwartz

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

The size of pretrained models is increasing, and so is their performance on a variety of NLP tasks. However, as their memorization capacity grows, they might pick up more social biases. In this work, we examine the connection between a model's size and its gender bias (specifically, occupational gender bias). We measure bias in three masked language model families (RoBERTa, DeBERTa, and T5) in two setups: directly, using a prompt-based method, and on a downstream task (Winogender). We find that, on the one hand, larger models receive higher bias scores on the former task, but when evaluated on the latter, they make fewer gender errors. To examine these potentially conflicting results, we carefully investigate the behavior of the different models on Winogender. We find that while larger models outperform smaller ones, the probability that their mistakes are caused by gender bias is higher. Moreover, the proportion of stereotypical errors relative to anti-stereotypical ones grows with model size. Our findings highlight the potential risks that can arise from increasing model size.
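
To make the prompt-based setup concrete, the sketch below shows one way such a probe could look for a masked language model from the RoBERTa family, using Hugging Face transformers. It compares the probabilities the model assigns to "he" versus "she" at a masked position in an occupation template. The template sentence, the pronoun pair, and the scoring are illustrative assumptions, not the paper's actual prompts or bias metric.

    # Minimal sketch of a prompt-based occupational-gender probe for a
    # masked LM. The template and pronouns are hypothetical; the paper's
    # exact prompts and bias score may differ.
    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    model_name = "roberta-large"  # swap in "roberta-base" to compare model sizes
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name)
    model.eval()

    def pronoun_probs(occupation):
        # Hypothetical template with a single masked pronoun slot.
        prompt = f"The {occupation} said that {tokenizer.mask_token} is tired."
        inputs = tokenizer(prompt, return_tensors="pt")
        mask_idx = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0].item()
        with torch.no_grad():
            probs = model(**inputs).logits[0, mask_idx].softmax(dim=-1)
        out = {}
        for pronoun in ("he", "she"):
            # RoBERTa's BPE vocabulary distinguishes tokens with a leading space.
            tok_id = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(" " + pronoun)[0])
            out[pronoun] = probs[tok_id].item()
        return out

    for occupation in ("nurse", "engineer"):
        print(occupation, pronoun_probs(occupation))

Running the same probe with a smaller and a larger checkpoint of the same family mirrors the paper's model-size comparison: the question is whether the gap between the two pronoun probabilities widens as the model grows.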

Original language: English
Title of host publication: GeBNLP 2022 - 4th Workshop on Gender Bias in Natural Language Processing, Proceedings of the Workshop
Editors: Christian Hardmeier, Christine Basta, Marta R. Costa-Jussa, Gabriel Stanovsky, Hila Gonen
Publisher: Association for Computational Linguistics (ACL)
Pages: 112-120
Number of pages: 9
ISBN (Electronic): 9781955917681
State: Published - 2022
Event: 4th Workshop on Gender Bias in Natural Language Processing, GeBNLP 2022 - Seattle, United States
Duration: 15 Jul 2022 → …

Publication series

Name: GeBNLP 2022 - 4th Workshop on Gender Bias in Natural Language Processing, Proceedings of the Workshop

Conference

Conference: 4th Workshop on Gender Bias in Natural Language Processing, GeBNLP 2022
Country/Territory: United States
City: Seattle
Period: 15/07/22 → …

Bibliographical note

Publisher Copyright:
© 2022 Association for Computational Linguistics.
