Abstract
Recent work has shown that deep learning models in NLP are highly sensitive to low-level correlations between simple features and specific output labels, leading to overfitting and lack of generalization. To mitigate this problem, a common practice is to balance datasets by adding new instances or by filtering out "easy" instances (Sakaguchi et al., 2020), culminating in a recent proposal to eliminate single-word correlations altogether (Gardner et al., 2021). In this opinion paper, we identify that despite these efforts, increasingly powerful models keep exploiting ever-smaller spurious correlations, and as a result even balancing all single-word features is insufficient for mitigating all of these correlations. In parallel, a truly balanced dataset may be bound to "throw the baby out with the bathwater" and miss important signal encoding common sense and world knowledge. We highlight several alternatives to dataset balancing, focusing on enhancing datasets with richer contexts, allowing models to abstain and interact with users, and turning from large-scale fine-tuning to zero- or few-shot setups.
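To make the notion of "single-word correlations" concrete, the sketch below flags words whose conditional label distribution deviates significantly from uniform, in the spirit of the framing in Gardner et al. (2021), who argue that p(label | word) should be uninformative for every word. This is a minimal illustrative sketch, not the paper's code: the function name `flag_correlated_words`, the `(tokens, label)` data format, and the significance threshold are all hypothetical choices.

```python
from collections import Counter, defaultdict
import math

def flag_correlated_words(examples):
    """Flag words whose label distribution deviates from a uniform null.

    examples: list of (tokens, label) pairs,
              e.g. [(["a", "good", "movie"], "positive"), ...]
    Hypothetical sketch: assumes at least two distinct labels.
    """
    labels = {label for _, label in examples}
    p0 = 1.0 / len(labels)  # uniform null: p(label | word) = 1 / #labels
    word_label = defaultdict(Counter)
    for tokens, label in examples:
        for word in set(tokens):  # count each word once per example
            word_label[word][label] += 1
    flagged = {}
    for word, counts in word_label.items():
        n = sum(counts.values())  # number of examples containing the word
        for label, k in counts.items():
            # z-statistic of observed p(label | word) against the uniform null
            z = (k / n - p0) / math.sqrt(p0 * (1 - p0) / n)
            if abs(z) > 2.58:  # rough two-sided threshold (~alpha = 0.01)
                flagged[word] = (label, k / n, n)
    return flagged
```

On a binary sentiment dataset, for instance, this would flag a word like "amazing" that co-occurs almost exclusively with the positive label; a balancing procedure would then add or filter instances until no word is flagged, which is precisely the practice the paper argues is insufficient on its own.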
| Original language | English |
| --- | --- |
| Title of host publication | Findings of the Association for Computational Linguistics |
| Subtitle of host publication | NAACL 2022 - Findings |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 2182-2194 |
| Number of pages | 13 |
| ISBN (Electronic) | 9781955917766 |
| State | Published - 2022 |
| Event | 2022 Findings of the Association for Computational Linguistics: NAACL 2022 - Seattle, United States. Duration: 10 Jul 2022 → 15 Jul 2022 |
Publication series
| Name | Findings of the Association for Computational Linguistics: NAACL 2022 - Findings |
| --- | --- |
Conference
| Conference | 2022 Findings of the Association for Computational Linguistics: NAACL 2022 |
| --- | --- |
| Country/Territory | United States |
| City | Seattle |
| Period | 10/07/22 → 15/07/22 |
Bibliographical note
Publisher Copyright: © Findings of the Association for Computational Linguistics: NAACL 2022 - Findings.