Abstract
Recent works have shown that supervised models often exploit data artifacts to achieve good test scores while their performance severely degrades on samples outside their training distribution. Contrast sets (Gardner et al., 2020) quantify this phenomenon by perturbing test samples in a minimal way such that the output label is modified. While most contrast sets were created manually, requiring intensive annotation effort, we present a novel method which leverages a rich semantic input representation to automatically generate contrast sets for the visual question answering task. Our method computes the answers to perturbed questions, thus vastly reducing annotation cost and enabling thorough evaluation of models’ performance on various semantic aspects (e.g., spatial or relational reasoning). We demonstrate the effectiveness of our approach on the popular GQA dataset (Hudson and Manning, 2019) and its semantic scene graph image representation. We find that, despite GQA’s compositionality and carefully balanced label distribution, two strong models drop 13–17% in accuracy on our automatically constructed contrast set compared to the original validation set. Finally, we show that our method can be applied to the training set to mitigate the degradation in performance, opening the door to more robust models.
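To make the idea concrete, below is a minimal, hypothetical sketch (in Python) of how a scene graph allows a perturbed question to be answered automatically, yielding a contrast example with a flipped label. The toy scene graph, question form, and helper names here are illustrative assumptions and not the paper's actual implementation, which operates on GQA's full scene graphs and question programs.

```python
# Hypothetical sketch (not the authors' code): answer an attribute question
# directly from a scene graph, then perturb the attribute so the answer flips.

# Toy scene graph: objects with attributes (GQA graphs also include relations).
scene_graph = {
    "apple": {"attributes": ["red", "small"]},
    "table": {"attributes": ["wooden", "brown"]},
}

def answer_exists(graph, name, attribute):
    """Answer 'Is there a <attribute> <name>?' from the scene graph."""
    obj = graph.get(name)
    return "yes" if obj is not None and attribute in obj["attributes"] else "no"

def perturb_attribute(graph, name, alternatives):
    """Pick an attribute the object does NOT have, so the recomputed
    answer differs from the original -- a minimal contrast example."""
    for alt in alternatives:
        if alt not in graph[name]["attributes"]:
            return alt
    return None

# Original question: "Is there a red apple?" -> "yes"
print(answer_exists(scene_graph, "apple", "red"))

# Perturbed question: swap the attribute and recompute the answer -> "no"
new_attr = perturb_attribute(scene_graph, "apple", ["green", "blue"])
print(f"Is there a {new_attr} apple?", answer_exists(scene_graph, "apple", new_attr))
```

Because the answer to the perturbed question is computed from the scene graph rather than annotated by hand, this kind of procedure can be scaled across semantic aspects (attributes, spatial relations, etc.) at negligible annotation cost.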
| Original language | English |
|---|---|
| Title of host publication | NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics |
| Subtitle of host publication | Human Language Technologies, Proceedings of the Conference |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 94-105 |
| Number of pages | 12 |
| ISBN (Electronic) | 9781954085466 |
| State | Published - 2021 |
| Event | 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021 - Virtual, Online. Duration: 6 Jun 2021 → 11 Jun 2021 |
Publication series

| Name | NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference |
|---|---|
Conference

| Conference | 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021 |
|---|---|
| City | Virtual, Online |
| Period | 6/06/21 → 11/06/21 |
Bibliographical note
Publisher Copyright: © 2021 Association for Computational Linguistics.