Probing neural models for the ability to perform downstream tasks using their activation patterns is often used to localize which parts of the network specialize in which tasks. However, little work has addressed potential mediating factors in such comparisons. As a test-case mediating factor, we consider the prediction’s context length, namely the length of the span whose processing is minimally required to perform the prediction. We show that failing to control for context length may lead to contradictory conclusions about the localization patterns of the network, depending on the distribution of the probing dataset. Indeed, when probing BERT with seven tasks, we find that it is possible to get 196 different rankings between them by manipulating the distribution of context lengths in the probing dataset. We conclude by presenting best practices for conducting such comparisons in the future.
|Original language||American English|
|Title of host publication||NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics|
|Subtitle of host publication||Human Language Technologies, Proceedings of the Conference|
|Editors||Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Zhou Yichao|
|Publisher||Association for Computational Linguistics (ACL)|
|Number of pages||8|
|State||Published - 2021|
|Event||2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021 - Virtual, Online|
Duration: 6 Jun 2021 → 11 Jun 2021
|Name||NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference|
|Conference||2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021|
|Period||6/06/21 → 11/06/21|
|Bibliographical note||Funding Information:|
This work was supported by the Israel Science Foundation (grant no. 929/17). We would also like to thank Amir Feder for his very insightful feedback on our paper.
© 2021 Association for Computational Linguistics.
- Computational Linguistics
- Natural language processing