Research in NLP is often supported by experimental results, and improved reporting of such results can lead to better understanding and more reproducible science. In this paper we analyze three statistical estimators for expected validation performance, a tool used for reporting performance (e.g., accuracy) as a function of computational budget (e.g., number of hyperparameter tuning experiments). Where previous work analyzing such estimators focused on the bias, we also examine the variance and mean squared error (MSE). In both synthetic and realistic scenarios, we evaluate the three estimators and find that the unbiased estimator has the highest variance, while the estimator with the smallest variance has the largest bias; the estimator with the smallest MSE strikes a balance between bias and variance, displaying a classic bias-variance tradeoff. We use expected validation performance to compare different models, and analyze how frequently each estimator leads to drawing incorrect conclusions about which of two models performs best. We find that the two biased estimators lead to the fewest incorrect conclusions, which hints at the importance of minimizing variance and MSE.
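As an illustrative sketch (not necessarily the exact estimators analyzed in this paper), two classical estimators of the expected maximum validation score over a budget of `b` hyperparameter trials, computed from `k` observed scores, can be written as follows. The plug-in estimator treats the `b` trials as draws with replacement from the empirical distribution and is biased; the without-replacement estimator is unbiased for the expected maximum of `b` of the `k` observed runs. Function names here are hypothetical.

```python
from math import comb


def plugin_expected_max(scores, budget):
    """Plug-in estimator: expected max of `budget` i.i.d. draws (with
    replacement) from the empirical distribution of `scores`. Biased
    downward as an estimate of the expected max over a fresh budget.

    E_hat = sum_i v_(i) * [(i/k)^b - ((i-1)/k)^b], v_(i) sorted ascending.
    """
    v = sorted(scores)
    k = len(v)
    b = budget
    return sum(v[i - 1] * ((i / k) ** b - ((i - 1) / k) ** b)
               for i in range(1, k + 1))


def unbiased_expected_max(scores, budget):
    """Without-replacement estimator: expected max of a simple random
    sample of `budget` of the `k` observed scores. Uses
    P(max = v_(i)) = C(i-1, b-1) / C(k, b) for i >= b.
    """
    v = sorted(scores)
    k = len(v)
    b = budget
    return sum(v[i - 1] * comb(i - 1, b - 1)
               for i in range(b, k + 1)) / comb(k, b)
```

At `budget = 1` both estimators reduce to the sample mean; at `budget = k` the without-replacement estimator returns the observed maximum exactly, while the plug-in estimator remains below it, which is one concrete way to see the bias the abstract refers to.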
| Field | Value |
| --- | --- |
| Original language | American English |
| Title of host publication | Findings of the Association for Computational Linguistics, Findings of ACL |
| Subtitle of host publication | EMNLP 2021 |
| Editors | Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-Tau Yih |
| Publisher | Association for Computational Linguistics (ACL) |
| Number of pages | 8 |
| State | Published - 2021 |
| Event | 2021 Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021 - Punta Cana, Dominican Republic |
| Duration | 7 Nov 2021 → 11 Nov 2021 |
Bibliographical note
Funding Information: Dallas Card was supported in part by the Stanford Data Science Institute.
© 2021 Association for Computational Linguistics.