Abstract
Multi-head attentive neural architectures have achieved state-of-the-art results on a variety of natural language processing tasks. Evidence has shown that they are overparameterized; attention heads can be pruned without significant performance loss. In this work, we instead “reallocate” them: the model learns to activate different heads on different inputs. Drawing connections between multi-head attention and mixture of experts, we propose the mixture of attentive experts model (MAE). MAE is trained using a block coordinate descent algorithm that alternates between updating (1) the responsibilities of the experts and (2) their parameters. Experiments on machine translation and language modeling show that MAE outperforms strong baselines on both tasks. In particular, on the WMT14 English-German translation dataset, MAE improves over “transformer-base” by 0.8 BLEU with a comparable number of parameters. Our analysis shows that our model learns to specialize different experts to different inputs.
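To make the training scheme in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of a block coordinate descent loop that alternates between updating (1) the expert responsibilities (here a linear gate) and (2) the expert parameters. Everything in it (`ToyExpert`, the gate, the responsibility-weighted `mixture_loss`, the toy classification data, and the hyperparameters) is an illustrative placeholder, not the paper's MAE architecture or objective.

```python
# Hypothetical sketch: alternate between updating the gate (responsibilities)
# and the experts' parameters, as in a block coordinate descent scheme.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
DIM, N_EXPERTS, N_CLASSES = 16, 4, 3

class ToyExpert(nn.Module):
    """Stand-in for one attentive expert (in MAE, a subset of attention heads)."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(DIM, N_CLASSES)

    def forward(self, x):
        return self.proj(x)

experts = nn.ModuleList([ToyExpert() for _ in range(N_EXPERTS)])
gate = nn.Linear(DIM, N_EXPERTS)  # produces per-input expert responsibilities

opt_experts = torch.optim.Adam(experts.parameters(), lr=1e-3)
opt_gate = torch.optim.Adam(gate.parameters(), lr=1e-3)

def mixture_loss(x, y):
    """Responsibility-weighted loss over all experts."""
    resp = F.softmax(gate(x), dim=-1)                      # [batch, n_experts]
    per_expert = torch.stack(
        [F.cross_entropy(e(x), y, reduction="none") for e in experts],
        dim=1)                                             # [batch, n_experts]
    return (resp * per_expert).sum(dim=1).mean()

for step in range(200):
    x = torch.randn(32, DIM)
    y = torch.randint(0, N_CLASSES, (32,))
    opt_gate.zero_grad()
    opt_experts.zero_grad()
    loss = mixture_loss(x, y)
    loss.backward()
    if step % 2 == 0:
        opt_gate.step()      # block 1: update responsibilities, experts held fixed
    else:
        opt_experts.step()   # block 2: update expert parameters, gate held fixed
```

The alternation keeps one block of variables fixed while the other is optimized, which is the structure the abstract describes; the paper's actual updates and objective for machine translation and language modeling are given in the full text.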
Original language | English |
---|---|
Title of host publication | ACL 2020 - 58th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 6566-6577 |
Number of pages | 12 |
ISBN (Electronic) | 9781952148255 |
State | Published - 2020 |
Externally published | Yes |
Event | 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020 - Virtual, Online, United States (5 Jul 2020 → 10 Jul 2020) |
Publication series
Name | Proceedings of the Annual Meeting of the Association for Computational Linguistics |
---|---|
ISSN (Print) | 0736-587X |
Conference
Conference | 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020 |
---|---|
Country/Territory | United States |
City | Virtual, Online |
Period | 5/07/20 → 10/07/20 |
Bibliographical note
Publisher Copyright: © 2020 Association for Computational Linguistics