Abstract
The integration of syntactic structures into Transformer machine translation has shown positive results, but to our knowledge, no work has attempted to do so with semantic structures. In this work we propose two novel parameter-free methods for injecting semantic information into Transformers, both of which rely on semantics-aware masking of (some of) the attention heads. One method operates on the encoder, through a Scene-Aware Self-Attention (SASA) head; the other operates on the decoder, through a Scene-Aware Cross-Attention (SACrA) head. We show a consistent improvement over the vanilla Transformer and syntax-aware models for four language pairs. We further show an additional gain when using both semantic and syntactic structures in some language pairs.
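The core idea in the abstract, masking (some of) the attention heads with scene-level semantic information, can be illustrated with a minimal sketch. The code below is an assumption-laden illustration rather than the paper's exact formulation: it shows a single attention head whose scores are restricted so that each token attends only to tokens in the same semantic scene. The `scene_ids` input and the `scene_aware_attention` helper are hypothetical names introduced here for illustration; in the paper, scene information would come from a semantic parse of the source sentence.

```python
# Illustrative sketch (not the paper's exact method): one attention head whose
# scores are masked by scene membership, so each token attends only to tokens
# that share its semantic scene.
import numpy as np

def scene_aware_attention(Q, K, V, scene_ids):
    """Scaled dot-product attention restricted by scene membership.

    Q, K, V:    arrays of shape (seq_len, d_k)
    scene_ids:  int array of shape (seq_len,), the scene each token belongs to
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # (seq_len, seq_len)

    # Boolean mask: True where query token i and key token j share a scene.
    same_scene = scene_ids[:, None] == scene_ids[None, :]
    scores = np.where(same_scene, scores, -1e9)          # block cross-scene attention

    # Row-wise softmax over the masked scores (each token shares a scene with
    # itself, so no row is fully masked).
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                    # (seq_len, d_k)

# Toy usage: 5 tokens, two scenes, random projections.
rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 8))
K = rng.standard_normal((5, 8))
V = rng.standard_normal((5, 8))
scene_ids = np.array([0, 0, 0, 1, 1])
out = scene_aware_attention(Q, K, V, scene_ids)
print(out.shape)  # (5, 8)
```

In the setup described in the abstract, such a mask would presumably be applied only to a subset of heads (SASA on the encoder's self-attention, SACrA on the decoder's cross-attention), leaving the remaining heads unmasked.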
Original language | English |
---|---|
Title of host publication | *SEM 2022 - 11th Joint Conference on Lexical and Computational Semantics, Proceedings of the Conference |
Editors | Vivi Nastase, Ellie Pavlick, Mohammad Taher Pilehvar, Jose Camacho-Collados, Alessandro Raganato |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 28-43 |
Number of pages | 16 |
ISBN (Electronic) | 9781955917988 |
State | Published - 2022 |
Event | 11th Joint Conference on Lexical and Computational Semantics, *SEM 2022 - Hybrid conference, Seattle, United States. Duration: 14 Jul 2022 → 15 Jul 2022. Conference number: 11. https://aclanthology.org/volumes/2022.starsem-1/ |
Publication series
Name | *SEM 2022 - 11th Joint Conference on Lexical and Computational Semantics, Proceedings of the Conference |
---|---|
Conference
Conference | 11th Joint Conference on Lexical and Computational Semantics, *SEM 2022 |
---|---|
Abbreviated title | *SEM 2022 |
Country/Territory | United States |
City | Seattle |
Period | 14/07/22 → 15/07/22 |
Internet address | https://aclanthology.org/volumes/2022.starsem-1/ |
Bibliographical note
Publisher Copyright: © 2022 Association for Computational Linguistics.