Abstract
Syntactic and semantic structure directly reflect relations expressed by the text at hand and are thus very useful for the relation extraction (RE) task. Their symbolic nature allows increased interpretability for end-users and developers, which is particularly appealing in RE. Although they have been somewhat overshadowed recently by the use of end-to-end neural network models and contextualized word embeddings, we show that they may be leveraged as input for neural networks to positive effect. We present two methods for integrating broad-coverage semantic structure (specifically, UCCA) into supervised neural RE models, demonstrating benefits over the use of exclusively syntactic integrations. The first method involves reduction of UCCA into a bilexical structure, while the second leverages a novel technique for encoding semantic DAG structures. Our approach is general and can be used for integrating a wide range of graph-based semantic structures.
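The first method the abstract mentions reduces UCCA graphs to a bilexical structure. As a toy illustration only (not the paper's actual conversion rules), the sketch below picks a lexical head for each UCCA-like unit via a hypothetical edge-label priority and emits head-to-head dependency triples; remote edges are kept when emitting dependencies, so the output can remain a DAG with reentrancies.

```python
# Toy sketch: reducing a UCCA-like DAG to bilexical dependencies.
# The head rules, labels, and graph encoding here are illustrative
# assumptions, not the method used in the paper.

# Hypothetical edge-label priority for choosing a lexical head
# (P = Process, S = State, C = Center, A = Participant, D = Adverbial, F = Function).
HEAD_PRIORITY = ["P", "S", "C", "A", "D", "F"]

def lexical_head(node, graph, terminals):
    """Recursively pick a terminal head for `node` by edge-label priority."""
    if node in terminals:
        return node
    # Remote edges are skipped when choosing a head.
    primary = [c for c in graph.get(node, []) if not c[2]]
    primary.sort(
        key=lambda c: HEAD_PRIORITY.index(c[1])
        if c[1] in HEAD_PRIORITY
        else len(HEAD_PRIORITY)
    )
    return lexical_head(primary[0][0], graph, terminals)

def to_bilexical(graph, terminals):
    """Emit (head_terminal, label, dependent_terminal) triples.

    Remote edges are emitted too, so a terminal may get several heads
    (the bilexical output stays a DAG).
    """
    deps = []
    for parent, children in graph.items():
        h = lexical_head(parent, graph, terminals)
        for child, label, remote in children:
            d = lexical_head(child, graph, terminals)
            if d != h:
                deps.append((h, label, d))
    return deps

# "John gave everything up" as a toy two-unit graph.
# Edges are (child, label, is_remote); "John" is also a remote
# Participant of U2, making the structure a DAG.
graph = {
    "U1": [("John", "A", False), ("gave", "P", False), ("U2", "A", False)],
    "U2": [("everything", "C", False), ("up", "E", False), ("John", "A", True)],
}
terminals = {"John", "gave", "everything", "up"}
print(to_bilexical(graph, terminals))
# → [('gave', 'A', 'John'), ('gave', 'A', 'everything'),
#    ('everything', 'E', 'up'), ('everything', 'A', 'John')]
```

Note how the remote edge gives "John" two heads in the bilexical output; a tree-only reduction would have to drop it, which is one motivation for the paper's second, DAG-encoding method.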
Original language | English
---|---
Title of host publication | Findings of the Association for Computational Linguistics
Subtitle of host publication | ACL-IJCNLP 2021
Editors | Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
Publisher | Association for Computational Linguistics (ACL)
Pages | 2614-2626
Number of pages | 13
ISBN (Electronic) | 9781954085541
State | Published - 2021
Event | Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, Virtual, Online, 1 Aug 2021 → 6 Aug 2021
Publication series

Name | Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
---|---
Conference

Conference | Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
---|---
City | Virtual, Online
Period | 1/08/21 → 6/08/21
Bibliographical note

Publisher Copyright: © 2021 Association for Computational Linguistics