Abstract
Open information extraction (Open IE) was presented as an unrestricted variant of traditional information extraction. It has been gaining substantial attention, manifested by a large number of automatic Open IE extractors and downstream applications. In spite of this broad attention, the Open IE task definition has been lacking: there are no formal guidelines and no large-scale gold standard annotation. Consequently, the various implementations of Open IE resorted to small-scale post-hoc evaluations, inhibiting an objective and reproducible cross-system comparison. In this work, we develop a methodology that leverages the recent QA-SRL annotation to create a first independent and large-scale Open IE annotation, and use it to automatically compare the most prominent Open IE systems.
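To make the methodology concrete, below is a minimal, hypothetical Python sketch of how a QA-SRL annotation (a predicate with question-answer pairs) might be collapsed into an Open IE-style tuple. The class and function names are illustrative assumptions and do not reflect the authors' actual conversion procedure.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class QASRLEntry:
    """One QA-SRL annotation: a sentence, a predicate, and QA pairs about that predicate."""
    sentence: str
    predicate: str
    qa_pairs: List[Tuple[str, str]]  # (question, answer span)

def qasrl_to_openie(entry: QASRLEntry) -> Optional[Tuple[str, ...]]:
    """Collapse a QA-SRL entry into an Open IE-style tuple.

    Illustrative simplification (not the paper's exact procedure):
    treat every answer span as an argument of the predicate,
    yielding (arg0, predicate, arg1, ...).
    """
    args = [answer for _question, answer in entry.qa_pairs]
    if not args:
        return None
    return tuple([args[0], entry.predicate] + args[1:])

# Usage example on a toy sentence.
entry = QASRLEntry(
    sentence="Barack Obama visited Paris in 2009.",
    predicate="visited",
    qa_pairs=[
        ("Who visited something?", "Barack Obama"),
        ("What did someone visit?", "Paris"),
        ("When did someone visit something?", "in 2009"),
    ],
)
print(qasrl_to_openie(entry))
# ('Barack Obama', 'visited', 'Paris', 'in 2009')
```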
Original language | English |
---|---|
Title of host publication | EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 2300-2305 |
Number of pages | 6 |
ISBN (Electronic) | 9781945626258 |
DOIs | |
State | Published - 2016 |
Externally published | Yes |
Event | 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016 - Austin, United States. Duration: 1 Nov 2016 → 5 Nov 2016 |
Publication series
Name | EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings |
---|---|
Conference
Conference | 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016 |
---|---|
Country/Territory | United States |
City | Austin |
Period | 1/11/16 → 5/11/16 |
Bibliographical note
Publisher Copyright: © 2016 Association for Computational Linguistics