Abstract
In this paper, we explore whether large language models can support cost-efficient information extraction from tables. We introduce schema-driven information extraction, a new task that transforms tabular data into structured records following a human-authored schema. To assess various LLMs' capabilities on this task, we present a benchmark comprising tables from four diverse domains: machine learning papers, chemistry literature, material science journals, and webpages. We use this collection of annotated tables to evaluate the ability of open-source and API-based language models to extract information from tables covering diverse domains and data formats. Our experiments demonstrate that surprisingly competitive performance can be achieved without task-specific pipelines or labels, with F1 scores ranging from 74.2 to 96.1, while maintaining cost efficiency. Moreover, through detailed ablation studies and analyses, we investigate the factors contributing to model success and validate the practicality of distilling compact models to reduce API reliance.
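To make the task concrete, the following is a minimal sketch of what schema-driven extraction could look like in practice: a human-authored schema and a table are composed into a prompt, and the model's reply is parsed into structured records. The schema contents, prompt wording, and names (`SCHEMA`, `build_prompt`, `extract_records`, `llm_call`) are illustrative assumptions, not the paper's implementation.

```python
import json

# Hypothetical attribute schema for result tables in ML papers
# (illustrative only; the paper's actual schemas differ).
SCHEMA = {
    "record": "experimental result",
    "attributes": ["model", "dataset", "metric", "value"],
}

def build_prompt(schema: dict, table_markdown: str) -> str:
    """Compose a zero-shot prompt: human-authored schema + table -> JSON records."""
    attrs = ", ".join(schema["attributes"])
    return (
        f"Extract every {schema['record']} from the table below.\n"
        f"Return a JSON list of objects with exactly these keys: {attrs}.\n\n"
        f"Table:\n{table_markdown}\n"
    )

def extract_records(table_markdown: str, llm_call) -> list[dict]:
    """llm_call: any callable mapping a prompt string to a model's text reply."""
    reply = llm_call(build_prompt(SCHEMA, table_markdown))
    return json.loads(reply)  # one dict per extracted record
```

Because `llm_call` is just a callable, the same sketch covers both open-source and API-based models, mirroring the comparison described in the abstract.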
Original language | English
---|---
Title of host publication | EMNLP 2024 - 2024 Conference on Empirical Methods in Natural Language Processing, Findings of EMNLP 2024
Editors | Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Publisher | Association for Computational Linguistics (ACL)
Pages | 10252-10273
Number of pages | 22
ISBN (Electronic) | 9798891761681
State | Published - 2024
Event | 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024 - Hybrid, Miami, United States
Duration | 12 Nov 2024 → 16 Nov 2024
Publication series
Name | EMNLP 2024 - 2024 Conference on Empirical Methods in Natural Language Processing, Findings of EMNLP 2024
---|---
Conference
Conference | 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024 |
---|---
Country/Territory | United States |
City | Hybrid, Miami |
Period | 12/11/24 → 16/11/24 |
Bibliographical note
Publisher Copyright: © 2024 Association for Computational Linguistics.