Abstract
Text Simplification (TS) is the task of converting a text into a form that is easier to read while preserving the meaning of the original. A sub-task of TS is Cognitive Simplification (CS): converting text into a form that is readily understood by people with cognitive disabilities without rendering it childish or simplistic. This sub-task has yet to be explored with neural methods in NLP, and resources for it are scarce. In this paper, we present a method for incorporating knowledge from the cognitive accessibility domain into a TS model by introducing an inductive bias regarding which simplification operations to use. We show that by adding this inductive bias to a TS-trained model, it is able to adapt better to CS without ever seeing CS data, and to outperform a baseline model on a traditional TS benchmark. In addition, we provide a novel test dataset for CS, and analyze the differences between CS corpora and existing TS corpora in terms of how simplification operations are applied.
| Original language | English |
|---|---|
| Title of host publication | CoNLL 2022 - 26th Conference on Computational Natural Language Learning, Proceedings of the Conference |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 241-265 |
| Number of pages | 25 |
| ISBN (Electronic) | 9781959429074 |
| State | Published - 2022 |
| Event | 26th Conference on Computational Natural Language Learning, CoNLL 2022, collocated and co-organized with EMNLP 2022, Abu Dhabi, United Arab Emirates. Duration: 7 Dec 2022 → 8 Dec 2022 |
Publication series

| Name | CoNLL 2022 - 26th Conference on Computational Natural Language Learning, Proceedings of the Conference |
|---|---|
Conference

| Conference | 26th Conference on Computational Natural Language Learning, CoNLL 2022, collocated and co-organized with EMNLP 2022 |
|---|---|
| Country/Territory | United Arab Emirates |
| City | Abu Dhabi |
| Period | 7 Dec 2022 → 8 Dec 2022 |
Bibliographical note
Publisher Copyright: © 2022 Association for Computational Linguistics.