Reinforcement Learning with Large Action Spaces for Neural Machine Translation

Asaf Yehudai, Leshem Choshen, Lior Fox, Omri Abend

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Applying reinforcement learning (RL) after maximum likelihood estimation (MLE) pre-training is a versatile method for enhancing neural machine translation (NMT) performance. However, recent work has argued that the gains RL produces for NMT mostly come from promoting tokens that already received fairly high probability during pre-training. We hypothesize that the large action space is a main obstacle to RL’s effectiveness in MT, and conduct two sets of experiments that lend support to our hypothesis. First, we find that reducing the size of the vocabulary improves RL’s effectiveness. Second, we find that effectively reducing the dimension of the action space without changing the vocabulary also yields notable improvements, as evaluated by BLEU, semantic similarity, and human evaluation. Indeed, by initializing the network’s final fully connected layer (which maps the network’s internal dimension to the vocabulary dimension) with a layer that generalizes over similar actions, we obtain a substantial improvement in RL performance: 1.5 BLEU points on average.
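To make the idea concrete, below is a minimal PyTorch sketch of the general setup the abstract describes: an output projection initialized so that similar tokens receive similar logits, followed by a REINFORCE-style policy-gradient step. This is a hypothetical illustration, not the authors' released code; in particular, it assumes the "generalizing" initialization takes the form of copying pretrained target-token embeddings into the projection weights, and `pretrained_embeddings`, the layer sizes, and the reward value are all placeholder assumptions.

```python
# Hypothetical sketch (not the paper's implementation): initialize the
# decoder's output projection from pretrained target-token embeddings so
# that tokens with similar embeddings get similar logits, letting a
# policy-gradient update generalize across related actions.
import torch
import torch.nn as nn

hidden_dim, vocab_size = 512, 32000  # illustrative sizes

# Assumed input: a (vocab_size, hidden_dim) matrix of target-token
# embeddings, e.g. taken from the MLE-pretrained model. Random here
# only to keep the sketch self-contained.
pretrained_embeddings = torch.randn(vocab_size, hidden_dim)

output_proj = nn.Linear(hidden_dim, vocab_size, bias=False)
with torch.no_grad():
    # weight has shape (vocab_size, hidden_dim), matching the embeddings
    output_proj.weight.copy_(pretrained_embeddings)

# One RL step on a single decoder state (placeholder for a real decoder).
decoder_state = torch.randn(1, hidden_dim)
logits = output_proj(decoder_state)                # (1, vocab_size)
probs = torch.softmax(logits, dim=-1)
action = torch.multinomial(probs, num_samples=1)   # sample the next token

# REINFORCE-style loss: reward would be a sentence-level score such as
# BLEU; a fixed placeholder scalar is used here.
log_prob = torch.log_softmax(logits, dim=-1).gather(1, action)
reward = torch.tensor(0.7)
loss = -(reward * log_prob).mean()
loss.backward()
```

Because the projection rows start out close for semantically similar tokens, the gradient through `output_proj` moves the logits of related tokens together, which is one way to read the abstract's claim of effectively reducing the dimension of the action space without shrinking the vocabulary.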
Original language: English
Title of host publication: COLING 2022
Publisher: Association for Computational Linguistics, ACL Anthology
Pages: 4544-4556
Number of pages: 13
State: Published - 2022
Event: 29th International Conference on Computational Linguistics, COLING 2022 - Gyeongju, Korea, Republic of
Duration: 12 Oct 2022 - 17 Oct 2022
Conference number: 29

Conference

Conference: 29th International Conference on Computational Linguistics, COLING 2022
Abbreviated title: COLING 2022
Country/Territory: Korea, Republic of
City: Gyeongju
Period: 12/10/22 - 17/10/22
