Transformers are Multi-State RNNs

Matanel Oren, Michael Hassid, Nir Yarden, Yossi Adi, Roy Schwartz

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Transformers are considered conceptually different from the previous generation of state-of-the-art NLP models, recurrent neural networks (RNNs). In this work, we demonstrate that decoder-only transformers can in fact be conceptualized as unbounded multi-state RNNs, an RNN variant with unlimited hidden state size. We further show that transformers can be converted into bounded multi-state RNNs by fixing the size of their hidden state, effectively compressing their key-value cache. We introduce a novel, training-free compression policy: Token Omission Via Attention (TOVA). Our experiments with four long-range tasks and several LLMs show that TOVA outperforms several baseline compression policies. In particular, our results are nearly on par with the full model, using in some cases only 1/8 of the original cache size, which translates to 4.8X higher throughput. Our results shed light on the connection between transformers and RNNs, and help mitigate one of LLMs' most painful computational bottlenecks: the size of their key-value cache.
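
To make the idea sketched in the abstract concrete (bounding the multi-state RNN by evicting tokens from the key-value cache according to attention), below is a minimal, self-contained NumPy sketch of one TOVA-style eviction step. The single-head setup, the function names, and the exact eviction rule (dropping the cached token the current query attends to least) are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def attention_weights(query, keys):
    """Softmax attention weights of the current query over the cached keys."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)      # (cache_len,)
    scores -= scores.max()                  # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()

def evict_lowest_attention(keys, values, query, max_cache_size):
    """If the cache exceeds max_cache_size, drop the least-attended cached token."""
    if keys.shape[0] <= max_cache_size:
        return keys, values
    weights = attention_weights(query, keys)
    keep = np.ones(keys.shape[0], dtype=bool)
    keep[int(np.argmin(weights))] = False   # token with the lowest attention weight
    return keys[keep], values[keep]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d_model, cache_limit = 16, 8
    keys = rng.standard_normal((cache_limit + 1, d_model))    # cache just overflowed
    values = rng.standard_normal((cache_limit + 1, d_model))
    query = rng.standard_normal(d_model)                      # current decoding step
    keys, values = evict_lowest_attention(keys, values, query, cache_limit)
    print(keys.shape, values.shape)                           # (8, 16) (8, 16)
```

Keeping the cache at a fixed size in this manner is what yields the memory and throughput savings reported in the abstract, since each decoding step attends over at most max_cache_size cached tokens.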

Original language: English
Title of host publication: EMNLP 2024 - 2024 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Publisher: Association for Computational Linguistics (ACL)
Pages: 18724-18741
Number of pages: 18
ISBN (Electronic): 9798891761643
State: Published - 2024
Event: 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024 - Hybrid, Miami, United States
Duration: 12 Nov 2024 - 16 Nov 2024

Publication series

Name: EMNLP 2024 - 2024 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference

Conference

Conference: 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Country/Territory: United States
City: Hybrid, Miami
Period: 12/11/24 - 16/11/24

Bibliographical note

Publisher Copyright:
© 2024 Association for Computational Linguistics.
