Attend First, Consolidate Later: On the Importance of Attention in Different LLM Layers

Amit Ben Artzy, Roy Schwartz

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In decoder-based LLMs, the representation of a given token at a certain layer serves two purposes: as input to the attention mechanism of the current token, and as input to the attention mechanism of future tokens. In this work, we show that the importance of the latter role might be overestimated for some layers. To show this, we start by manipulating the representations of previous tokens, e.g., by replacing the hidden states at some layer k with random vectors (Fig. 1). Our experiments with four LLMs and four tasks show that this operation often leads to a small to negligible drop in performance. Importantly, this holds when the manipulation occurs in the top part of the model, i.e., when k is in the final 30–50% of the layers. In contrast, applying the same manipulation in earlier layers can lead to chance-level performance. We continue by switching the hidden states of certain tokens with the hidden states of other tokens from another prompt, e.g., replacing the word “Italy” with “France” in “What is the capital of Italy?”. We find that when this switch is applied in the top 1/3 of the model, the model ignores it (answering “Rome”). However, if we apply it earlier, the model conforms to the switch (answering “Paris”). Our results hint at a two-stage process in transformer-based LLMs: the first part gathers input from previous tokens, while the second mainly processes that information internally.
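The random-vector manipulation described above can be approximated with a standard forward hook. The sketch below is not the authors' released code; it assumes a Hugging Face GPT-2 checkpoint and a hypothetical intervention layer k, and it replaces the layer-k hidden states of all tokens except the last one with random noise of matching scale, so that later layers and future tokens attend to noise while the current token's own stream is preserved.

```python
# Minimal sketch (assumption: not the paper's actual code) of the
# "replace previous tokens' hidden states at layer k with random vectors"
# intervention, using a PyTorch forward hook on a GPT-2 block.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any decoder-only HF model works similarly
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

k = 8  # hypothetical layer at which to intervene

def randomize_previous_tokens(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states
    # of shape (batch, seq_len, hidden_dim).
    hidden = output[0]
    noisy = hidden.clone()
    # Replace every position except the last with random vectors scaled to
    # the hidden states' standard deviation; the last (current) token is
    # left intact so the model can still produce an answer. During cached
    # decoding seq_len is 1, so this slice is empty and nothing changes.
    prev = noisy[:, :-1, :]
    noisy[:, :-1, :] = torch.randn_like(prev) * prev.std()
    return (noisy,) + output[1:]

handle = model.transformer.h[k].register_forward_hook(randomize_previous_tokens)

ids = tok("What is the capital of Italy?", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=5, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0]))
handle.remove()
```

In the paper's terms, running this with a late k (final 30–50% of layers) should leave the answer largely intact, while an early k should degrade performance toward chance.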

Original language: English
Title of host publication: BlackboxNLP 2024 - 7th BlackboxNLP Workshop
Subtitle of host publication: Analyzing and Interpreting Neural Networks for NLP - Proceedings of the Workshop
Editors: Yonatan Belinkov, Najoung Kim, Jaap Jumelet, Hosein Mohebbi, Aaron Mueller, Hanjie Chen
Publisher: Association for Computational Linguistics (ACL)
Pages: 177-184
Number of pages: 8
ISBN (Electronic): 9798891761704
State: Published - 2024
Event: 7th BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP 2024 - Miami, United States
Duration: 15 Nov 2024 → …

Publication series

Name: BlackboxNLP 2024 - 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP - Proceedings of the Workshop

Conference

Conference: 7th BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP 2024
Country/Territory: United States
City: Miami
Period: 15/11/24 → …

Bibliographical note

Publisher Copyright:
©2024 Association for Computational Linguistics.
