Aligning brains into a shared space improves their alignment with large language models

Arnab Bhattacharjee*, Zaid Zada, Haocheng Wang, Bobbi Aubrey, Werner Doyle, Patricia Dugan, Daniel Friedman, Orrin Devinsky, Adeen Flinker, Peter J. Ramadge, Uri Hasson, Ariel Goldstein, Samuel A. Nastase

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

Recent research demonstrates that large language models can predict neural activity recorded via electrocorticography during natural language processing. To predict word-by-word neural activity, most prior work evaluates encoding models within individual electrodes and participants, limiting generalizability. Here we analyze electrocorticography data from eight participants listening to the same 30-min podcast. Using a shared response model, we estimate a common information space across participants. This shared space substantially enhances large language model-based encoding performance and enables denoising of individual brain responses by projecting back into participant-specific electrode spaces—yielding a 37% average improvement in encoding accuracy (from r = 0.188 to r = 0.257). The greatest gains occur in brain areas specialized for language comprehension, particularly the superior temporal gyrus and inferior frontal gyrus. Our findings highlight that estimating a shared space allows us to construct encoding models that better generalize across individuals.
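The shared-space approach described in the abstract can be illustrated with a minimal shared response model (SRM): each participant's electrode-by-time data is mapped through an orthonormal transform into a common low-dimensional space, and the shared response can be projected back into a participant's electrode space to denoise their recording. The sketch below is an assumption-laden illustration, not the authors' implementation; the function names, the alternating-least-squares updates, and the synthetic dimensions are all hypothetical.

```python
import numpy as np

def srm_fit(datasets, k=10, n_iter=10, seed=0):
    """Fit a basic shared response model by alternating least squares.

    datasets: list of (electrodes_i x time) arrays sharing the same time axis.
    Returns per-subject orthonormal maps W_i (electrodes_i x k) and the
    shared response S (k x time) minimizing sum_i ||X_i - W_i S||^2.
    """
    rng = np.random.default_rng(seed)
    # Initialize each W_i with random orthonormal columns via QR.
    ws = [np.linalg.qr(rng.standard_normal((x.shape[0], k)))[0]
          for x in datasets]
    for _ in range(n_iter):
        # Shared response: average of each subject's projected data.
        s = np.mean([w.T @ x for w, x in zip(ws, datasets)], axis=0)
        # Orthogonal Procrustes update of each subject's map given S.
        ws = []
        for x in datasets:
            u, _, vt = np.linalg.svd(x @ s.T, full_matrices=False)
            ws.append(u @ vt)
    # Recompute the shared response with the final maps.
    s = np.mean([w.T @ x for w, x in zip(ws, datasets)], axis=0)
    return ws, s

def denoise(w_i, s):
    """Project the shared response back into subject i's electrode space."""
    return w_i @ s
```

On synthetic data built as orthonormal maps applied to a common signal plus noise, the back-projected reconstruction `denoise(ws[i], s)` correlates strongly with each subject's clean signal, which is the intuition behind the denoising gain the abstract reports.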

Original language: English
Journal: Nature Computational Science
DOIs
State: Accepted/In press - 2025

Bibliographical note

Publisher Copyright:
© The Author(s), under exclusive licence to Springer Nature America, Inc. 2025.

