ProteinBERT: a universal deep-learning model of protein sequence and function

Nadav Brandes*, Dan Ofer, Yam Peleg, Nadav Rappoport, Michal Linial

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

220 Scopus citations

Abstract

Summary: Self-supervised deep language modeling has shown unprecedented success across natural language tasks, and has recently been repurposed for biological sequences. However, existing models and pretraining methods are designed and optimized for text analysis. We introduce ProteinBERT, a deep language model specifically designed for proteins. Our pretraining scheme combines language modeling with a novel task of Gene Ontology (GO) annotation prediction. We introduce novel architectural elements that make the model highly efficient and flexible, even for long sequences. The architecture of ProteinBERT combines local (per-residue) and global (whole-protein) representations, allowing end-to-end processing of both types of inputs and outputs. ProteinBERT obtains near state-of-the-art performance, and sometimes exceeds it, on multiple benchmarks covering diverse protein properties (including protein structure, post-translational modifications and biophysical attributes), despite using a far smaller and faster model than competing deep-learning methods. Overall, ProteinBERT provides an efficient framework for rapidly training protein predictors, even with limited labeled data.
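
To make the dual-track idea in the abstract concrete, the sketch below shows a toy Keras model with a local (per-residue) track and a global (per-protein) track, trained jointly on sequence recovery and multi-label GO annotation prediction. It is a conceptual illustration only: the layer choices, sizes, names and toy data are assumptions for demonstration, not the published ProteinBERT implementation.

# Conceptual sketch (not the authors' implementation) of a two-track model:
# a local track over the amino-acid sequence and a global track over GO
# annotations, with information exchanged between them and a joint loss.
# All sizes and the toy data below are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB = 26                    # assumed token vocabulary (amino acids + specials)
N_GO = 100                    # assumed number of GO annotation labels
SEQ_LEN = 512                 # assumed maximum sequence length
D_LOCAL, D_GLOBAL = 128, 512  # assumed hidden widths of the two tracks

# Local track: token embeddings processed with a 1D convolution.
seq_in = layers.Input(shape=(SEQ_LEN,), dtype="int32", name="sequence")
x = layers.Embedding(VOCAB, D_LOCAL)(seq_in)
x = layers.Conv1D(D_LOCAL, 9, padding="same", activation="gelu")(x)

# Global track: a dense representation of the input GO annotations.
go_in = layers.Input(shape=(N_GO,), dtype="float32", name="go_annotations")
g = layers.Dense(D_GLOBAL, activation="gelu")(go_in)

# Exchange information between tracks: broadcast global -> local,
# pool local -> global (a stand-in for the paper's mixing layers).
x = layers.Add()([x, layers.RepeatVector(SEQ_LEN)(layers.Dense(D_LOCAL)(g))])
g = layers.Add()([g, layers.Dense(D_GLOBAL)(layers.GlobalAveragePooling1D()(x))])

# Two heads: per-residue token recovery and per-protein GO prediction.
seq_out = layers.Dense(VOCAB, activation="softmax", name="seq_out")(x)
go_out = layers.Dense(N_GO, activation="sigmoid", name="go_out")(g)

model = Model([seq_in, go_in], [seq_out, go_out])
model.compile(
    optimizer="adam",
    loss={"seq_out": "sparse_categorical_crossentropy",
          "go_out": "binary_crossentropy"},
)

# Toy batch, only to show the expected input/output shapes.
seqs = np.random.randint(0, VOCAB, size=(2, SEQ_LEN))
gos = np.random.randint(0, 2, size=(2, N_GO)).astype("float32")
model.fit([seqs, gos], [seqs, gos], epochs=1, verbose=0)

The joint loss couples the two tracks, so the per-residue and per-protein representations are learned together end to end, which is the property the abstract highlights.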

Original language: English
Article number: 8
Pages (from-to): 2102-2110
Number of pages: 9
Journal: Bioinformatics
Volume: 38
Issue number: 8
DOIs
State: Published - 15 Apr 2022

Bibliographical note

Publisher Copyright:
© The Author(s) 2022. Published by Oxford University Press. All rights reserved.
