Separation of scales and a thermodynamic description of feature learning in some CNNs

Inbar Seroussi*, Gadi Naveh, Zohar Ringel

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Deep neural networks (DNNs) are powerful tools for compressing and distilling information. Their scale and complexity, often involving billions of interdependent parameters, render direct microscopic analysis difficult. Under such circumstances, a common strategy is to identify slow variables that average out the erratic behavior of the fast microscopic variables. Here, we identify a similar separation of scales occurring in fully trained, finitely over-parameterized deep convolutional neural networks (CNNs) and fully connected networks (FCNs). Specifically, we show that DNN layers couple only through the second cumulant (kernels) of their activations and pre-activations. Moreover, the latter fluctuate in a nearly Gaussian manner. For infinite-width DNNs, these kernels are inert, while for finite ones they adapt to the data and yield a tractable data-aware Gaussian process. The resulting thermodynamic theory of deep learning yields accurate predictions in various settings. In addition, it provides new ways of analyzing and understanding DNNs in general.
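To make the central object concrete, the sketch below estimates the second cumulant (kernel) of a layer's pre-activations empirically and compares it with the inert infinite-width prediction. This is an illustrative NumPy example under standard NNGP-style weight scaling, not the paper's derivation; the architecture (a single linear layer), the width, and the ensemble size are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

d, width, n_nets = 5, 512, 2000
X = rng.standard_normal((3, d))  # three input points

# Ensemble of single-layer networks with NNGP scaling:
# pre-activations h_i(x) = sum_j W_ij x_j / sqrt(d), with W_ij ~ N(0, 1).
W = rng.standard_normal((n_nets, width, d))
H = np.einsum('nij,kj->nki', W, X) / np.sqrt(d)  # shape (n_nets, 3, width)

# Empirical second cumulant (kernel) of the pre-activations,
# averaged over the ensemble and the width index.
K_emp = np.einsum('nki,nli->kl', H, H) / (n_nets * width)

# Infinite-width (inert) kernel for this layer: K(x, x') = x . x' / d.
K_inf = X @ X.T / d

print(np.max(np.abs(K_emp - K_inf)))  # discrepancy shrinks as 1/sqrt(n_nets * width)
```

At finite width the per-network kernel fluctuates around `K_inf`; in the paper's setting, training couples these fluctuations to the data, which is what makes the finite-width kernels adaptive rather than inert.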

Original language: American English
Article number: 908
Journal: Nature Communications
Volume: 14
Issue number: 1
DOIs
State: Published - Dec 2023

Bibliographical note

Publisher Copyright:
© 2023, The Author(s).

