Abstract
Utilizing recent advances in machine learning, we introduce a systematic approach to characterize neurons’ input/output (I/O) mapping complexity. Deep neural networks (DNNs) were trained to faithfully replicate the I/O function of various biophysical models of cortical neurons at millisecond (spiking) resolution. A temporally convolutional DNN with five to eight layers was required to capture the I/O mapping of a realistic model of a layer 5 cortical pyramidal cell (L5PC). This DNN generalized well when presented with inputs widely outside the training distribution. When NMDA receptors were removed, a much simpler network (fully connected neural network with one hidden layer) was sufficient to fit the model. Analysis of the DNNs’ weight matrices revealed that synaptic integration in dendritic branches could be conceptualized as pattern matching from a set of spatiotemporal templates. This study provides a unified characterization of the computational complexity of single neurons and suggests that cortical networks therefore have a unique architecture, potentially supporting their computational power.
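To make the described architecture concrete, below is a minimal, hypothetical sketch (not the authors' published code) of a temporally convolutional network that maps a binary synaptic-input raster (synapses × time, at 1 ms resolution) to a per-millisecond spike probability and a somatic-voltage estimate, in the spirit of the DNN described in the abstract. The synapse count, layer count, channel width, and kernel size are illustrative assumptions only.

```python
# Hypothetical sketch: a temporally convolutional DNN mapping synaptic input
# to spiking output at millisecond resolution. All hyperparameters are assumed.
import torch
import torch.nn as nn

class TemporalConvNeuronModel(nn.Module):
    def __init__(self, n_synapses=1278, n_channels=128, kernel_size=35, n_layers=7):
        super().__init__()
        layers = []
        in_ch = n_synapses
        for _ in range(n_layers):
            # Causal padding: the output at time t depends only on inputs up to t.
            layers += [
                nn.ConstantPad1d((kernel_size - 1, 0), 0.0),
                nn.Conv1d(in_ch, n_channels, kernel_size),
                nn.ReLU(),
            ]
            in_ch = n_channels
        self.backbone = nn.Sequential(*layers)
        self.spike_head = nn.Conv1d(n_channels, 1, 1)    # per-ms spike probability
        self.voltage_head = nn.Conv1d(n_channels, 1, 1)  # per-ms subthreshold voltage

    def forward(self, syn_input):
        # syn_input: (batch, n_synapses, time_ms) binary raster of exc./inh. inputs
        h = self.backbone(syn_input)
        return torch.sigmoid(self.spike_head(h)), self.voltage_head(h)

# Example usage on a 500 ms window of silent input:
# x = torch.zeros(1, 1278, 500)
# spike_prob, voltage = TemporalConvNeuronModel()(x)
```

Under this reading, the "much simpler network" sufficient for the NMDA-removed model would correspond to replacing the convolutional backbone with a single fully connected hidden layer over a short temporal window.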
| Original language | English |
| --- | --- |
| Pages (from-to) | 2727-2739.e3 |
| Journal | Neuron |
| Volume | 109 |
| Issue number | 17 |
| DOIs | |
| State | Published - 1 Sep 2021 |
Bibliographical note
Funding Information: We thank Oren Amsalem, Guy Eyal, Michael Doron, Toviah Moldwin, Yair Deitcher, Eyal Gal, and all lab members of the Segev and London Labs for many fruitful discussions and valuable feedback regarding this work. This work was supported by ONR grant N00014-19-1-2036, Israeli Science Foundation grant 1024/17 (to M.L.), and a grant from the Gatsby Charitable Foundation.
Publisher Copyright: © 2021 Elsevier Inc.
Keywords
- NMDA spike
- calcium spike
- compartmental model
- cortical pyramidal neuron
- deep learning
- dendritic computation
- dendritic nonlinearities
- machine learning
- neural coding
- synaptic integration