TY - JOUR
T1 - Quantum Entanglement in Deep Learning Architectures
AU - Levine, Yoav
AU - Sharir, Or
AU - Cohen, Nadav
AU - Shashua, Amnon
N1 - Publisher Copyright:
© 2019 American Physical Society.
PY - 2019/2/12
Y1 - 2019/2/12
AB - Modern deep learning has enabled unprecedented achievements in various domains. Nonetheless, the use of machine learning for wave function representations remains focused on more traditional architectures such as restricted Boltzmann machines (RBMs) and fully connected neural networks. In this Letter, we establish that contemporary deep learning architectures, in the form of deep convolutional and recurrent networks, can efficiently represent highly entangled quantum systems. By constructing tensor network equivalents of these architectures, we identify an inherent reuse of information in the network operation as a key trait that distinguishes them from standard tensor network-based representations and enhances their entanglement capacity. Our results show that such architectures can support volume-law entanglement scaling, polynomially more efficiently than presently employed RBMs. Thus, beyond quantifying the entanglement capacity of leading deep learning architectures, our analysis formally motivates a shift of trending neural-network-based wave function representations closer to the state of the art in machine learning.
UR - http://www.scopus.com/inward/record.url?scp=85061535703&partnerID=8YFLogxK
DO - 10.1103/PhysRevLett.122.065301
M3 - Article
C2 - 30822082
AN - SCOPUS:85061535703
SN - 0031-9007
VL - 122
JO - Physical Review Letters
JF - Physical Review Letters
IS - 6
M1 - 065301
ER -