TY - JOUR
T1 - Probing Biological and Artificial Neural Networks with Task-dependent Neural Manifolds
AU - Kuoch, Michael
AU - Chou, Chi-Ning
AU - Parthasarathy, Nikhil
AU - Dapello, Joel
AU - DiCarlo, James J.
AU - Sompolinsky, Haim
AU - Chung, SueYeon
N1 - Publisher Copyright:
© 2024 Proceedings of Machine Learning Research
PY - 2024
Y1 - 2024
AB - Recently, growth in our understanding of the computations performed in both biological and artificial neural networks has largely been driven by either low-level mechanistic studies or global normative approaches. However, concrete methodologies for bridging the gap between these levels of abstraction remain elusive. In this work, we investigate the internal mechanisms of neural networks through the lens of neural population geometry, aiming to provide understanding at an intermediate level of abstraction, as a way to bridge that gap. Utilizing manifold capacity theory (MCT) from statistical physics and manifold alignment analysis (MAA) from high-dimensional statistics, we probe the underlying organization of task-dependent manifolds in deep neural networks and macaque neural recordings. Specifically, we quantitatively characterize how different learning objectives lead to differences in the organizational strategies of these models and demonstrate how these geometric analyses are connected to the decodability of task-relevant information. These analyses present a strong direction for bridging mechanistic and normative theories in neural networks through neural population geometry, potentially opening up many future research avenues in both machine learning and neuroscience.
UR - http://www.scopus.com/inward/record.url?scp=85183901216&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85183901216
SN - 2640-3498
VL - 234
SP - 395
EP - 418
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 1st Conference on Parsimony and Learning, CPAL 2024
Y2 - 3 January 2024 through 6 January 2024
ER -