Untrained neural networks can demonstrate memorization-independent abstract reasoning

Tomer Barak*, Yonatan Loewenstein

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

The nature of abstract reasoning is a matter of debate. Modern artificial neural network (ANN) models, such as large language models, demonstrate impressive success when tested on abstract reasoning problems. However, it has been argued that their success reflects some form of memorization of similar problems (data contamination) rather than a general-purpose abstract reasoning capability. This concern is supported by evidence of brittleness and by these models' requirement for extensive training. In our study, we explored whether abstract reasoning can be achieved using the toolbox of ANNs, without prior training. Specifically, we studied an ANN model in which the weights of a naive network are optimized during the solution of the problem itself, using only the problem data rather than any prior knowledge. We tested this modeling approach on visual reasoning problems and found that it performs relatively well. Crucially, this success does not rely on memorization of similar problems. We further suggest an explanation of how this approach works. Finally, as problem solving is performed by changing the ANN weights, we explored the connection between problem solving and the accumulation of knowledge in the ANNs.
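The abstract's central idea, optimizing a freshly initialized network on a single problem's own data rather than relying on pretraining, can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the authors' exact method: the tiny CNN architecture, the linear-consistency objective, the optimizer settings, and the candidate-scoring rule are placeholders chosen only to show the shape of test-time optimization on problem data.

```python
# Hedged sketch: test-time optimization of an UNTRAINED network on one
# visual-sequence problem. Architecture, objective, and scoring are
# illustrative assumptions, not the published method.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Small, randomly initialized CNN mapping a grayscale image to a scalar feature."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def solve_problem(context, candidates, steps=200, lr=1e-2):
    """context: (T, 1, H, W) images assumed to follow some rule;
    candidates: (C, 1, H, W) possible continuations.
    Weights are optimized on this one problem's data alone (no pretraining)."""
    model = TinyCNN()                                  # naive, untrained weights
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    t = torch.arange(context.shape[0], dtype=torch.float32)
    t_c = t - t.mean()
    for _ in range(steps):
        f = model(context)                             # scalar feature per context image
        # Toy self-supervised objective: features should vary linearly with
        # position in the sequence (fit a line, penalize the residual).
        # A real objective would also need to rule out trivial collapse.
        slope = (t_c * (f - f.mean())).sum() / (t_c * t_c).sum()
        loss = ((f - (f.mean() + slope * t_c)) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        f = model(context)
        slope = (t_c * (f - f.mean())).sum() / (t_c * t_c).sum()
        next_feat = f.mean() + slope * (t[-1] + 1 - t.mean())  # extrapolated feature
        scores = -(model(candidates) - next_feat).abs()
    return int(scores.argmax())                        # best-fitting candidate index

# Usage with random placeholder images (stand-ins for an actual problem):
context = torch.rand(5, 1, 32, 32)
candidates = torch.rand(4, 1, 32, 32)
print(solve_problem(context, candidates))
```

The design point the sketch tries to convey is that the only "learning" happens inside `solve_problem`: the network starts from random weights each time, and any ability to pick the correct continuation must come from fitting the problem's own images, so success cannot be attributed to memorization of similar problems.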

Original language: English
Article number: 27249
Journal: Scientific Reports
Volume: 14
Issue number: 1
DOIs
State: Published - Dec 2024

Bibliographical note

Publisher Copyright:
© The Author(s) 2024.
