RedEx: Beyond Fixed Representation Methods via Convex Optimization

Amit Daniely, Mariano Schain, Gilad Yehudai

Research output: Contribution to journal › Conference article › peer-review

Abstract

Optimizing neural networks is a difficult task that is still not well understood. On the other hand, fixed representation methods such as kernels and random features have provable optimization guarantees but inferior performance due to their inherent inability to learn the representations. In this paper, we aim to bridge this gap by presenting a novel architecture called RedEx (Reduced Expander Extractor) that is as expressive as neural networks and can also be trained in a layer-wise fashion via a convex program with semi-definite constraints and optimization guarantees. We also show that RedEx provably surpasses fixed representation methods, in the sense that it can efficiently learn a family of target functions which fixed representation methods cannot.
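
For intuition, the following is a minimal sketch of what one layer-wise training step via a convex program with a semi-definite constraint could look like, written with the cvxpy library. The quadratic-form objective, the trace regularizer, and the eigenvector-based extractor are illustrative assumptions only; they are not the paper's actual RedEx formulation.

# Sketch: fit a PSD matrix A so that x^T A x approximates the targets, via a
# convex program with a semi-definite constraint, then extract a lower-
# dimensional representation from A for the next layer. Hypothetical objective;
# not the RedEx program from the paper.
import numpy as np
import cvxpy as cp

def fit_psd_layer(X, y, reg=1e-2):
    """Solve min_A sum_i (x_i^T A x_i - y_i)^2 + reg * trace(A)  s.t.  A is PSD."""
    n, d = X.shape
    A = cp.Variable((d, d), PSD=True)  # semi-definite constraint
    preds = cp.hstack([cp.quad_form(X[i], A) for i in range(n)])  # affine in A
    objective = cp.Minimize(cp.sum_squares(preds - y) + reg * cp.trace(A))
    cp.Problem(objective).solve()
    return A.value

def extract_representation(X, A, k):
    """Hypothetical 'extractor': project inputs onto the top-k eigenvectors of A."""
    vals, vecs = np.linalg.eigh(A)
    top = vecs[:, np.argsort(vals)[::-1][:k]]
    return X @ top  # new features passed to the next layer

# Tiny usage example with synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
y = X[:, 0] ** 2 + 0.5 * X[:, 1] ** 2
A = fit_psd_layer(X, y)
Z = extract_representation(X, A, k=2)
print(Z.shape)  # (50, 2)

Because the per-layer problem is convex (the predictions are affine in the PSD variable A), each step can be solved to global optimality, which is the kind of layer-wise guarantee the abstract refers to.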

Original language: English
Pages (from-to): 518-543
Number of pages: 26
Journal: Proceedings of Machine Learning Research
Volume: 237
State: Published - 2024
Event: 35th International Conference on Algorithmic Learning Theory, ALT 2024 - La Jolla, United States
Duration: 25 Feb 2024 - 28 Feb 2024

Bibliographical note

Publisher Copyright:
© 2024 A. Daniely, M. Schain & G. Yehudai.
