TY - JOUR
T1 - Neural Network Approximation of Refinable Functions
AU - Daubechies, Ingrid
AU - DeVore, Ronald
AU - Dym, Nadav
AU - Faigenbaum-Golovin, Shira
AU - Kovalsky, Shahar Z.
AU - Lin, Kung-Ching
AU - Park, Josiah
AU - Petrova, Guergana
AU - Sober, Barak
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2023/1/1
Y1 - 2023/1/1
N2 - In the desire to quantify the success of neural networks in deep learning and other applications, there is great interest in understanding which functions are efficiently approximated by the outputs of neural networks. By now, a variety of results show that a wide range of functions can be approximated, sometimes with surprising accuracy, by these outputs. For example, it is known that the set of functions that can be approximated with exponential accuracy (in terms of the number of parameters used) includes, on the one hand, very smooth functions such as polynomials and analytic functions and, on the other hand, very rough functions such as the Weierstrass function, which is nowhere differentiable. In this paper, we add to the latter class of rough functions by showing that it also includes refinable functions. Namely, we show that refinable functions are approximated by the outputs of deep ReLU neural networks with fixed width and increasing depth, with accuracy exponential in the number of parameters. Our results apply to functions used in the standard construction of wavelets as well as to functions constructed via subdivision algorithms in Computer Aided Geometric Design.
AB - In the desire to quantify the success of neural networks in deep learning and other applications, there is great interest in understanding which functions are efficiently approximated by the outputs of neural networks. By now, a variety of results show that a wide range of functions can be approximated, sometimes with surprising accuracy, by these outputs. For example, it is known that the set of functions that can be approximated with exponential accuracy (in terms of the number of parameters used) includes, on the one hand, very smooth functions such as polynomials and analytic functions and, on the other hand, very rough functions such as the Weierstrass function, which is nowhere differentiable. In this paper, we add to the latter class of rough functions by showing that it also includes refinable functions. Namely, we show that refinable functions are approximated by the outputs of deep ReLU neural networks with fixed width and increasing depth, with accuracy exponential in the number of parameters. Our results apply to functions used in the standard construction of wavelets as well as to functions constructed via subdivision algorithms in Computer Aided Geometric Design.
KW - Neural networks
KW - cascade algorithm
KW - exponential accuracy
KW - neural network approximation
KW - refinable functions
UR - http://www.scopus.com/inward/record.url?scp=85136896642&partnerID=8YFLogxK
U2 - 10.1109/TIT.2022.3199601
DO - 10.1109/TIT.2022.3199601
M3 - Article
AN - SCOPUS:85136896642
SN - 0018-9448
VL - 69
SP - 482
EP - 495
JO - IEEE Transactions on Information Theory
JF - IEEE Transactions on Information Theory
IS - 1
M1 - 1
ER -