Shijun Zhang
Neural Computation (2021) 33 (4): 1005–1036.
Published: 26 March 2021
Abstract
A new network with super-approximation power is introduced. This network is built with Floor ($\lfloor x \rfloor$) or ReLU ($\max\{0, x\}$) activation function in each neuron; hence, we call such networks Floor-ReLU networks. For any hyperparameters $N \in \mathbb{N}^+$ and $L \in \mathbb{N}^+$, we show that Floor-ReLU networks with width $\max\{d, 5N + 13\}$ and depth $64dL + 3$ can uniformly approximate a Hölder function $f$ on $[0,1]^d$ with an approximation error $3\lambda d^{\alpha/2} N^{-\alpha\sqrt{L}}$, where $\alpha \in (0,1]$ and $\lambda$ are the Hölder order and constant, respectively. More generally, for an arbitrary continuous function $f$ on $[0,1]^d$ with a modulus of continuity $\omega_f(\cdot)$, the constructive approximation rate is $\omega_f(\sqrt{d}\, N^{-\sqrt{L}}) + 2\omega_f(\sqrt{d})\, N^{-\sqrt{L}}$. As a consequence, this new class of networks overcomes the curse of dimensionality in approximation power when the variation of $\omega_f(r)$ as $r \to 0$ is moderate (e.g., $\omega_f(r) \lesssim r^\alpha$ for Hölder continuous functions), since the major term to be considered in our approximation rate is essentially $\sqrt{d}$ times a function of $N$ and $L$ independent of $d$ within the modulus of continuity.
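To make the architecture concrete, the following is a minimal NumPy sketch of a network whose neurons each apply either the floor function or ReLU, with the width and depth set from the hyperparameters $N$ and $L$ as stated in the abstract. The function names, the per-neuron activation mask, and the random weights are purely illustrative assumptions; the paper's approximation guarantee relies on explicitly constructed weights, not on random initialization.

```python
import numpy as np

def floor_relu_layer(x, W, b, floor_mask):
    # Affine map followed by a per-neuron activation:
    # floor(z_i) where floor_mask[i] is True, max{0, z_i} otherwise.
    z = W @ x + b
    return np.where(floor_mask, np.floor(z), np.maximum(z, 0.0))

def floor_relu_network(x, layers):
    # Hidden layers apply the per-neuron activation; the final layer is affine.
    h = x
    for W, b, mask in layers[:-1]:
        h = floor_relu_layer(h, W, b, mask)
    W_out, b_out, _ = layers[-1]
    return W_out @ h + b_out

# Sizes taken from the abstract's statement, for input dimension d and
# hyperparameters N, L.
d, N, L = 2, 2, 1
width = max(d, 5 * N + 13)   # width  max{d, 5N + 13}
depth = 64 * d * L + 3       # depth  64dL + 3

# Randomly initialized layers, purely for illustration.
rng = np.random.default_rng(0)
dims = [d] + [width] * depth + [1]
layers = [
    (rng.standard_normal((dims[i + 1], dims[i])),
     rng.standard_normal(dims[i + 1]),
     rng.random(dims[i + 1]) < 0.5)      # each neuron picks floor or ReLU
    for i in range(len(dims) - 1)
]

x = rng.random(d)                        # a point in [0, 1]^d
print(floor_relu_network(x, layers))
```

The sketch only shows how such a network is evaluated; realizing the stated error bound requires the constructive choice of weights and activation assignments developed in the article.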