Changcun Huang
Journal Articles
Publisher: Journals Gateway
Neural Computation (2023) 35 (9): 1566–1592.
Published: 07 August 2023
Abstract
This letter first constructs a typical solution of ResNets for multicategory classification based on the gate-control idea of LSTMs, from which a general interpretation of the ResNet architecture is given and its performance mechanism is explained. Further solutions are used to demonstrate the generality of that interpretation. The classification result is then extended to the universal-approximation capability of ResNets with two-layer gate networks, an architecture that was proposed in the original ResNet paper and has both theoretical and practical significance.
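The gate-control reading of a residual block can be glossed informally: a block computes y = x + F(x), i.e., a gated update in which the "carry" gate on the identity path is fixed open, analogous to an LSTM cell that always passes its state through. A minimal sketch of such a block with a two-layer residual branch (weights, sizes, and names are illustrative, not the paper's construction):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(0)
d = 8  # feature dimension (illustrative)

# Two-layer residual branch F(x) = W2 @ relu(W1 @ x),
# as in a basic ResNet block; bias terms omitted for brevity.
W1 = rng.normal(scale=0.1, size=(d, d))
W2 = rng.normal(scale=0.1, size=(d, d))

def resnet_block(x):
    # Identity skip connection plus residual branch:
    # the "gate" on x is fixed to the identity.
    return x + W2 @ relu(W1 @ x)

x = rng.normal(size=d)
y = resnet_block(x)
```

Note that if the residual branch outputs zero (e.g., `W2` is all zeros), the block reduces exactly to the identity map, which is the degenerate "gate fully open, branch closed" case.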
Neural Computation (2020) 32 (11): 2249–2278.
Published: 01 November 2020
Abstract
This letter proves that a ReLU network can approximate any continuous function to arbitrary precision by means of piecewise linear or constant approximations. For a univariate function f(x), we use the composition of ReLUs to produce a line segment; all of the line-segment subnetworks together comprise a ReLU network that is a piecewise linear approximation to f(x). For a multivariate function f(x), ReLU networks are constructed to approximate a piecewise linear function derived from triangulation methods approximating f(x). A neural unit called TRLU is designed from a ReLU network; piecewise constant approximation, such as by Haar wavelets, is implemented by rectifying the linear output of a ReLU network via TRLUs. New interpretations of deep layers, as well as some other results, are also presented.
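The univariate case can be illustrated with the standard hinge construction: a one-hidden-layer ReLU network whose hidden units sit at the knots of a grid reproduces the piecewise linear interpolant of f exactly. A minimal sketch (function names and the choice of equally spaced knots are illustrative, not the letter's specific construction):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_piecewise_linear(f, a, b, n_knots):
    """One-hidden-layer ReLU approximation of f on [a, b].

    Returns x -> f(a) + sum_i w_i * relu(x - x_i), the piecewise
    linear interpolant of f at n_knots equally spaced knots.
    """
    xs = np.linspace(a, b, n_knots)
    ys = f(xs)
    slopes = np.diff(ys) / np.diff(xs)  # slope on each segment
    # Hinge weights: first segment's slope, then the slope change
    # at each interior knot.
    weights = np.concatenate([[slopes[0]], np.diff(slopes)])
    knots = xs[:-1]  # hinge locations

    def g(x):
        x = np.asarray(x, dtype=float)
        return ys[0] + relu(x[..., None] - knots) @ weights

    return g

# Example: approximate sin on [0, pi] with 50 knots.
g = relu_piecewise_linear(np.sin, 0.0, np.pi, 50)
grid = np.linspace(0.0, np.pi, 1000)
err = np.max(np.abs(g(grid) - np.sin(grid)))
```

Refining the grid shrinks the error at the usual O(h^2) interpolation rate, which is one concrete route to the arbitrary-precision claim for continuous (here, twice-differentiable) functions.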