H. N. Mhaskar
Journal Articles
Neural Computation (1997) 9 (1): 143–159.
Published: 01 January 1997
Neural Networks for Functional Approximation and System Identification
Abstract
We construct generalized translation networks to approximate uniformly a class of nonlinear, continuous functionals defined on L^p([-1, 1]^s) for integer s ≥ 1, 1 ≤ p < ∞, or on C([-1, 1]^s). We obtain lower bounds on the possible order of approximation for such functionals in terms of any approximation process depending continuously on a given number of parameters. Our networks almost achieve this order of approximation in terms of the number of parameters (neurons) involved in the network. The training is simple and noniterative; in particular, we avoid any optimization such as that involved in the usual backpropagation.
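A functional in this sense maps an entire input function to a number, for example F(f) = ∫ f(t)² dt on [-1, 1]. The sketch below illustrates the general idea in Python: represent each input function by its values at finitely many fixed points, pass that vector through a fixed hidden layer, and obtain the output coefficients in a single linear least-squares solve rather than by iterative training. Every concrete choice here (the tanh activation, the random trigonometric training inputs, the toy functional F) is an illustrative assumption; this is not the paper's generalized translation network construction.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 16                                # samples representing each input function
grid = np.linspace(-1.0, 1.0, m)      # fixed sampling points in [-1, 1]

def F(f_vals):
    # Toy target functional: trapezoidal estimate of the integral of f(t)^2.
    return np.sum((f_vals[:-1] ** 2 + f_vals[1:] ** 2) / 2 * np.diff(grid))

# Fixed hidden layer acting on the sample vector of f.
n = 200
W = rng.normal(size=(m, n))           # fixed input weights
b = rng.normal(size=n)                # fixed thresholds
phi = lambda X: np.tanh(X @ W + b)    # hidden responses, one row per function

# Training inputs: random trigonometric functions, reduced to sample vectors.
def random_input():
    a = rng.normal(size=3)
    return a[0] + a[1] * np.sin(np.pi * grid) + a[2] * np.cos(np.pi * grid)

X = np.stack([random_input() for _ in range(500)])
y = np.array([F(row) for row in X])

# Output coefficients from one linear least-squares solve -- no backpropagation.
c, *_ = np.linalg.lstsq(phi(X), y, rcond=None)

# Evaluate the fitted network on a fresh input function.
f_new = random_input()
print("network:", float(phi(f_new[None, :]) @ c), "true:", F(f_new))
```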
Journal Articles
Neural Computation (1996) 8 (1): 164–177.
Published: 01 January 1996
Neural Networks for Optimal Approximation of Smooth and Analytic Functions
Abstract
We prove that neural networks with a single hidden layer are capable of providing an optimal order of approximation for functions assumed to possess a given number of derivatives, if the activation function evaluated by each principal element satisfies certain technical conditions. Under these conditions, it is also possible to construct networks that provide a geometric order of approximation for analytic target functions. The permissible activation functions include the squashing function (1 + e^{-x})^{-1} as well as a variety of radial basis functions. Our proofs are constructive. The weights and thresholds of our networks are chosen independently of the target function; we give explicit formulas for the coefficients as simple, continuous, linear functionals of the target function.
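The constructive point in this abstract is that once the hidden-layer weights and thresholds are fixed in advance, each output coefficient depends linearly on samples of the target function, so "training" reduces to a single linear solve. Below is a minimal Python sketch of that flavor, assuming a fixed squashing hidden layer and a least-squares fit at fixed nodes; the paper gives explicit formulas for the coefficients instead, so this illustrates noniterative fitting, not the paper's actual construction.

```python
import numpy as np

# Fixed hidden layer: weights and thresholds chosen once,
# independently of any target function.
rng = np.random.default_rng(0)
n = 40                                  # number of hidden neurons
nodes = np.linspace(-1.0, 1.0, n)       # fixed sample points in [-1, 1]
W = rng.normal(scale=3.0, size=n)       # fixed input weights
b = rng.normal(size=n)                  # fixed thresholds

def sigma(t):
    # Squashing activation (1 + e^{-t})^{-1}.
    return 1.0 / (1.0 + np.exp(-t))

# Fixed design matrix: A[i, j] = sigma(W[j] * nodes[i] + b[j]).
A = sigma(np.outer(nodes, W) + b)

def fit(f):
    # Each coefficient c_j is a linear functional of the target's samples:
    # one least-squares solve, no iterative optimization.
    c, *_ = np.linalg.lstsq(A, f(nodes), rcond=None)
    return c

def network(c, x):
    return sigma(np.outer(x, W) + b) @ c

# Example: approximate a smooth target on [-1, 1].
f = lambda x: np.exp(x) * np.sin(3 * x)
c = fit(f)
x = np.linspace(-1, 1, 200)
print("max error:", np.max(np.abs(network(c, x) - f(x))))
```

Because the design matrix A is fixed, the map from the target's samples f(nodes) to the coefficient vector c is itself a fixed linear map, which is exactly the "coefficients as continuous linear functionals of the target" property the abstract emphasizes.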