Yoshifusa Ito
1-3 of 3 results
Journal Articles
Simultaneous Approximations of Polynomials and Derivatives and Their Applications to Neural Networks
Neural Computation (2008) 20 (11): 2757–2791.
Published: 01 November 2008
Abstract
We have constructed one-hidden-layer neural networks capable of approximating polynomials and their derivatives simultaneously. Generally, optimizing the neural network parameters trained at later steps of backpropagation (BP) training is more difficult than optimizing those trained at the first step. Taking this fact into account, we reduce the number of parameters of the former type. We measure the degree of approximation both in the uniform norm on compact sets and in the L^p-norm on the whole space with respect to probability measures.
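The paper's construction is not reproduced here, but a minimal sketch below illustrates the setting: a one-hidden-layer tanh network whose hidden weights and biases are fixed at random (only the output weights are solved for by least squares, a hypothetical choice that loosely echoes the idea of limiting the harder-to-train parameters), fit to a polynomial and then checked against the polynomial's derivative as well.

```python
# Illustrative sketch only, not the construction from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 50
a = rng.normal(scale=2.0, size=n_hidden)   # hidden weights, fixed at random
b = rng.uniform(-1.0, 1.0, size=n_hidden)  # hidden biases, fixed at random

x = np.linspace(-1.0, 1.0, 200)
p = x**3 - x                               # target polynomial p(x)
dp = 3 * x**2 - 1                          # its derivative p'(x)

H = np.tanh(np.outer(x, a) + b)            # hidden activations, shape (200, 50)
c, *_ = np.linalg.lstsq(H, p, rcond=None)  # only the output weights are fitted

net = H @ c                                # network output
dnet = ((1 - H**2) * a) @ c                # exact derivative of the network

print("max |p - net|  :", np.max(np.abs(p - net)))
print("max |p' - net'|:", np.max(np.abs(dp - dnet)))
```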
Journal Articles
Activation Functions Defined on Higher-Dimensional Spaces for Approximation on Compact Sets with and without Scaling
Neural Computation (2003) 15 (9): 2199–2226.
Published: 01 September 2003
Abstract
Let g be a slowly increasing function of locally bounded variation defined on R^c, 1 ≤ c ≤ d. We investigate when g can serve as the activation function of the hidden-layer units of three-layer neural networks that approximate continuous functions on compact sets. If the support of the Fourier transform of g includes a converging sequence of points with distinct distances from the origin, g can be an activation function without scaling. It can be an activation function with scaling if and only if the support of its Fourier transform includes a point other than the origin. We also look for a condition under which an activation function can be used for approximation without rotation. Any nonpolynomial function can be an activation function with scaling, and many familiar functions, such as sigmoid functions and radial basis functions, can be activation functions without scaling. With or without scaling, some of them defined on R^d can be used without rotation even if they are not spherically symmetric.
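As a loose illustration of "without scaling," the sketch below fits a target function on a compact interval using only translates of one fixed Gaussian bump (a radial basis function, one of the familiar examples mentioned in the abstract); no dilation parameter is adjusted. The grid of centers and the least-squares fit are assumptions made for the demonstration, not the paper's Fourier-transform criterion.

```python
# Hedged sketch: approximation on a compact set using translates only (no scaling).
import numpy as np

def g(x):                                  # fixed activation, never rescaled
    return np.exp(-x**2)

centers = np.linspace(-3.0, 3.0, 40)       # shifts of g only
x = np.linspace(-1.0, 1.0, 300)            # compact set [-1, 1]
target = np.sin(3 * x)                     # continuous target function

Phi = g(x[:, None] - centers[None, :])     # design matrix of shifted copies of g
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)

print("sup-norm error on [-1, 1]:", np.max(np.abs(Phi @ w - target)))
```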
Journal Articles
Approximation Capability of Layered Neural Networks with Sigmoid Units on Two Layers
Neural Computation (1994) 6 (6): 1233–1243.
Published: 01 November 1994
Abstract
Using only an elementary constructive method, we prove the universal approximation capability of three-layered feedforward neural networks that have sigmoid units on two layers. We regard the Heaviside function as a special case of a sigmoid function and measure the accuracy of approximation in either the supremum norm or the L^p-norm. Given a continuous function defined on a unit hypercube and the required accuracy of approximation, we can estimate the number of units needed on the respective sigmoid-unit layers. In the case where the sigmoid function is the Heaviside function, our result improves the estimate of Kůrková (1992). If the accuracy of approximation is measured in the L^p-norm, our estimate also improves that of Kůrková (1992), even when the sigmoid function is not the Heaviside function.
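A hedged one-dimensional sketch of the elementary constructive idea: a staircase of shifted Heaviside units approximates a continuous function on [0, 1] in the supremum norm, with the error shrinking as the number of units grows. The paper's result is multivariate and uses sigmoid units on two layers; this sketch only conveys the flavor of such a construction, with the knot grid and target function chosen arbitrarily.

```python
# 1-D staircase approximation with Heaviside units (illustration only).
import numpy as np

def heaviside(x):
    return (x >= 0).astype(float)

def f(x):                                   # continuous target on [0, 1]
    return np.sin(2 * np.pi * x)

n_units = 100                               # more units -> smaller error
knots = np.linspace(0.0, 1.0, n_units + 1)

x = np.linspace(0.0, 1.0, 1000)
approx = np.full_like(x, f(knots[0]))
for k in range(1, len(knots)):
    # each Heaviside unit contributes the jump needed at the next knot
    approx += (f(knots[k]) - f(knots[k - 1])) * heaviside(x - knots[k])

print("sup-norm error:", np.max(np.abs(approx - f(x))))
```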