Věra Kůrková: journal articles 1–4 of 4
Neural Computation (2009) 21 (10): 2970–2989.
Published: 01 October 2009
Abstract
Complexity of one-hidden-layer networks is studied using tools from nonlinear approximation and integration theory. For functions with suitable integral representations in the form of networks with infinitely many hidden units, upper bounds are derived on the speed of decrease of approximation error as the number of network units increases. These bounds are obtained for various norms using the framework of Bochner integration. Results are applied to perceptron networks.
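As a schematic sketch of the setting (not the paper's exact statement, and assuming the standard integral-representation framework), a network with infinitely many hidden units and its n-unit approximant can be written as

\[
f(x) = \int_{A} w(a)\,\phi(x,a)\,d\mu(a),
\qquad
f_n(x) = \sum_{i=1}^{n} w_i\,\phi(x,a_i),
\]

where \(\phi(x,a)\) is a hidden-unit function, for example a perceptron unit \(\phi(x,(v,b)) = \sigma(v\cdot x + b)\). Maurey-Jones-Barron-type arguments then give bounds of the form \(\|f - f_n\| \le c_f/\sqrt{n}\) in Hilbert-space norms, with \(c_f\) depending on a variational norm of \(f\); the paper's bounds, obtained via Bochner integration, cover further norms and need not take this exact form.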
Neural Computation (2008) 20 (1): 252–270.
Published: 01 January 2008
Abstract
Supervised learning of perceptron networks is investigated as an optimization problem. It is shown that both the theoretical and the empirical error functionals achieve minima over sets of functions computable by networks with a given number n of perceptrons. Upper bounds on rates of convergence of these minima with n increasing are derived. The bounds depend on a certain regularity of training data expressed in terms of variational norms of functions interpolating the data (in the case of the empirical error) and the regression function (in the case of the expected error). Dependence of this type of regularity on dimensionality and on magnitudes of partial derivatives is investigated. Conditions on the data, which guarantee that a good approximation of global minima of error functionals can be achieved using networks with a limited complexity, are derived. The conditions are in terms of oscillatory behavior of the data, measured by the product of a function of the number of variables d, which decreases exponentially fast, and the maximum of the magnitudes of the squares of the L1-norms of the iterated partial derivatives of order d of the regression function or of some function that interpolates the sample of the data. The results are illustrated by examples of data with small and high regularity constructed using Boolean functions and the gaussian function.
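For orientation, and in generic notation rather than the paper's, the empirical error functional over a sample \((x_j, y_j)_{j=1}^{m}\) and the set of functions computable by n perceptrons can be written as

\[
\mathcal{E}_m(f) = \frac{1}{m}\sum_{j=1}^{m} \big(f(x_j) - y_j\big)^2,
\qquad
\mathrm{span}_n(P_d) = \Big\{\, x \mapsto \sum_{i=1}^{n} w_i\,\sigma(v_i\cdot x + b_i) : w_i, b_i \in \mathbb{R},\ v_i \in \mathbb{R}^d \Big\},
\]

so the rates in the abstract describe how \(\min_{f \in \mathrm{span}_n(P_d)} \mathcal{E}_m(f)\) decreases as n grows, with the constants governed by variational norms of functions interpolating the data.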
Neural Computation (1994) 6 (3): 543–558.
Published: 01 May 1994
Abstract
For a feedforward perceptron-type architecture with a single hidden layer but with a quite general activation function, we characterize the relation between pairs of weight vectors determining networks with the same input-output function.
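As one familiar instance of such a relation (an illustration under the assumption of an odd activation such as tanh, not the paper's general characterization), permuting hidden units and flipping signs leaves the input-output function unchanged:

\[
\sum_{i=1}^{k} w_i\,\sigma(v_i\cdot x + b_i)
= \sum_{i=1}^{k} \varepsilon_i w_{\pi(i)}\,\sigma\big(\varepsilon_i (v_{\pi(i)}\cdot x + b_{\pi(i)})\big)
\]

for every permutation \(\pi\) and signs \(\varepsilon_i \in \{-1, +1\}\), since \(\sigma(-t) = -\sigma(t)\). The abstract's characterization concerns which pairs of weight vectors determine the same input-output function for a quite general activation.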
Neural Computation (1991) 3 (4): 617–622.
Published: 01 December 1991
Abstract
We show that Kolmogorov's theorem on representations of continuous functions of n variables by sums and superpositions of continuous functions of one variable is relevant in the context of neural networks. We give a version of this theorem with all of the one-variable functions approximated arbitrarily well by linear combinations of compositions of affine functions with some given sigmoidal function. We derive an upper estimate of the number of hidden units.
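For reference, the classical form of Kolmogorov's superposition theorem states that every continuous \(f\) on \([0,1]^n\) can be written exactly as

\[
f(x_1,\dots,x_n) = \sum_{q=0}^{2n} \Phi_q\!\Big(\sum_{p=1}^{n} \psi_{pq}(x_p)\Big)
\]

with continuous one-variable functions \(\Phi_q\) and \(\psi_{pq}\). In the version described in the abstract, these one-variable functions are approximated arbitrarily well by linear combinations of the given sigmoidal function composed with affine maps, so the exact identity becomes an approximate representation by a sigmoidal network.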