Robert C. Williamson
Neural Computation (2001) 13 (7): 1443–1471.
Published: 01 July 2001
Abstract
Suppose you are given some data set drawn from an underlying probability distribution P and you want to estimate a “simple” subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified value between 0 and 1. We propose a method to approach this problem by trying to estimate a function f that is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. The expansion coefficients are found by solving a quadratic programming problem, which we do by carrying out sequential optimization over pairs of input patterns. We also provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabeled data.
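As an illustration of ours (not part of the article), the single-class estimator described in this abstract corresponds to what scikit-learn exposes as OneClassSVM; the sketch below assumes scikit-learn and NumPy are available, and all variable names and parameter values are hypothetical.

import numpy as np
from sklearn.svm import OneClassSVM

# Toy data standing in for samples drawn from the unknown distribution P.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))
X_test = rng.normal(size=(50, 2))

# nu plays the role of the a priori specified fraction of points allowed to
# fall outside the estimated region S; the Gaussian kernel provides the kernel
# expansion for f, which is positive on S and negative on its complement.
clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1)
clf.fit(X_train)

labels = clf.predict(X_test)            # +1 for points judged inside S, -1 otherwise
scores = clf.decision_function(X_test)  # signed values of the learned f
print("fraction of test points outside S:", float(np.mean(labels == -1)))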
Neural Computation (2000) 12 (5): 1207–1245.
Published: 01 May 2000
Abstract
We propose a new class of support vector algorithms for regression and classification. In these algorithms, a parameter ν lets one effectively control the number of support vectors. While this can be useful in its own right, the parameterization has the additional benefit of enabling us to eliminate one of the other free parameters of the algorithm: the accuracy parameter ε in the regression case, and the regularization constant C in the classification case. We describe the algorithms, give some theoretical results concerning the meaning and the choice of ν, and report experimental results.
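As an illustration of ours (not the article's implementation), ν-parameterized support vector machines are available in scikit-learn as NuSVC and NuSVR; the sketch below uses toy data and hypothetical parameter values.

import numpy as np
from sklearn.svm import NuSVC, NuSVR

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y_class = (X[:, 0] + X[:, 1] > 0).astype(int)          # toy binary labels
y_reg = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)   # toy regression targets

# In the classification variant, nu replaces the regularization constant C;
# in the regression variant, it replaces the accuracy parameter epsilon.
clf = NuSVC(nu=0.2, kernel="rbf", gamma=1.0).fit(X, y_class)
reg = NuSVR(nu=0.5, C=1.0, kernel="rbf", gamma=1.0).fit(X, y_reg)

# nu bounds the fraction of margin errors from above and the fraction of
# support vectors from below, which is how it "controls the number of
# support vectors" described in the abstract.
print("support vector fraction (NuSVC):", len(clf.support_) / len(X))
print("support vector fraction (NuSVR):", len(reg.support_) / len(X))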
Neural Computation (1997) 9 (4): 765–769.
Published: 15 May 1997
Abstract
The earlier article (the 1995 article listed below, Neural Computation 7 (5): 1040–1053) gives lower bounds on the VC dimension of various smoothly parameterized function classes. The results were proved by showing a relationship between the uniqueness of decision boundaries and the VC dimension of smoothly parameterized function classes. That proof is incorrect; there is no such relationship under the conditions stated in the article. For the case of neural networks with tanh activation functions, we give an alternative proof of a lower bound on the VC dimension that is proportional to the number of parameters and holds even when the magnitudes of the parameters are restricted to be arbitrarily small.
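In symbols (our notation, a sketch of the stated result rather than the article's exact theorem), for thresholded tanh networks f_theta with W adjustable parameters the corrected lower bound reads:

\[
\operatorname{VCdim}\bigl(\{\, x \mapsto \operatorname{sgn} f_\theta(x) \;:\; \|\theta\|_\infty \le \epsilon \,\}\bigr) \;=\; \Omega(W)
\qquad \text{for every fixed } \epsilon > 0 .
\]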
Neural Computation (1996) 8 (3): 625–628.
Published: 01 April 1996
Abstract
We give upper bounds on the Vapnik-Chervonenkis dimension and pseudodimension of two-layer neural networks that use the standard sigmoid function or radial basis function and have inputs from {−D, …, D}^n. In Valiant's probably approximately correct (pac) learning framework for pattern classification, and in Haussler's generalization of this framework to nonlinear regression, the results imply that the number of training examples necessary for satisfactory learning performance grows no more rapidly than W log(WD), where W is the number of weights. The previous best bound for these networks was O(W^4).
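Purely to illustrate the growth rates (constants and the exact sample-complexity expressions of the article are omitted), the sketch below compares W log(WD) with the earlier W^4 rate for a few hypothetical values of W and D:

import math

# Hypothetical numbers of weights W and input ranges D, chosen only to show
# how the two bounds scale; they do not come from the article.
for W in (10, 100, 1000):
    for D in (10, 255):
        new_rate = W * math.log(W * D)
        old_rate = W ** 4
        print(f"W={W:5d}  D={D:4d}  W*log(WD)={new_rate:12.1f}  W^4={old_rate:15d}")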
Neural Computation (1995) 7 (5): 1040–1053.
Published: 01 September 1995
Abstract
We examine the relationship between the VC dimension and the number of parameters of a threshold smoothly parameterized function class. We show that the VC dimension of such a function class is at least k if there exists a k-dimensional differentiable manifold in the parameter space such that each member of the manifold corresponds to a different decision boundary. Using this result, we are able to obtain lower bounds on the VC dimension proportional to the number of parameters for several thresholded function classes including two-layer neural networks with certain smooth activation functions and radial basis functions with a gaussian basis. These lower bounds hold even if the magnitudes of the parameters are restricted to be arbitrarily small. In Valiant's probably approximately correct learning framework, this implies that the number of examples necessary for learning these function classes is at least linear in the number of parameters.
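Stated schematically (our notation; note that the 1997 correction listed above shows the proof of this relationship to be flawed under the conditions stated here), the lower-bound condition for a smoothly parameterized class {f_theta : theta in Theta} is:

\[
M \subseteq \Theta \ \text{a $k$-dimensional differentiable manifold whose members all have distinct decision boundaries}
\;\Longrightarrow\;
\operatorname{VCdim}\bigl(\{\operatorname{sgn} f_\theta : \theta \in \Theta\}\bigr) \;\ge\; k .
\]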