V. Vapnik (1-2 of 2 results)
Journal Articles
Neural Computation (2000) 12 (9): 2013–2036.
Published: 01 September 2000
Abstract
We introduce the concept of span of support vectors (SV) and show that the generalization ability of support vector machines (SVM) depends on this new geometrical concept. We prove that the value of the span is always smaller (and can be much smaller) than the diameter of the smallest sphere containing the support vectors, which was used in previous bounds (Vapnik, 1998). We also demonstrate experimentally that the prediction of the test error given by the span is very accurate and has a direct application in model selection (choice of the optimal parameters of the SVM).
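To make the span-based test-error prediction concrete, the sketch below (not taken from the paper itself) estimates the leave-one-out error of a nearly hard-margin RBF SVM via the closed form S_p^2 = 1 / (K~^{-1})_{pp}, which holds under the assumption that the set of support vectors is unchanged when one of them is left out. The dataset, the kernel widths tried, and the large C used to approximate a hard margin are all illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC


def span_loo_estimate(X, y, gamma=1.0, C=1e6):
    """Span-rule estimate of the leave-one-out error of a (nearly)
    hard-margin RBF SVM, assuming the support vector set does not
    change when a single support vector is left out."""
    clf = SVC(kernel="rbf", gamma=gamma, C=C).fit(X, y)
    sv = clf.support_                        # indices of the support vectors
    alpha = np.abs(clf.dual_coef_).ravel()   # alpha_p for each support vector

    # Bordered kernel matrix of the support vectors: [[K_sv, 1], [1^T, 0]].
    m = len(sv)
    K_tilde = np.zeros((m + 1, m + 1))
    K_tilde[:m, :m] = rbf_kernel(X[sv], X[sv], gamma=gamma)
    K_tilde[:m, m] = 1.0
    K_tilde[m, :m] = 1.0

    # Span of each support vector: S_p^2 = 1 / (K_tilde^{-1})_{pp}.
    span_sq = 1.0 / np.diag(np.linalg.inv(K_tilde))[:m]

    # A margin support vector p is counted as a leave-one-out error
    # when alpha_p * S_p^2 >= y_p f(x_p) = 1.
    return np.sum(alpha * span_sq >= 1.0) / len(y)


if __name__ == "__main__":
    X, y = make_blobs(n_samples=200, centers=2, random_state=0)
    for gamma in (0.01, 0.1, 1.0):
        est = span_loo_estimate(X, y, gamma=gamma)
        print(f"gamma={gamma}: span LOO estimate {est:.3f}")
```

For model selection, one would compute such an estimate over a grid of kernel parameters and keep the minimizer, which is the application the abstract describes.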
Journal Articles
Neural Computation (1993) 5 (6): 893–909.
Published: 01 November 1993
Abstract
In previous publications (Bottou and Vapnik, 1992; Vapnik, 1992) we described local learning algorithms, which result in performance improvements for real problems. We present here the theoretical framework on which these algorithms are based. First, we give a new statement of certain learning problems, namely local risk minimization. We review the basic results of the uniform convergence theory of learning and extend them to local risk minimization. We also extend the structural risk minimization principle to both pattern recognition and regression problems. This extended induction principle is the basis for a new class of algorithms.
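As a rough illustration of the local learning idea the abstract refers to (fitting a simple model in a neighborhood of each query point rather than one global model), here is a minimal sketch. The digits dataset, the neighborhood size k, and the ridge classifier are illustrative stand-ins, not the authors' original setup.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors


def local_predict(X_train, y_train, X_test, k=50):
    """For each test point, fit a simple linear classifier on its k
    nearest training neighbors only, then predict with that local model."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(X_test)
    preds = np.empty(len(X_test), dtype=y_train.dtype)
    for i, neigh in enumerate(idx):
        classes = np.unique(y_train[neigh])
        if len(classes) == 1:               # neighborhood is pure: no fit needed
            preds[i] = classes[0]
        else:
            local = RidgeClassifier(alpha=1.0).fit(X_train[neigh], y_train[neigh])
            preds[i] = local.predict(X_test[i:i + 1])[0]
    return preds


if __name__ == "__main__":
    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    acc = np.mean(local_predict(X_tr, y_tr, X_te, k=50) == y_te)
    print(f"local linear classifier accuracy: {acc:.3f}")
```

The neighborhood size plays the role of the locality parameter: small k gives very local, low-bias but high-variance models, while large k approaches a single global fit.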