Shirish Shevade
1–3 of 3 journal articles
A Simple Label Switching Algorithm for Semisupervised Structural SVMs
Neural Computation (2015) 27 (10): 2183–2206.
Published: 01 October 2015
Abstract
In structured output learning, obtaining labeled data for real-world applications is usually costly, while unlabeled examples are available in abundance. Semisupervised structured classification deals with a small number of labeled examples and a large amount of unlabeled structured data. In this work, we consider semisupervised structural support vector machines with domain constraints. The optimization problem, which in general is not convex, contains the loss terms associated with the labeled and unlabeled examples, along with the domain constraints. We propose a simple optimization approach that alternates between solving a supervised learning problem and a constraint matching problem. Solving the constraint matching problem is difficult for structured prediction, and we propose an efficient and effective label switching method to solve it. The alternating optimization is carried out within a deterministic annealing framework, which helps in matching the constraints effectively and in avoiding poor local minima. The algorithm is simple and easy to implement, and it is suitable for any structured output learning problem where exact inference is available. Experiments on benchmark sequence labeling data sets and a natural language parsing data set show that the proposed approach, though simple, achieves generalization performance comparable to that of existing approaches.
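As a rough illustration of the alternating scheme described above, the following Python sketch shrinks the problem to plain binary classification: a class-balance requirement stands in for the paper's domain constraints, and ramping the weight on the unlabeled loss stands in for deterministic annealing. All function names and schedules here are illustrative assumptions, not the authors' algorithm for structured outputs.

```python
# Hypothetical sketch: alternating optimization with label switching,
# reduced to binary classification so it stays self-contained.
import numpy as np

rng = np.random.default_rng(0)

def hinge(w, X, y):
    # Per-example hinge loss of a linear scorer.
    return np.maximum(0.0, 1.0 - y * (X @ w))

def fit_supervised(Xl, yl, Xu, yu, u_weight, lam=0.1, lr=0.1, epochs=100):
    # Supervised subproblem: subgradient descent on the hinge loss over the
    # labeled set plus the currently imputed unlabeled labels.
    X = np.vstack([Xl, Xu])
    y = np.concatenate([yl, yu])
    c = np.concatenate([np.ones(len(yl)), u_weight * np.ones(len(yu))])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        active = y * (X @ w) < 1.0
        grad = lam * w - ((c * y)[active, None] * X[active]).sum(0) / len(y)
        w -= lr * grad
    return w

def switch_labels(w, Xu, yu):
    # Constraint matching by label switching: repeatedly flip one +1 and one
    # -1 label together (so class counts, our stand-in domain constraint, are
    # preserved) whenever the flip reduces the unlabeled hinge loss.
    while True:
        gain = hinge(w, Xu, yu) - hinge(w, Xu, -yu)  # loss saved by flipping
        gp = np.where(yu > 0, gain, -np.inf)
        gn = np.where(yu < 0, gain, -np.inf)
        i, j = int(np.argmax(gp)), int(np.argmax(gn))
        if gp[i] + gn[j] <= 1e-9:
            return yu
        yu[i], yu[j] = -yu[i], -yu[j]

# Toy data: two Gaussian blobs, 10 labeled and 100 unlabeled points.
Xl = np.vstack([rng.normal(-2, 1, (5, 2)), rng.normal(2, 1, (5, 2))])
yl = np.array([-1.0] * 5 + [1.0] * 5)
Xu = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
yu = np.array([-1.0] * 50 + [1.0] * 50)
rng.shuffle(yu)  # balanced but arbitrary initial imputation

# Outer loop: deterministic annealing is approximated here by slowly
# raising the weight of the unlabeled loss.
w = fit_supervised(Xl, yl, Xu, yu, u_weight=0.0)
for u_weight in (0.1, 0.3, 1.0):
    yu = switch_labels(w, Xu, yu)
    w = fit_supervised(Xl, yl, Xu, yu, u_weight)

truth = np.array([-1.0] * 50 + [1.0] * 50)
print("unlabeled agreement:", np.mean(np.sign(Xu @ w) == truth))
```

The label switching step mirrors the spirit of the method: it only ever swaps a positive and a negative label as a pair, so the balance constraint holds by construction while the unlabeled loss decreases monotonically, which guarantees the loop terminates.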
Validation-Based Sparse Gaussian Process Classifier Design
Neural Computation (2009) 21 (7): 2082–2103.
Published: 01 July 2009
Abstract
Gaussian processes (GPs) are promising Bayesian methods for classification and regression problems. Designing a GP classifier and making predictions with it are, however, computationally demanding, especially when the training set size is large. Sparse GP classifiers are known to overcome this limitation. In this letter, we propose and study a validation-based method for sparse GP classifier design. The proposed method uses a negative log predictive (NLP) loss measure, which is easy to compute for GP models. We use this measure for both basis vector selection and hyperparameter adaptation. Experimental results on several real-world benchmark data sets show generalization performance that is better than or comparable to that of existing methods.
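To make the idea concrete, here is a hypothetical Python toy of validation-based basis selection driven by the NLP loss. For brevity it fits a kernel logistic model in place of a full sparse GP, and the RBF kernel, gradient optimizer, and stopping tolerance are assumptions rather than details from the letter.

```python
# Hypothetical sketch: greedy basis-vector selection scored by the negative
# log predictive (NLP) loss on a held-out validation split.
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, gamma=0.5):
    # Squared-exponential kernel matrix between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_alpha(K, y, lam=1e-2, lr=0.5, epochs=200):
    # Regularized logistic fit of the basis-vector weights; y in {0, 1}.
    a = np.zeros(K.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-K @ a))
        a -= lr * (K.T @ (p - y) / len(y) + lam * a)
    return a

def nlp(K, a, y, eps=1e-12):
    # NLP loss: average negative log-probability assigned to held-out labels.
    p = 1.0 / (1.0 + np.exp(-K @ a))
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Toy data with a held-out validation split.
Xtr = rng.normal(size=(60, 2)); ytr = (Xtr.sum(1) > 0).astype(float)
Xva = rng.normal(size=(40, 2)); yva = (Xva.sum(1) > 0).astype(float)

basis, remaining, best = [], list(range(len(Xtr))), np.inf
while remaining:
    # Greedy step: among the remaining training points, add the basis vector
    # that most lowers the validation NLP; stop when nothing improves it.
    trials = []
    for c in remaining:
        idx = basis + [c]
        a = fit_alpha(rbf(Xtr, Xtr[idx]), ytr)
        trials.append((nlp(rbf(Xva, Xtr[idx]), a, yva), c))
    score, c = min(trials)
    if score >= best - 1e-4:
        break
    best, basis = score, basis + [c]
    remaining.remove(c)

print(f"kept {len(basis)} of {len(Xtr)} basis vectors, validation NLP {best:.3f}")
```

Because the NLP score is computed on a held-out split, the greedy loop stops adding basis vectors as soon as predictive performance stops improving, which is what keeps the resulting model sparse.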
Fast Generalized Cross-Validation Algorithm for Sparse Model Learning
Neural Computation (2007) 19 (1): 283–301.
Published: 01 January 2007
Abstract
We propose a fast, incremental algorithm for designing linear regression models. The proposed algorithm generates a sparse model by optimizing multiple smoothing parameters using the generalized cross-validation approach. Its performance on synthetic and real-world data sets is compared with that of other incremental algorithms, such as Tipping and Faul's fast relevance vector machine, Chen et al.'s orthogonal least squares, and Orr's regularized forward selection. The results demonstrate that the proposed algorithm is competitive.
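The following Python sketch conveys the flavor of GCV-driven incremental selection. It uses a single ridge parameter where the paper optimizes multiple smoothing parameters, so it should be read as a rough analogue under that simplifying assumption.

```python
# Hypothetical sketch: forward selection of regression columns scored by
# generalized cross-validation (GCV).
import numpy as np

rng = np.random.default_rng(2)

def gcv_score(X, y, lam=1e-3):
    # Ridge fit plus the GCV score  n * RSS / (n - df)^2,  where df is the
    # trace of the hat matrix X (X'X + lam I)^{-1} X'.
    n = len(y)
    A = X.T @ X + lam * np.eye(X.shape[1])
    w = np.linalg.solve(A, X.T @ y)
    df = np.trace(X @ np.linalg.solve(A, X.T))
    rss = ((y - X @ w) ** 2).sum()
    return n * rss / (n - df) ** 2

# Toy data: 30 candidate features, only three carry signal.
n, p = 100, 30
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 1.5 * X[:, 4] + 0.5 * X[:, 9] + 0.1 * rng.normal(size=n)

selected, remaining, best = [], list(range(p)), np.inf
while remaining:
    # Incremental step: add the candidate column whose inclusion yields the
    # lowest GCV score; stop as soon as GCV no longer improves.
    score, col = min((gcv_score(X[:, selected + [c]], y), c) for c in remaining)
    if score >= best:
        break
    best = score
    selected.append(col)
    remaining.remove(col)

print("selected columns:", sorted(selected), " GCV:", round(best, 5))
```

Since GCV penalizes effective degrees of freedom, the forward loop tends to stop once an additional column no longer pays for the flexibility it adds, which is how the model stays sparse without a separate validation set.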