Chih-Jen Lin
1-13 of 13 results
Neural Computation (2018) 30 (6): 1673–1724.
Published: 01 June 2018
Abstract
Deep learning involves a difficult nonconvex optimization problem with a large number of weights between any two adjacent layers of a deep structure. To handle large data sets or complicated networks, distributed training is needed, but the calculation of function, gradient, and Hessian is expensive. In particular, the communication and the synchronization cost may become a bottleneck. In this letter, we focus on situations where the model is distributedly stored and propose a novel distributed Newton method for training deep neural networks. By variable and feature-wise data partitions and some careful designs, we are able to explicitly use the Jacobian matrix for matrix-vector products in the Newton method. Some techniques are incorporated to reduce the running time as well as memory consumption. First, to reduce the communication cost, we propose a diagonalization method such that an approximate Newton direction can be obtained without communication between machines. Second, we consider subsampled Gauss-Newton matrices for reducing the running time as well as the communication cost. Third, to reduce the synchronization cost, we terminate the process of finding an approximate Newton direction even though some nodes have not finished their tasks. Details of some implementation issues in distributed environments are thoroughly investigated. Experiments demonstrate that the proposed method is effective for the distributed training of deep neural networks. Compared with stochastic gradient methods, it is more robust and may give better test accuracy.
Includes: Supplementary data
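A minimal single-machine sketch of the subsampled Gauss-Newton step described above: an approximate Newton direction is obtained by conjugate gradient, using only matrix-vector products with the Jacobian of a subsampled batch. The toy linear least-squares model, the damping term lam, and the step size are illustrative assumptions, not the distributed procedure of the letter.

    import numpy as np

    def gauss_newton_direction(J_sub, grad, lam=1e-2, cg_iters=20):
        """Approximately solve (J^T J + lam I) d = -grad by conjugate gradient,
        using only matrix-vector products with the subsampled Jacobian J_sub."""
        def Gv(v):
            return J_sub.T @ (J_sub @ v) + lam * v

        d = np.zeros_like(grad)
        r = -grad - Gv(d)            # residual of the linear system
        p = r.copy()
        rs = r @ r
        for _ in range(cg_iters):
            Gp = Gv(p)
            alpha = rs / (p @ Gp)
            d += alpha * p
            r -= alpha * Gp
            rs_new = r @ r
            if np.sqrt(rs_new) < 1e-8:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return d

    # Toy least-squares model with residuals X w - y.
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(1000, 50)), rng.normal(size=1000)
    w = np.zeros(50)
    batch = rng.choice(1000, size=100, replace=False)       # subsampling
    J_sub = X[batch] / np.sqrt(len(batch))                  # J_sub.T @ J_sub approximates the mean Hessian
    grad = X.T @ (X @ w - y) / len(X)                       # full mean gradient
    d = gauss_newton_direction(J_sub, grad)
    w += 0.5 * d                                            # damped Newton-like update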
Neural Computation (2015) 27 (8): 1766–1795.
Published: 01 August 2015
Abstract
Newton methods can be applied in many supervised learning approaches. However, for large-scale data, the use of the whole Hessian matrix can be time-consuming. Recently, subsampled Newton methods have been proposed to reduce the computational time by using only a subset of data for calculating an approximation of the Hessian matrix. Unfortunately, we find that in some situations, the running speed is worse than that of the standard Newton method because cheaper but less accurate search directions are used. In this work, we propose some novel techniques to improve the existing subsampled Hessian Newton method. The main idea is to solve a two-dimensional subproblem per iteration to adjust the search direction to better minimize the second-order approximation of the function value. We prove the theoretical convergence of the proposed method. Experiments on logistic regression, linear SVM, maximum entropy, and deep networks indicate that our techniques significantly reduce the running time of the subsampled Hessian Newton method. The resulting algorithm becomes a compelling alternative to the standard Newton method for large-scale data classification.
Includes: Supplementary data
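A minimal sketch of the two-dimensional subproblem idea: two candidate directions are combined by minimizing the second-order model over their span, which requires only Hessian-vector products and a 2-by-2 linear solve. The particular choice of the two directions below (a cheap diagonally scaled direction and the negative gradient standing in for a previous step) is an assumption for illustration, not necessarily the construction used in the letter.

    import numpy as np

    def combine_directions(hess_vec, grad, d1, d2, reg=1e-8):
        """Solve the 2-D subproblem min_beta 0.5*beta^T (V^T H V) beta + (V^T g)^T beta
        for the combined step d = beta1*d1 + beta2*d2, given a Hessian-vector product."""
        Hd1, Hd2 = hess_vec(d1), hess_vec(d2)
        A = np.array([[d1 @ Hd1, d1 @ Hd2],
                      [d2 @ Hd1, d2 @ Hd2]]) + reg * np.eye(2)
        b = -np.array([grad @ d1, grad @ d2])
        beta = np.linalg.solve(A, b)
        return beta[0] * d1 + beta[1] * d2

    # Toy quadratic f(w) = 0.5*w^T Q w + c^T w, so H = Q and the model is exact.
    rng = np.random.default_rng(1)
    M = rng.normal(size=(20, 20))
    Q = M @ M.T + np.eye(20)
    c = rng.normal(size=20)
    w = rng.normal(size=20)
    grad = Q @ w + c
    d1 = -np.linalg.solve(np.diag(np.diag(Q)), grad)   # cheap, diagonally scaled direction
    d2 = -grad                                         # stand-in for the previous search direction
    d = combine_directions(lambda v: Q @ v, grad, d1, d2)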
Neural Computation (2014) 26 (4): 781–817.
Published: 01 April 2014
Abstract
Linear rankSVM is one of the widely used methods for learning to rank. Although its performance may be inferior to nonlinear methods such as kernel rankSVM and gradient boosting decision trees, linear rankSVM is useful to quickly produce a baseline model. Furthermore, following its recent development for classification, linear rankSVM may give competitive performance for large and sparse data. A great deal of work has studied linear rankSVM, with a focus on computational efficiency when the number of preference pairs is large. In this letter, we systematically study existing works, discuss their advantages and disadvantages, and propose an efficient algorithm. We discuss different implementation issues and extensions with detailed experiments. Finally, we develop a robust linear rankSVM tool for public use.
Includes: Supplementary data
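For orientation, a naive sketch of the pairwise squared hinge (L2) loss that linear rankSVM-type methods minimize over preference pairs; the double loop below enumerates the pairs explicitly, which is exactly the cost the methods surveyed in the letter are designed to avoid when the number of pairs is large.

    import numpy as np

    def ranksvm_l2_objective(w, X, y, qid, C=1.0):
        """0.5*||w||^2 + C * sum over preference pairs (i, j) in the same query
        with y[i] > y[j] of max(0, 1 - w^T (x_i - x_j))^2."""
        obj = 0.5 * w @ w
        scores = X @ w
        n = len(y)
        for i in range(n):
            for j in range(n):
                if qid[i] == qid[j] and y[i] > y[j]:
                    margin = scores[i] - scores[j]
                    obj += C * max(0.0, 1.0 - margin) ** 2
        return obj

    # Tiny example: four documents in one query with relevance labels.
    X = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0], [0.2, 0.1]])
    y = np.array([2, 1, 0, 0])
    qid = np.array([1, 1, 1, 1])
    print(ranksvm_l2_objective(np.array([1.0, -1.0]), X, y, qid))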
Neural Computation (2013) 25 (5): 1302–1323.
Published: 01 May 2013
Abstract
Crammer and Singer's method is one of the most popular multiclass support vector machines (SVMs). It considers L1 loss (hinge loss) in a complicated optimization problem. In SVM, squared hinge loss (L2 loss) is a common alternative to L1 loss, but surprisingly we have not seen any paper studying the details of Crammer and Singer's method using L2 loss. In this letter, we conduct a thorough investigation. We show that the derivation is not trivial and has some subtle differences from the L1 case. The details provided in this work can be a useful reference for those who intend to use Crammer and Singer's method with L2 loss, sparing them a tedious process of deriving everything by themselves. Furthermore, we present some new results on, and discussion of, both the L1- and L2-loss formulations.
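For reference, Crammer and Singer's formulation is commonly written in the following textbook form (not necessarily the letter's notation); the L2-loss variant studied here squares the slack variables in the objective:

    \min_{w_1,\dots,w_k,\ \xi}\ \frac{1}{2}\sum_{m=1}^{k}\|w_m\|^{2} + C\sum_{i=1}^{l}\xi_i
    \quad\text{subject to}\quad w_{y_i}^{\top}x_i - w_m^{\top}x_i \ \ge\ e_i^{m} - \xi_i \quad \forall m,\ i,

where e_i^{m} = 1 - \delta(y_i = m); the L2-loss version replaces C\sum_i \xi_i with C\sum_i \xi_i^{2}.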
Neural Computation (2007) 19 (10): 2756–2779.
Published: 01 October 2007
Abstract
Nonnegative matrix factorization (NMF) can be formulated as a minimization problem with bound constraints. Although bound-constrained optimization has been studied extensively in both theory and practice, so far no study has formally applied its techniques to NMF. In this letter, we propose two projected gradient methods for NMF, both of which exhibit strong optimization properties. We discuss efficient implementations and demonstrate that one of the proposed methods converges faster than the popular multiplicative update approach. A simple Matlab code is also provided.
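A simplified sketch of projected gradient for NMF, alternating fixed-step projected gradient updates on W and H. The letter's methods work within a bound-constrained framework with a step-size search (and the provided code is in Matlab), so the fixed step and parameter values below are only illustrative assumptions.

    import numpy as np

    def nmf_projected_gradient(V, r, iters=500, step=1e-3, seed=0):
        """Minimize 0.5*||V - W H||_F^2 subject to W >= 0, H >= 0
        by alternating projected gradient steps with a fixed step size."""
        rng = np.random.default_rng(seed)
        n, m = V.shape
        W = np.abs(rng.normal(size=(n, r)))
        H = np.abs(rng.normal(size=(r, m)))
        for _ in range(iters):
            R = W @ H - V                                    # residual
            W = np.maximum(W - step * (R @ H.T), 0.0)        # projected step in W
            R = W @ H - V
            H = np.maximum(H - step * (W.T @ R), 0.0)        # projected step in H
        return W, H

    V = np.abs(np.random.default_rng(1).normal(size=(30, 20)))
    W, H = nmf_projected_gradient(V, r=5)
    print(np.linalg.norm(V - W @ H))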
Neural Computation (2005) 17 (5): 1188–1222.
Published: 01 May 2005
Abstract
Minimizing bounds of leave-one-out errors is an important and efficient approach for support vector machine (SVM) model selection. Past research has focused on their use for classification but not regression. In this letter, we derive various leave-one-out bounds for support vector regression (SVR) and discuss how they differ from those for classification. Experiments demonstrate that the proposed bounds are competitive with Bayesian SVR for parameter selection. We also discuss the differentiability of leave-one-out bounds.
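The derived bounds themselves are not reproduced here. As a generic point of comparison, the sketch below runs the plain cross-validation grid search for SVR parameters that leave-one-out bounds aim to replace with a cheaper, differentiable criterion; scikit-learn and all grid values are illustrative assumptions.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)

    # Exhaustive search over (C, gamma, epsilon); bound-based methods instead
    # minimize an analytic criterion to avoid training many models.
    grid = {"C": [0.1, 1, 10, 100],
            "gamma": [0.01, 0.1, 1],
            "epsilon": [0.01, 0.1, 0.5]}
    search = GridSearchCV(SVR(kernel="rbf"), grid, cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)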
Neural Computation (2004) 16 (8): 1689–1704.
Published: 01 August 2004
Abstract
In this letter, we show that decomposition methods with alpha seeding are extremely useful for solving a sequence of linear support vector machines (SVMs) with more data than attributes. This strategy is motivated by Keerthi and Lin (2003), who proved that for an SVM with data not linearly separable, after C is large enough, the dual solutions have the same free and bounded components. We explain why a direct use of decomposition methods for linear SVMs is sometimes very slow and then analyze why alpha seeding is much more effective for linear than nonlinear SVMs. We also conduct comparisons with other methods that are efficient for linear SVMs and demonstrate the effectiveness of alpha seeding techniques in model selection.
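A minimal sketch of the alpha-seeding idea for a sequence of linear SVMs: a bare-bones dual coordinate descent solver (L1 loss, no bias term) is warm-started at each new value of C from the previous alpha instead of from zero. Both the simple solver and the plain reuse of alpha are illustrative simplifications, not the decomposition method or the seeding strategies analyzed in the letter.

    import numpy as np

    def dual_cd_linear_svm(X, y, C, alpha=None, epochs=50):
        """Dual coordinate descent for min 0.5*||w||^2 + C*sum max(0, 1 - y_i w^T x_i),
        solved through its dual over 0 <= alpha_i <= C (no bias term)."""
        n, d = X.shape
        alpha = np.zeros(n) if alpha is None else np.minimum(alpha, C).copy()
        w = X.T @ (alpha * y)
        Qii = np.einsum("ij,ij->i", X, X)
        for _ in range(epochs):
            for i in range(n):
                G = y[i] * (w @ X[i]) - 1.0                       # partial gradient
                new_ai = min(max(alpha[i] - G / Qii[i], 0.0), C)  # projected update
                w += (new_ai - alpha[i]) * y[i] * X[i]
                alpha[i] = new_ai
        return w, alpha

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))
    y = np.sign(X[:, 0] + 0.3 * rng.normal(size=500))

    alpha = None
    for C in [0.01, 0.1, 1.0, 10.0]:                     # a sequence of C values, as in model selection
        w, alpha = dual_cd_linear_svm(X, y, C, alpha)    # alpha seeding: reuse the previous alpha
        print(C, np.mean(np.sign(X @ w) == y))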
Neural Computation (2003) 15 (11): 2643–2681.
Published: 01 November 2003
Abstract
An important approach for efficient support vector machine (SVM) model selection is to use differentiable bounds of the leave-one-out (loo) error. Past efforts focused on finding tight bounds of loo (e.g., radius margin bounds, span bounds). However, their practical viability is still not very satisfactory. Duan, Keerthi, and Poo (2003) showed that the radius margin bound gives good prediction for L2-SVM, one of the cases we look at. In this letter, through analyses of why this bound performs well for L2-SVM, we show that finding a bound whose minima are in a region with small loo values may be more important than its tightness. Based on this principle, we propose modified radius margin bounds for L1-SVM (the other case), for which the original bound is applicable only to the hard-margin case. Our modification for L1-SVM achieves comparable performance to L2-SVM. To study whether L1- or L2-SVM should be used, we analyze other properties, such as their differentiability, number of support vectors, and number of free support vectors. In this respect, L1-SVM possesses the advantage of having fewer support vectors. Their implementations are also different, so we discuss related issues in detail.
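For context, the classical hard-margin radius-margin result that this line of work builds on bounds the number of leave-one-out errors roughly by

    \#\{\text{loo errors}\}\ \le\ 4\,R^{2}\,\|w\|^{2},

where R is the radius of the smallest sphere enclosing the training points in feature space and 1/\|w\| is the margin; the constant differs across statements depending on whether the radius or the diameter of the sphere is used, and the modified bounds for L1- and L2-SVM derived in the letter are not reproduced here.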
Neural Computation (2003) 15 (7): 1667–1689.
Published: 01 July 2003
Abstract
Support vector machines (SVMs) with the Gaussian (RBF) kernel have been popular for practical use. Model selection in this class of SVMs involves two hyperparameters: the penalty parameter C and the kernel width σ. This letter analyzes the behavior of the SVM classifier when these hyperparameters take very small or very large values. Our results help in understanding the hyperparameter space that leads to an efficient heuristic method of searching for hyperparameter values with small generalization errors. The analysis also indicates that if complete model selection using the Gaussian kernel has been conducted, there is no need to consider linear SVM.
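In practice, the heuristic alluded to is usually an exponentially spaced grid search over (C, γ) with cross-validation, as in the sketch below; scikit-learn's γ corresponds to 1/(2σ²) for the Gaussian width σ, and the grid ranges are illustrative assumptions.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)

    # Exponentially spaced grid over the penalty parameter C and the RBF width.
    grid = {"C": 2.0 ** np.arange(-5, 11, 2),
            "gamma": 2.0 ** np.arange(-13, 1, 2)}
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
    search.fit(X, y)
    print(search.best_params_)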
Neural Computation (2002) 14 (8): 1959–1977.
Published: 01 August 2002
Abstract
We discuss the relation between ε-support vector regression (ε-SVR) and ν-support vector regression (ν-SVR). In particular, we focus on properties that are different from those of C-support vector classification (C-SVC) and ν-support vector classification (ν-SVC). We then discuss some issues that do not occur in the case of classification: the possible range of ε and the scaling of target values. A practical decomposition method for ν-SVR is implemented, and computational experiments are conducted. We show some interesting numerical observations specific to regression.
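A quick illustration of the two parameterizations using scikit-learn's SVR and NuSVR (illustrative stand-ins, not the decomposition implementation from the letter): ε fixes the width of the insensitive tube directly, while ν instead controls the fraction of support vectors and of points outside the tube, with ε determined as part of the optimization.

    import numpy as np
    from sklearn.svm import SVR, NuSVR

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(300, 1))
    y = np.sin(X).ravel() + 0.1 * rng.normal(size=300)

    eps_model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X, y)   # tube width given
    nu_model = NuSVR(kernel="rbf", C=1.0, nu=0.5).fit(X, y)       # fraction given, tube width found

    print("epsilon-SVR support vectors:", len(eps_model.support_))
    print("nu-SVR support vectors:     ", len(nu_model.support_),
          "(nu * n =", 0.5 * len(X), "is a lower bound on the number of SVs)")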
Neural Computation (2002) 14 (6): 1267–1281.
Published: 01 June 2002
Abstract
The dual formulation of support vector regression involves two closely related sets of variables. When the decomposition method is used, many existing approaches use pairs of indices from these two sets as the working set. Basically, they select a base set first and then expand it so all indices are pairs. This makes the implementation different from that for support vector classification. In addition, a larger optimization subproblem has to be solved in each iteration. We provide theoretical proofs and conduct experiments to show that using the base set as the working set leads to similar convergence (number of iterations). Therefore, by using a smaller working set while keeping a similar number of iterations, the program can be simpler and more efficient.
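For reference, the SVR dual with its two closely related sets of variables α and α* is commonly written in the following textbook form, with Q_{ij} = K(x_i, x_j):

    \min_{\alpha,\alpha^{*}}\ \frac{1}{2}(\alpha-\alpha^{*})^{\top}Q\,(\alpha-\alpha^{*})
      + \varepsilon\sum_{i=1}^{l}(\alpha_i+\alpha_i^{*}) - \sum_{i=1}^{l} y_i(\alpha_i-\alpha_i^{*})
    \quad\text{subject to}\quad \sum_{i=1}^{l}(\alpha_i-\alpha_i^{*}) = 0,\qquad 0 \le \alpha_i,\ \alpha_i^{*} \le C.

A base-set working set selects only indices i; pair-based approaches additionally add the corresponding index from the other set, which is what enlarges the subproblem solved at each iteration.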
Neural Computation (2001) 13 (9): 2119–2147.
Published: 01 September 2001
Abstract
The ν-support vector machine (ν-SVM) for classification proposed by Schölkopf, Smola, Williamson, and Bartlett (2000) has the advantage of using a parameter ν to control the number of support vectors. In this article, we investigate the relation between ν-SVM and C-SVM in detail. We show that in general they are two different problems with the same optimal solution set. Hence, we may expect that many numerical aspects of solving them are similar. However, compared to regular C-SVM, the formulation of ν-SVM is more complicated, so up to now there have been no effective methods for solving large-scale ν-SVM. We propose a decomposition method for ν-SVM that is competitive with existing methods for C-SVM. We also discuss the behavior of ν-SVM through some numerical experiments.
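A small check, using scikit-learn's NuSVC and SVC as illustrative stand-ins (not the decomposition method proposed in the article), of the property that motivates ν: it lower-bounds the fraction of support vectors, whereas C has no such direct interpretation.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC, NuSVC

    X, y = make_classification(n_samples=400, n_features=8, random_state=0)

    for nu in [0.1, 0.3, 0.5]:
        clf = NuSVC(nu=nu, kernel="rbf", gamma="scale").fit(X, y)
        frac_sv = len(clf.support_) / len(X)
        print(f"nu={nu:.1f}  fraction of support vectors={frac_sv:.2f}")

    # A C-parameterized problem with a suitable C has the same solution set,
    # but C itself does not directly bound the number of support vectors.
    print(len(SVC(C=1.0, kernel="rbf", gamma="scale").fit(X, y).support_))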
Neural Computation (2001) 13 (2): 307–317.
Published: 01 February 2001
Abstract
In this article, we discuss issues about formulations of support vector machines (SVM) from an optimization point of view. First, SVMs map training data into a higher- (maybe infinite-) dimensional space. Currently, primal and dual formulations of SVM are derived in the finite-dimensional space and readily extend to the infinite-dimensional space. We rigorously discuss the primal-dual relation in infinite-dimensional spaces. Second, SVM formulations contain penalty terms, which are different from unconstrained penalty functions in optimization. Traditionally, unconstrained penalty functions approximate a constrained problem as the penalty parameter increases. We are interested in similar properties for SVM formulations. For two of the most popular SVM formulations, we show that one enjoys properties of exact penalty functions, but the other is only like traditional penalty functions, which converge when the penalty parameter goes to infinity.
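For concreteness, the penalty term in question is the slack term of the standard soft-margin primal (usual textbook form):

    \min_{w,b,\xi}\ \frac{1}{2}\|w\|^{2} + C\sum_{i=1}^{l}\xi_i
    \quad\text{subject to}\quad y_i\big(w^{\top}\phi(x_i)+b\big)\ \ge\ 1-\xi_i,\qquad \xi_i \ge 0,

and the question, roughly, is whether such a formulation behaves like an exact penalty (recovering the corresponding constrained problem for a finite C) or only in the limit as the penalty parameter C goes to infinity.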