Ching-Pei Lee
Journal Articles
Publisher: Journals Gateway
Neural Computation (2014) 26 (4): 781–817.
Published: 01 April 2014
Abstract
Linear rankSVM is one of the widely used methods for learning to rank. Although its performance may be inferior to nonlinear methods such as kernel rankSVM and gradient boosting decision trees, linear rankSVM is useful for quickly producing a baseline model. Furthermore, following its recent development for classification, linear rankSVM may give competitive performance for large and sparse data. A number of works have studied linear rankSVM, focusing on computational efficiency when the number of preference pairs is large. In this letter, we systematically study existing works, discuss their advantages and disadvantages, and propose an efficient algorithm. We discuss different implementation issues and extensions with detailed experiments. Finally, we develop a robust linear rankSVM tool for public use.
Includes: Supplementary data
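For orientation only (the notation below is assumed, not quoted from the article), a common linear rankSVM formulation with squared hinge (L2) loss can be sketched as

\[
\min_{w} \;\; \frac{1}{2} w^{T} w \;+\; C \sum_{(i,j) \in P} \max\bigl(0,\; 1 - w^{T}(x_{i} - x_{j})\bigr)^{2},
\]

where \(P\) is the set of preference pairs in which instance \(i\) should be ranked above instance \(j\) and \(C > 0\) is a regularization parameter. The size of \(P\) can grow quadratically with the number of training instances, which is the computational bottleneck the abstract refers to.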
Journal Articles
Publisher: Journals Gateway
Neural Computation (2013) 25 (5): 1302–1323.
Published: 01 May 2013
Abstract
Crammer and Singer's method is one of the most popular multiclass support vector machines (SVMs). It considers the L1 loss (hinge loss) in a complicated optimization problem. In SVMs, the squared hinge loss (L2 loss) is a common alternative to the L1 loss, but surprisingly we have not seen any paper studying the details of Crammer and Singer's method with L2 loss. In this letter, we conduct a thorough investigation. We show that the derivation is not trivial and has some subtle differences from the L1 case. The details provided in this work can be a useful reference for those who intend to use Crammer and Singer's method with L2 loss, saving them the tedious process of deriving everything by themselves. Furthermore, we present some new results on and discussion of both the L1- and L2-loss formulations.
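As a point of reference (again using assumed notation rather than the paper's own), Crammer and Singer's multiclass SVM with \(k\) classes and weight vectors \(w_{1}, \dots, w_{k}\) can be written as

\[
\min_{w_{1}, \dots, w_{k}} \;\; \frac{1}{2} \sum_{m=1}^{k} \|w_{m}\|^{2} \;+\; C \sum_{i} \xi_{i},
\qquad
\xi_{i} \;=\; \max_{m} \bigl( w_{m}^{T} x_{i} + 1 - \delta_{y_{i},m} \bigr) \;-\; w_{y_{i}}^{T} x_{i},
\]

where \(y_{i}\) is the label of instance \(x_{i}\) and \(\delta_{y_{i},m} = 1\) if \(m = y_{i}\) and 0 otherwise. This is the L1-loss form; the L2-loss variant studied in the letter replaces \(\xi_{i}\) with \(\xi_{i}^{2}\) in the objective.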