Li-Zhi Liao
Journal Articles
Publisher: Journals Gateway
Neural Computation (2018) 30 (12): 3281–3308.
Published: 01 December 2018
Abstract
We study a dimensionality-reduction algorithm for multi-instance (MI) learning based on sparsity and orthogonality, which is especially useful for high-dimensional MI data sets. We develop a novel algorithm to handle both sparsity and orthogonality constraints, which existing methods do not handle well simultaneously. Our main idea is to formulate an optimization problem in which the sparse term appears in the objective function and the orthogonality requirement is imposed as a constraint. The resulting optimization problem can be solved using approximate augmented Lagrangian iterations as the outer loop and inertial proximal alternating linearized minimization (iPALM) iterations as the inner loop. The main advantage of this method is that both sparsity and orthogonality can be satisfied by the proposed algorithm. We show the global convergence of the proposed iterative algorithm and demonstrate that it can meet stringent sparsity and orthogonality requirements, which are very important for dimensionality reduction. Experimental results on both synthetic and real data sets show that the proposed algorithm can obtain learning performance comparable to that of other tested MI learning algorithms.
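The abstract's two-level structure (augmented Lagrangian outer loop, proximal inner loop) can be illustrated with a minimal sketch. The objective below is a generic sparse-PCA-style surrogate and the inner loop is plain proximal gradient descent rather than iPALM; both are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def soft_threshold(A, tau):
    # Elementwise proximal operator of the l1 norm.
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def sparse_orthogonal_pca(X, k, lam=0.1, rho=1.0, outer=15, inner=40, seed=0):
    """Sketch: minimize -tr(W' S W) + lam*||W||_1 subject to W' W = I_k
    (S = X' X), with an augmented Lagrangian outer loop on the orthogonality
    constraint and proximal-gradient inner steps for the sparse objective."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    S = X.T @ X
    W = np.linalg.qr(rng.standard_normal((d, k)))[0]   # orthonormal start
    Lam = np.zeros((k, k))                             # multiplier for W'W - I
    for _ in range(outer):
        # conservative step size from a crude Lipschitz bound
        step = 1.0 / (2 * np.linalg.norm(S, 2)
                      + 2 * np.linalg.norm(Lam, 2) + 6 * rho)
        for _ in range(inner):
            C = W.T @ W - np.eye(k)
            G = -2 * S @ W + 2 * W @ Lam + 2 * rho * W @ C  # smooth-part gradient
            W = soft_threshold(W - step * G, step * lam)    # prox step for l1 term
        Lam = Lam + rho * (W.T @ W - np.eye(k))             # multiplier update
        rho *= 1.5                                          # tighten the penalty
    return W
```

The outer loop only updates the multiplier and penalty, so the inner solver can be swapped for iPALM (with its inertial terms) without changing the overall scheme.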
Neural Computation (2006) 18 (8): 1818–1846.
Published: 01 August 2006
Abstract
Based on the inherent properties of convex quadratic minimax problems, this article presents a new neural network model for a class of convex quadratic minimax problems. By constructing a proper convex energy function, we show that the new model is stable in the sense of Lyapunov and converges to an exact saddle point in finite time. Furthermore, global exponential stability of the new model is established under mild conditions. Compared with existing neural networks for the convex quadratic minimax problem, the proposed neural network has finite-time convergence, a simpler structure, and lower complexity, making it more suitable for parallel implementation with simple hardware units. The validity and transient behavior of the proposed neural network are illustrated by simulation results.
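A neural network of this kind is a continuous-time dynamical system whose trajectories settle at the saddle point. The sketch below assumes the standard convex quadratic saddle function and plain gradient saddle-point dynamics, simulated with forward-Euler steps; it is a generic illustration of such dynamics, not the paper's specific model.

```python
import numpy as np

def saddle_dynamics(A, B, Q, c, b, x0, y0, dt=0.01, steps=5000):
    """Sketch: Euler simulation of the gradient saddle-point dynamics
        dx/dt = -(A x + Q y + c),   dy/dt = Q' x - B y - b
    for  min_x max_y  (1/2)x'Ax + c'x + x'Qy - (1/2)y'By - b'y,
    with A and B positive definite (so the saddle point is unique)."""
    x = np.asarray(x0, dtype=float).copy()
    y = np.asarray(y0, dtype=float).copy()
    for _ in range(steps):
        dx = -(A @ x + Q @ y + c)    # descend in the min variable
        dy = Q.T @ x - B @ y - b     # ascend in the max variable
        x += dt * dx
        y += dt * dy
    return x, y
```

At the saddle point both right-hand sides vanish, so the state stops moving; a hardware realization would integrate the same dynamics with analog units rather than discrete Euler steps.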