Qingshan Liu
1–2 of 2
Journal Articles
Publisher: Journals Gateway
Neural Computation (2010) 22 (11): 2962–2978.
Published: 01 November 2010
Abstract
In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
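The abstract describes a gradient-based neurodynamic approach to the standard-form linear program: minimize c^T x subject to Ax = b, x >= 0. As a rough illustration of this family of methods (not the paper's exact model), the sketch below integrates an Euler-discretized, projected subgradient flow of an exact l1-penalty reformulation; the penalty weight, step size, and iteration count are illustrative assumptions.

```python
# A minimal, hypothetical sketch of a neurodynamic LP solver in the spirit of the
# abstract above; it is not the paper's exact model.  The equality constraints are
# folded into an exact l1 penalty, and the resulting projected (sub)gradient flow
# is integrated with a simple Euler scheme.
import numpy as np

def lp_gradient_flow(c, A, b, sigma=10.0, dt=1e-3, steps=20_000):
    """Euler-discretized penalty gradient flow for min c^T x, A x = b, x >= 0."""
    x = np.zeros(c.size)
    for _ in range(steps):
        # Subgradient of c^T x + sigma * ||A x - b||_1 (sign acts elementwise).
        g = c + sigma * A.T @ np.sign(A @ x - b)
        x = np.maximum(x - dt * g, 0.0)   # Euler step plus projection onto x >= 0
    return x

# Tiny example: minimize x1 + 2*x2 subject to x1 + x2 = 1, x >= 0.
# The optimum is x = (1, 0) with value 1.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
print(lp_gradient_flow(c, A, b))   # should end up close to [1, 0]
```

When the penalty weight is large enough, the minimizer of the penalized problem coincides with the LP optimum, which is the general mechanism such penalty-based networks exploit; the finite-time convergence result quoted in the abstract concerns the authors' specific dynamics and is not reproduced by this sketch.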
Journal Articles
A One-Layer Recurrent Neural Network with a Discontinuous Activation Function for Linear Programming
Publisher: Journals Gateway
Neural Computation (2008) 20 (5): 1366–1383.
Published: 01 May 2008
Abstract
A one-layer recurrent neural network with a discontinuous activation function is proposed for linear programming. The number of neurons in the neural network is equal to that of decision variables in the linear programming problem. It is proven that the neural network with a sufficiently high gain is globally convergent to the optimal solution. Its application to linear assignment is discussed to demonstrate the utility of the neural network. Several simulation examples are given to show the effectiveness and characteristics of the neural network.
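The abstract notes that the network uses one neuron per decision variable and is applied to linear assignment. The snippet below is a hypothetical illustration of that mapping only: a 3 x 3 assignment problem written as a standard linear program (nine variables, hence nine neurons in the network described above), solved here with scipy.optimize.linprog purely as a reference to display the optimal permutation; it does not implement the paper's network.

```python
# Linear assignment as a linear program (illustration, not from the paper).
# For a 3 x 3 cost matrix the LP has 9 decision variables, so the one-layer
# network described above would use 9 neurons.
import numpy as np
from scipy.optimize import linprog

C = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])
n = C.shape[0]
c = C.ravel()                               # objective: sum_ij C_ij * x_ij

# Equality constraints: every row and every column of x sums to 1.
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0        # row i sums to 1
    A_eq[n + i, i::n] = 1.0                 # column i sums to 1
b_eq = np.ones(2 * n)

res = linprog(c, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n * n), method="highs")
print(res.x.reshape(n, n))   # an optimal permutation matrix (the LP relaxation is tight)
```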