Youshen Xia
1-6 of 6 results
Journal Articles
Publisher: Journals Gateway
Neural Computation (2014) 26 (2): 449–465.
Published: 01 February 2014
Abstract
In this letter, we propose a novel iterative method for computing the generalized inverse, based on a novel KKT formulation. The proposed iterative algorithm requires only four matrix-vector multiplications at each iteration and thus has low computational complexity. The method is proved to be globally convergent without any condition. Furthermore, to speed up the computation of the generalized inverse, we present an acceleration scheme based on the proposed iterative method, and its global convergence is also proved. Finally, the effectiveness of the proposed iterative algorithm is evaluated numerically.
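The letter's KKT-based iteration is not reproduced in the abstract, but the general idea of computing a generalized inverse by cheap matrix iterations can be illustrated with the classical Newton-Schulz scheme (a different, well-known method, not the paper's algorithm; all names below are illustrative):

```python
import numpy as np

def pseudoinverse_newton_schulz(A, tol=1e-12, max_iter=200):
    """Approximate the Moore-Penrose pseudoinverse of A by the
    classical Newton-Schulz iteration X_{k+1} = 2 X_k - X_k A X_k.

    With X_0 = A^T / (||A||_1 ||A||_inf) the iteration converges,
    since that scaling keeps the spectral radius of A X_0 below 1;
    each step costs a few matrix multiplications, echoing the
    low-cost-per-iteration theme of the letter.
    """
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        X_next = 2.0 * X - X @ A @ X
        if np.linalg.norm(X_next - X) < tol:
            return X_next
        X = X_next
    return X

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # 3x2, full column rank
X = pseudoinverse_newton_schulz(A)
```

The iteration is quadratically convergent once the error contracts, which is why such schemes are attractive compared with a direct SVD when only matrix products are cheap.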
Neural Computation (2008) 20 (9): 2227–2237.
Published: 01 September 2008
Abstract
Recently, the extended projection neural network was proposed to solve constrained monotone variational inequality problems and a class of constrained nonmonotonic variational inequality problems. Its exponential convergence was established under the condition that the Jacobian matrix of the nonlinear mapping is positive definite. This note presents new results on the exponential convergence of the output trajectory of the extended projection neural network under weaker conditions, in which the Jacobian matrix of the nonlinear mapping may be only positive semidefinite, or not even that. These new results further demonstrate that the extended projection neural network has a fast convergence rate when solving a class of constrained monotone and nonmonotonic variational inequality problems. Illustrative examples show the significance of the obtained results.
Neural Computation (2008) 20 (3): 844–872.
Published: 01 March 2008
Abstract
Constrained L1 estimation is an attractive alternative to both unconstrained L1 estimation and least-squares estimation. In this letter, we propose a cooperative recurrent neural network (CRNN) for solving L1 estimation problems with general linear constraints. The proposed CRNN model automatically combines four individual neural network models and is suitable for parallel implementation. As special cases, the proposed CRNN includes two existing neural networks for solving the unconstrained and constrained L1 estimation problems, respectively. Unlike existing penalty-parameter-based neural networks for the constrained L1 estimation problem, the proposed CRNN is guaranteed to converge globally to the exact optimal solution without any additional condition. Compared with conventional numerical algorithms, the proposed CRNN has low computational complexity and can handle L1 estimation problems with degeneracy. Several applied examples show that the proposed CRNN obtains more accurate estimates than several existing algorithms.
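The CRNN model itself is not given in the abstract. As a rough illustration of why L1 estimation is robust where least squares is not, here is an iteratively reweighted least-squares (IRLS) sketch for the unconstrained problem min_x ||Ax - b||_1 (a standard technique, not the paper's network; all names and parameter values are illustrative):

```python
import numpy as np

def l1_regression_irls(A, b, n_iter=50, eps=1e-8):
    """Approximate min_x ||A x - b||_1 by iteratively reweighted
    least squares: each pass solves a weighted L2 problem with
    weights 1/|residual|, which downweights large outliers."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # least-squares start
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)  # eps guards division by 0
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

# Line y = 2 t + 1 sampled at 20 points, with one gross outlier.
t = np.linspace(0.0, 1.0, 20)
b = 2.0 * t + 1.0
b[5] += 50.0  # outlier that would wreck a least-squares fit
A = np.column_stack([t, np.ones_like(t)])
x_l1 = l1_regression_irls(A, b)  # recovers slope ~2, intercept ~1
```

Because the L1 loss grows only linearly in each residual, a single corrupted observation cannot pull the fit away from the 19 clean points, which is the robustness property the constrained formulation inherits.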
Neural Computation (2007) 19 (6): 1589–1632.
Published: 01 June 2007
Abstract
This article studies the identification of a general nonlinear noisy system, viewed as the estimation of a predictor function. A measurement fusion method for the predictor function estimate is proposed. In the proposed scheme, observed data are first fused by an optimal fusion technique, and the optimally fused data are then incorporated into a nonlinear function estimator based on a robust least squares support vector machine (LS-SVM). A cooperative learning algorithm is proposed to implement the measurement fusion method. Compared with related identification methods, the proposed method can minimize both the approximation error and the noise error. The performance analysis shows that the proposed optimal measurement fusion function estimate has a smaller mean square error than the LS-SVM function estimate. Moreover, the proposed cooperative learning algorithm converges globally to the optimal measurement fusion function estimate. Finally, the proposed measurement fusion method is applied to ARMA signal and spatiotemporal signal modeling. Experimental results show that the proposed method provides a more accurate model.
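The fusion scheme is not detailed in the abstract, but the LS-SVM regressor it builds on can be sketched from the standard LS-SVM formulation, in which training reduces to a single linear KKT system in the bias and the support values (a textbook LS-SVM, not the paper's fused estimator; hyperparameter values are illustrative):

```python
import numpy as np

def lssvm_fit(X, y, gamma=100.0, sigma=0.5):
    """Train an LS-SVM regressor by solving the linear KKT system
        [ 0        1^T       ] [ b     ]   [ 0 ]
        [ 1   K + I/gamma    ] [ alpha ] = [ y ]
    where K is the RBF kernel Gram matrix."""
    n = len(y)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]  # bias b, coefficients alpha

def lssvm_predict(X_train, alpha, bias, X_new, sigma=0.5):
    d2 = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2)) @ alpha + bias

# Fit a smooth 1-D target function.
X = np.linspace(0.0, 2.0 * np.pi, 30)[:, None]
y = np.sin(X[:, 0])
bias, alpha = lssvm_fit(X, y)
y_hat = lssvm_predict(X, alpha, bias, X)
```

Replacing the raw targets `y` with optimally fused measurements is, at this level of abstraction, the point where the article's fusion step would enter.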
Neural Computation (2005) 17 (3): 515–525.
Published: 01 March 2005
Abstract
The output trajectory convergence of an extended projection neural network was previously established under the condition that the Jacobian matrix of the nonlinear mapping is positive definite. This note offers several new convergence results: both the state trajectory convergence and the output trajectory convergence of the extended projection neural network are obtained under the weaker condition that the Jacobian matrix is positive semidefinite. Comparison and illustrative examples demonstrate the applied significance of these new results.
Neural Computation (2004) 16 (4): 863–883.
Published: 01 April 2004
Abstract
Recently, a projection neural network has been shown to be a promising computational model for solving variational inequality problems with box constraints. This letter presents an extended projection neural network for solving monotone variational inequality problems with linear and nonlinear constraints. In particular, the proposed neural network can include the projection neural network as a special case. Compared with the modified projection-type methods for solving constrained monotone variational inequality problems, the proposed neural network has a lower complexity and is suitable for parallel implementation. Furthermore, the proposed neural network is theoretically proven to be exponentially convergent to an exact solution without a Lipschitz condition. Illustrative examples show that the extended projection neural network can be used to solve constrained monotone variational inequality problems.
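The abstract describes the network only at a high level. For the box-constrained special case it extends, the standard projection neural network dynamics dx/dt = P_Omega(x - F(x)) - x can be simulated with a simple Euler scheme (a generic sketch of the basic projection model, not the paper's extended network; step size and iteration count are illustrative):

```python
import numpy as np

def projection_nn_box(F, lo, hi, x0, step=0.1, n_steps=2000):
    """Euler simulation of the projection neural network
    dx/dt = P_box(x - F(x)) - x for a variational inequality
    with box constraints; equilibria of the dynamics coincide
    with solutions of the VI."""
    x = x0.astype(float)
    for _ in range(n_steps):
        x = x + step * (np.clip(x - F(x), lo, hi) - x)
    return x

# Monotone example: F(x) = x - b on the box [0, 1]^3.
# The VI solution is the projection of b onto the box.
b = np.array([0.5, 2.0, -1.0])
F = lambda x: x - b
x_star = projection_nn_box(F, 0.0, 1.0, np.zeros(3))
```

Each step needs only one evaluation of F and one componentwise clipping, which is the low-complexity, parallelizable structure the letter contrasts with modified projection-type methods.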