1-4 of 4
Jinde Cao
Neural Computation (2018) 30 (7): 1775–1800.
Published: 01 July 2018
Abstract
Because the optical lenses of cameras have a limited depth of field, images captured of the same scene are not all in focus. Multifocus image fusion is an efficient technique that synthesizes an all-in-focus image from several partially focused images. Previous methods have accomplished the fusion task in the spatial or transform domain; however, designing the fusion rules remains a problem for most of them. In this letter, approaching the task from the perspective of focus region detection, we propose a novel multifocus image fusion method based on a fully convolutional network (FCN) learned from synthesized multifocus images. The primary novelty of this method is that pixel-wise focus regions are detected by the learned FCN, and entire images, not just image patches, are exploited to train it. First, we synthesize 4500 pairs of multifocus images by repeatedly applying a Gaussian filter to each image from PASCAL VOC 2012 and use them to train the FCN. After that, a pair of source images is fed into the trained FCN, and two score maps indicating the focus property are generated. Next, one score map is inverted and averaged with the other to produce an aggregative score map, which takes full advantage of the focus probabilities in both score maps. We then apply a fully connected conditional random field (CRF) to the aggregative score map to obtain and refine a binary decision map for the fusion task. Finally, we use a weighted strategy based on the refined decision map to produce the fused image. To demonstrate the performance of the proposed method, we compare its fused results with those of several state-of-the-art methods on both a gray data set and a color data set. Experimental results show that the proposed method achieves superior fusion performance in both human visual quality and objective assessment.
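A minimal NumPy sketch of the fusion stage described in this abstract, assuming the FCN has already produced per-pixel focus score maps `score_a` and `score_b` in [0, 1] for the two source images. The fully connected CRF refinement is replaced by a plain threshold here, so this is only an illustrative stand-in for the pipeline; the names `fuse_pair` and `threshold` are hypothetical.

```python
# Illustrative sketch of score-map aggregation and weighted fusion (not the authors' code).
import numpy as np

def fuse_pair(img_a, img_b, score_a, score_b, threshold=0.5):
    # Invert one score map and average it with the other, so both maps
    # contribute to a single aggregative focus estimate for image A.
    aggregative = ((1.0 - score_b) + score_a) / 2.0

    # Binary decision map: 1 where image A is judged in focus, 0 otherwise.
    # (The paper refines this map with a fully connected CRF instead of a threshold.)
    decision = (aggregative > threshold).astype(np.float64)

    # Weighted combination of the two source images using the decision map.
    decision = decision[..., None] if img_a.ndim == 3 else decision
    return decision * img_a + (1.0 - decision) * img_b
```

Averaging one score map with the inversion of the other lets disagreement between the two maps soften the focus estimate before it is binarized.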
Neural Computation (2017) 29 (6): 1721–1744.
Published: 01 June 2017
Abstract
This letter focuses on lag synchronization control analysis for memristor-based coupled neural networks with parameter mismatches. Because of the parameter mismatches, lag complete synchronization in general cannot be achieved. First, based on the -measure method and the generalized Halanay inequality, together with control algorithms, some sufficient conditions are obtained to ensure that the coupled memristor-based neural networks reach lag synchronization with an error. Moreover, the error level is estimated. Second, we show that memristor-based coupled neural networks with parameter mismatches can reach lag complete synchronization under a discontinuous controller. Finally, two examples are given to illustrate the effectiveness of the proposed criteria and to support the theoretical results.
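As a rough illustration of what "lag synchronization with an error" means, the toy simulation below measures e(t) = y(t) - x(t - tau) for a scalar drive-response pair with a deliberate parameter mismatch. The dynamics, feedback gain k, and delay tau are invented for this sketch; they are not the memristor-based coupled networks analyzed in the letter.

```python
# Toy Euler simulation of the lag-synchronization error e(t) = y(t) - x(t - tau)
# between a drive system x and a controlled response system y with a parameter
# mismatch (2.0 vs. 2.2). All dynamics and constants are illustrative assumptions.
import numpy as np

def lag_error(tau=0.5, k=8.0, dt=0.001, T=20.0):
    n, lag = int(T / dt), int(tau / dt)
    x, y = np.zeros(n), np.zeros(n)
    x[0], y[0] = 1.0, -0.5
    for i in range(n - 1):
        x_lag = x[i - lag] if i >= lag else x[0]
        # Drive system with a bounded nonlinearity.
        x[i + 1] = x[i] + dt * (-x[i] + 2.0 * np.tanh(x[i]))
        # Response system: mismatched parameter plus feedback toward the lagged drive state.
        y[i + 1] = y[i] + dt * (-y[i] + 2.2 * np.tanh(y[i]) - k * (y[i] - x_lag))
    return np.abs(y[lag:] - x[:-lag])[-100:].max()

# The mismatch keeps the steady error bounded but nonzero ("lag synchronization
# with an error"); raising the gain k shrinks the error level.
print(lag_error(k=8.0), lag_error(k=20.0))
```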
Neural Computation (2016) 28 (4): 778–799.
Published: 01 April 2016
Abstract
In this letter, we investigate the sampled-data state feedback control (SDSFC) problem for Boolean control networks (BCNs). Some necessary and sufficient conditions are obtained for the global stabilization of BCNs by SDSFC. Different from conventional state feedback control, new phenomena are observed in the study of SDSFC. Based on the controllability matrix, we derive some necessary and sufficient conditions under which the trajectories of BCNs can be stabilized to a fixed point by piecewise constant control (PCC). It is proved that the global stabilization of BCNs under SDSFC is equivalent to that by PCC. Moreover, algorithms are given to construct the sampled-data state feedback controllers. Numerical examples are given to illustrate the effectiveness of the obtained results.
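The sketch below illustrates the sampled-data idea on a hypothetical two-node Boolean control network: the feedback law is evaluated only at sampling instants and the resulting Boolean input is held constant in between, so it acts as a piecewise constant control. The update rules, feedback law, and sampling period are assumptions made up for this example, not the construction from the letter.

```python
# Sampled-data state feedback on a hypothetical 2-node Boolean control network.

def step(x1, x2, u):
    # Hypothetical BCN update: x1' = x2 OR u,  x2' = x1 OR u.
    return x2 or u, x1 or u

def feedback(x1, x2):
    # Hypothetical state feedback law, read only at sampling instants.
    return not (x1 and x2)

def run(x=(False, False), period=3, steps=8):
    traj, u = [x], feedback(*x)
    for t in range(steps):
        if t % period == 0:                # refresh the control at sampling instants...
            u = feedback(*traj[-1])
        traj.append(step(*traj[-1], u))    # ...and hold it constant in between
    return traj

print(run())  # the trajectory is driven to the fixed point (True, True) and stays there
```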
Neural Computation (2010) 22 (11): 2962–2978.
Published: 01 November 2010
Abstract
In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
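As a loose illustration of a gradient-flow network for linear programming (not the letter's finite-time model), the sketch below Euler-discretizes a subgradient flow on an exact l1 penalty for a tiny LP. The penalty weight, step size, iteration count, and problem data are assumptions chosen for this example.

```python
# Euler-discretized sketch of a gradient-flow network for
#     minimize c^T x  subject to  A x = b,  x >= 0,
# using an exact l1 penalty with sign-type activations (illustrative only).
import numpy as np

c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def neural_lp(x, rho=10.0, eta=1e-3, iters=20000):
    for _ in range(iters):
        # Subgradient of c^T x + rho * (||Ax - b||_1 + sum_i max(-x_i, 0)).
        grad = c + rho * A.T @ np.sign(A @ x - b) - rho * (x < 0)
        x = x - eta * grad
    return x

print(neural_lp(np.array([0.3, 0.3])))  # lands in a small neighborhood of the optimum x* = (1, 0)
```

Because the penalty weight rho exceeds the optimal dual multiplier of the equality constraint (which is 1 here), the penalized minimizer coincides with the LP optimum, so the discretized flow settles near x* = (1, 0).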