Search results for "Faming Liang": 1-4 of 4
Journal Articles
Neural Computation (2019) 31 (6): 1183–1214.
Published: 01 June 2019
Abstract
Bayesian networks have been widely used in many scientific fields for describing the conditional independence relationships among a large set of random variables. This letter proposes a novel algorithm, the so-called p-learning algorithm, for learning moral graphs for high-dimensional Bayesian networks. The moral graph is a Markov network representation of the Bayesian network and is also the key to constructing the Bayesian network in constraint-based algorithms. The consistency of the p-learning algorithm is justified under the small-n, large-p scenario. The numerical results indicate that the p-learning algorithm significantly outperforms existing ones, such as PC, grow-shrink, incremental association, semi-interleaved HITON, hill-climbing, and max-min hill-climbing. Under the sparsity assumption, the p-learning algorithm has a computational complexity of O(p^2) even in the worst case, while the existing algorithms have a computational complexity of O(p^3) in the worst case.
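The letter's own tests are not reproduced here, so the following Python sketch is only a hypothetical stand-in: a generic constraint-based recovery of a moral graph for Gaussian data that scans all O(p^2) variable pairs and keeps an edge when the partial correlation given the remaining variables passes a Fisher z-test. It is not the p-learning algorithm, and its pinv-based precision estimate assumes n > p, unlike the small-n, large-p regime the letter targets.

# Hypothetical sketch: moral-graph recovery for Gaussian data by pairwise
# conditional-independence tests. NOT the p-learning algorithm from the letter.
import numpy as np
from scipy import stats

def moral_graph_partial_corr(X, alpha=0.05):
    n, p = X.shape
    prec = np.linalg.pinv(np.cov(X, rowvar=False))  # precision matrix estimate
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)                  # partial correlations
    edges = set()
    for i in range(p):                              # O(p^2) pair scan
        for j in range(i + 1, p):
            r = np.clip(pcorr[i, j], -0.999999, 0.999999)
            z = 0.5 * np.log((1 + r) / (1 - r))     # Fisher z-transform
            # conditioning set has p - 2 variables, so the usual standard
            # error is 1 / sqrt(n - (p - 2) - 3)
            se = 1.0 / np.sqrt(max(n - p - 1, 1))
            if 2 * stats.norm.sf(abs(z) / se) < alpha:
                edges.add((i, j))
    return edges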
Journal Articles
Neural Computation (2013) 25 (8): 2199–2234.
Published: 01 August 2013
Abstract
Simulating from distributions with intractable normalizing constants has been a long-standing problem in machine learning. In this letter, we propose a new algorithm, the Monte Carlo Metropolis-Hastings (MCMH) algorithm, for tackling this problem. The MCMH algorithm is a Monte Carlo version of the Metropolis-Hastings algorithm. It replaces the unknown normalizing constant ratio with a Monte Carlo estimate in simulations, while still converging, as shown in the letter, to the desired target distribution under mild conditions. The MCMH algorithm is illustrated with spatial autologistic models and exponential random graph models. Unlike other auxiliary-variable Markov chain Monte Carlo (MCMC) algorithms, such as the Møller and exchange algorithms, the MCMH algorithm avoids the requirement for perfect sampling and thus can be applied to many statistical models for which perfect sampling is unavailable or very expensive. The MCMH algorithm can also be applied to Bayesian inference for random effects models and missing data problems that involve simulations from a distribution with intractable integrals.
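The mechanism stated in the abstract, replacing the normalizing-constant ratio in the Metropolis-Hastings acceptance probability with a Monte Carlo estimate, can be sketched briefly. The sketch below assumes a symmetric proposal and an unnormalized density g(x, theta) with f(x | theta) = g(x, theta) / Z(theta); g, sample_from_model, log_prior, and propose are hypothetical placeholders, and the precise estimator and convergence conditions are those given in the letter.

# Minimal sketch of the MCMH idea: one Metropolis-Hastings step on theta with
# the intractable ratio Z(theta) / Z(theta') replaced by an importance-sampling
# estimate built from auxiliary draws x_i ~ f(. | theta).
import numpy as np

rng = np.random.default_rng(0)

def mcmh_step(theta, data, g, sample_from_model, log_prior, propose, m=100):
    theta_new = propose(theta)  # assumed symmetric proposal
    # Monte Carlo estimate of Z(theta_new) / Z(theta):
    #   E_{x ~ f(. | theta)} [ g(x, theta_new) / g(x, theta) ]
    xs = [sample_from_model(theta, rng) for _ in range(m)]
    z_ratio = np.mean([g(x, theta_new) / g(x, theta) for x in xs])
    # acceptance ratio with the estimated constant ratio plugged in:
    # [g(data, theta') / g(data, theta)] * [Z(theta) / Z(theta')] * prior ratio
    accept = (g(data, theta_new) / g(data, theta)) / z_ratio \
             * np.exp(log_prior(theta_new) - log_prior(theta))
    if rng.random() < accept:
        return theta_new
    return theta

In practice one would work in log space and reuse auxiliary samples across iterations; both refinements are omitted here for brevity.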
Journal Articles
Neural Computation (2005) 17 (6): 1385–1410.
Published: 01 June 2005
Abstract
Bayesian neural networks play an increasingly important role in modeling and predicting nonlinear phenomena in scientific computing. In this article, we propose to use the contour Monte Carlo algorithm to evaluate the evidence for Bayesian neural networks. In the new method, the evidence is learned dynamically for each of the models. Our numerical results show that the new method works well for both regression and classification multilayer perceptrons (MLPs). It often leads to an improved estimate, in terms of overall accuracy, of the evidence of multiple MLPs in comparison with the reversible-jump Markov chain Monte Carlo method and the Gaussian approximation method. For the simulated data, it can identify the true models, and for the real data, it produces results consistent with those published in the literature.
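As a rough illustration of "dynamically learned" evidence, the sketch below uses a generic stochastic-approximation scheme of Wang-Landau flavor: a sampler jumps between models, a weight penalty forces every model to be visited, and at convergence the learned log weights differ by the log evidence. This is an assumed stand-in, not the contour Monte Carlo updates of the article; log_post, sample_q, and log_q are hypothetical callables, with log_q the tractable density of each model's independence proposal.

# Hedged sketch: dynamically learning relative log evidence across K models
# by adaptively penalizing over-visited models.
import numpy as np

rng = np.random.default_rng(1)

def learn_log_evidence(K, log_post, sample_q, log_q, n_iter=100_000, gamma0=1.0):
    log_w = np.zeros(K)                      # running relative log-evidence
    k = rng.integers(K)
    theta = sample_q(k, rng)
    for t in range(1, n_iter + 1):
        # propose a uniformly chosen model with an independence draw from
        # that model's proposal q_k (tractable density log_q)
        k_new = rng.integers(K)
        theta_new = sample_q(k_new, rng)
        log_a = (log_post(k_new, theta_new) - log_q(k_new, theta_new)) \
              - (log_post(k, theta) - log_q(k, theta)) \
              - (log_w[k_new] - log_w[k])    # weight penalty forces mixing
        if np.log(rng.random()) < log_a:
            k, theta = k_new, theta_new
        # stochastic-approximation update: the visited model's weight rises,
        # so at convergence differences in log_w estimate differences in
        # log evidence
        log_w[k] += gamma0 / t
    return log_w - log_w.max()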
Journal Articles
Neural Computation (2003) 15 (8): 1959–1989.
Published: 01 August 2003
Abstract
We propose a new Bayesian neural network classifier that differs from the commonly used one in several respects, including the likelihood function, prior specification, and network structure. Under regularity conditions, we show that the decision boundary determined by the new classifier converges to the true one. We also propose a systematic implementation for the new classifier. In our implementation, the tuning of connection weights, the selection of hidden units, and the selection of input variables are unified by sampling from the joint posterior distribution of the network structure and connection weights. The numerical results show that the new classifier consistently outperforms the commonly used Bayesian neural network classifier and the support vector machine in terms of generalization performance. The reason for the inferiority of the commonly used Bayesian neural network classifier and the support vector machine is discussed at length.
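The implementation idea, a single chain over the joint state of network structure and connection weights, can be caricatured as alternating a random-walk move on the weights with a flip of one connection indicator. Everything below is a hypothetical sketch (tanh hidden layer, Gaussian weight prior, Bernoulli(0.5) structure prior with a symmetric flip proposal), not the likelihood, prior, or moves specified in the article.

# Hedged sketch: Metropolis sampling over (weights, binary structure mask)
# of a one-hidden-layer classifier with prunable connections.
import numpy as np

rng = np.random.default_rng(2)

def log_lik(W, mask, X, y):
    # masked one-hidden-layer MLP, Bernoulli likelihood for labels y in {0, 1}
    W1, w2 = W
    m1, m2 = mask
    h = np.tanh(X @ (W1 * m1))
    p = 1.0 / (1.0 + np.exp(-h @ (w2 * m2)))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def mh_sweep(W, mask, X, y, sigma_prior=1.0, step=0.05):
    W1, w2 = W
    m1, m2 = mask

    def log_post(W, mask):
        # Gaussian prior on the weights; flat Bernoulli(0.5) structure prior
        # contributes a constant and is omitted
        return log_lik(W, mask, X, y) \
             - (np.sum(W[0] ** 2) + np.sum(W[1] ** 2)) / (2 * sigma_prior ** 2)

    # random-walk move on the connection weights
    W_new = (W1 + step * rng.standard_normal(W1.shape),
             w2 + step * rng.standard_normal(w2.shape))
    if np.log(rng.random()) < log_post(W_new, mask) - log_post(W, mask):
        W = W_new
    # structure move: flip one input-to-hidden connection indicator; the
    # symmetric flip proposal and uniform structure prior cancel in the ratio
    i, j = rng.integers(m1.shape[0]), rng.integers(m1.shape[1])
    m1_new = m1.copy()
    m1_new[i, j] = 1 - m1_new[i, j]
    if np.log(rng.random()) < log_post(W, (m1_new, m2)) - log_post(W, (m1, m2)):
        mask = (m1_new, m2)
    return W, mask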