Bayesian neural networks play an increasingly important role in modeling and predicting nonlinear phenomena in scientific computing. In this article, we propose to use the contour Monte Carlo algorithm to evaluate the evidence for Bayesian neural networks. In the new method, the evidence is learned dynamically for each model. Our numerical results show that the new method works well for both regression and classification multilayer perceptrons (MLPs). Compared with the reversible-jump Markov chain Monte Carlo method and the Gaussian approximation method, it often yields a more accurate estimate of the evidence for multiple MLPs. For the simulated data, it can identify the true models, and for the real data, it produces results consistent with those published in the literature.
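To illustrate the flavor of evidence estimation by a contour-style adaptive Monte Carlo scheme, the following is a minimal sketch, not the article's implementation: it learns adaptive log-weights over two toy "models" (unnormalized one-dimensional densities with known normalizing constants standing in for model evidences). The target densities, proposal scales, and gain sequence are illustrative assumptions chosen only so the result can be checked in closed form.

import numpy as np

rng = np.random.default_rng(0)

# Two unnormalized densities f_k(x); their integrals Z_k play the role of
# the model evidences we want to recover (here known analytically).
def log_f(k, x):
    if k == 0:
        return -0.5 * x**2                                # Z_0 = sqrt(2*pi)
    return np.log(3.0) - 0.5 * (x - 2.0)**2 / 4.0         # Z_1 = 3*sqrt(8*pi)

n_models, n_iter = 2, 200_000
log_w = np.zeros(n_models)   # adaptive log-weights, learned on the fly
k, x = 0, 0.0

for t in range(1, n_iter + 1):
    # Within-model random-walk Metropolis move on x.
    x_prop = x + rng.normal(scale=1.0)
    if np.log(rng.random()) < log_f(k, x_prop) - log_f(k, x):
        x = x_prop

    # Between-model move: propose the other model at the current x,
    # penalizing each model by its current weight so visits equalize.
    k_prop = 1 - k
    log_ratio = (log_f(k_prop, x) - log_w[k_prop]) - (log_f(k, x) - log_w[k])
    if np.log(rng.random()) < log_ratio:
        k = k_prop

    # Contour-style stochastic-approximation update: raise the weight of
    # the model just visited, with a decreasing gain sequence.
    gamma = 10.0 / max(t, 100)
    log_w[k] += gamma

# Up to an additive constant, log_w[k] estimates log Z_k; report the ratio.
print("estimated log(Z_1/Z_0):", log_w[1] - log_w[0])
print("true      log(Z_1/Z_0):", np.log(6.0))

The design point is that, with fixed weights, model k is visited with probability proportional to Z_k / exp(log_w[k]); driving the visit frequencies toward uniformity therefore forces log_w[k] toward log Z_k up to a common constant, so differences of the learned weights estimate log evidence ratios.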
