Marc M. Van Hulle
Neural Computation (2008) 20 (5): 1344–1365.
Published: 01 May 2008
A new gradient technique is introduced for linear independent component analysis (ICA), based on the Edgeworth expansion of mutual information; the algorithm operates sequentially, using fixed-point iterations. To address the adverse effect of outliers, a robust version of the Edgeworth expansion is adopted, expressed in terms of robust cumulants and robust derivatives of the Hermite polynomials. A new constrained version of ICA is also introduced, based on goal programming of mutual information objectives, and is applied to the extraction of the antepartum fetal electrocardiogram from multielectrode cutaneous recordings on the mother's thorax and abdomen.
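The robust Edgeworth expansion and its fixed-point update are the paper's contribution and are not reproduced here. As a minimal sketch of the sequential (one-unit, deflationary) fixed-point scheme this family of methods builds on, the following uses a plain kurtosis contrast on whitened data, in the style of FastICA; the contrast, iteration count, and function names are illustrative stand-ins, not the authors' algorithm.

```python
import numpy as np

def whiten(X):
    """Center and whiten data X with rows as samples (n_samples, n_dims)."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    d, E = np.linalg.eigh(cov)
    return Xc @ E @ np.diag(d ** -0.5) @ E.T

def fixed_point_ica(X, n_iter=200, seed=0):
    """Deflationary kurtosis-based fixed-point ICA (illustrative sketch;
    the paper iterates on a robust Edgeworth estimate of mutual
    information instead of the raw kurtosis contrast used here)."""
    Z = whiten(X)
    n, m = Z.shape
    rng = np.random.default_rng(seed)
    W = np.zeros((m, m))
    for i in range(m):
        w = rng.normal(size=m)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            u = Z @ w
            # Fixed-point update for the kurtosis contrast: w <- E[z u^3] - 3w
            w_new = (Z * (u ** 3)[:, None]).mean(axis=0) - 3.0 * w
            # Deflation: stay orthogonal to components already extracted
            w_new -= W[:i].T @ (W[:i] @ w_new)
            w = w_new / np.linalg.norm(w_new)
        W[i] = w
    return W  # rows are unmixing vectors; estimated sources are Z @ W.T
```

On a linear mixture of non-gaussian sources (say, a uniform and a Laplacian signal), the rows of W recover the sources up to permutation and sign.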
Neural Computation (2008) 20 (4): 964–973.
Published: 01 April 2008
We introduce a new approach to constrained independent component analysis (ICA) by formulating both the original, unconstrained ICA problem and the constraints directly in terms of mutual information. As an estimate of mutual information, a robust version of the Edgeworth expansion is used, on which gradient descent is performed. As an application, we consider the extraction of both the mother and the fetal antepartum electrocardiograms (ECG) from multielectrode cutaneous recordings on the mother's thorax and abdomen.
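A hedged sketch of the general idea, not the authors' formulation: extract one component by gradient ascent on a non-gaussianity contrast while penalizing decorrelation from a reference signal (for example, a maternal ECG lead). The kurtosis contrast, penalty weight lam, and learning rate are illustrative; the paper instead expresses both the objective and the constraints as mutual information via a robust Edgeworth estimate.

```python
import numpy as np

def constrained_unit(Z, ref, lam=5.0, lr=0.05, n_iter=2000, seed=0):
    """One-unit constrained ICA sketch on whitened data Z (n_samples, m).
    Maximizes a kurtosis^2 non-gaussianity contrast while penalizing
    decorrelation of y = Z @ w from a reference signal (stand-in for the
    paper's mutual-information objectives; lam and lr are illustrative)."""
    rng = np.random.default_rng(seed)
    r = (ref - ref.mean()) / ref.std()
    w = rng.normal(size=Z.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        y = Z @ w
        kurt = (y ** 4).mean() - 3.0
        c = (y * r).mean()  # ~corr(y, ref): ||w|| = 1 and Z is white
        # ascend on kurt^2 - lam * (1 - c)^2
        grad = 8 * kurt * (Z * (y ** 3)[:, None]).mean(axis=0) \
             + 2 * lam * (1 - c) * (Z * r[:, None]).mean(axis=0)
        w += lr * grad
        w /= np.linalg.norm(w)  # keep y approximately unit-variance
    return w
```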
Neural Computation (2006) 18 (2): 430–445.
Published: 01 February 2006
We introduce a new unbiased metric for assessing the quality of density estimation based on gaussian mixtures, called differential log likelihood. As an application, we determine the optimal smoothness and the optimal number of kernels in gaussian mixtures. Furthermore, we suggest a learning strategy for gaussian mixture density estimation and compare its performance with log likelihood maximization for a wide range of real-world data sets.
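The differential log likelihood metric is defined in the paper and not reproduced here. As a stand-in for the same model-selection task, the sketch below sweeps the number of kernels and scores each gaussian mixture by held-out average log likelihood (scikit-learn's GaussianMixture; the function name, split fraction, and k_range are illustrative).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def pick_n_kernels(X, k_range=range(1, 11), val_frac=0.3, seed=0):
    """Choose the number of gaussian kernels by held-out average log
    likelihood, a common stand-in for the paper's differential log
    likelihood criterion."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_val = int(val_frac * len(X))
    X_val, X_train = X[idx[:n_val]], X[idx[n_val:]]
    scores = {}
    for k in k_range:
        gmm = GaussianMixture(n_components=k, random_state=seed).fit(X_train)
        scores[k] = gmm.score(X_val)  # mean log likelihood per sample
    return max(scores, key=scores.get), scores
```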
Neural Computation (2005) 17 (9): 1903–1910.
Published: 01 September 2005
We develop the Edgeworth approximation of differential entropy for the general, multivariate case and show that, in that setting, it can be more accurate than the nearest-neighbor method and that it scales better with sample size. Furthermore, we introduce mutual information estimation as an application.
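For intuition, the univariate ancestor of this approximation writes differential entropy as the gaussian entropy minus a cumulant-based negentropy correction, H(x) ≈ ½ log(2πeσ²) − (κ₃²/12 + κ₄²/48) with standardized cumulants; the paper's multivariate expansion generalizes the correction terms and is not reproduced here. A minimal sketch of the univariate version:

```python
import numpy as np

def edgeworth_entropy_1d(x):
    """Univariate Edgeworth-style entropy estimate: gaussian entropy
    minus a negentropy correction from the 3rd and 4th cumulants.
    (The paper develops the general multivariate case.)"""
    x = np.asarray(x, dtype=float)
    s = (x - x.mean()) / x.std()
    k3 = (s ** 3).mean()          # standardized 3rd cumulant (skewness)
    k4 = (s ** 4).mean() - 3.0    # standardized 4th cumulant (excess kurtosis)
    h_gauss = 0.5 * np.log(2 * np.pi * np.e * x.var())
    return h_gauss - (k3 ** 2 / 12 + k4 ** 2 / 48)
```

Once the multivariate estimator is in place, mutual information follows as I(x; y) = H(x) + H(y) − H(x, y).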
Neural Computation (2005) 17 (8): 1706–1714.
Published: 01 August 2005
Instead of increasing the order of the Edgeworth expansion of a single gaussian kernel, we suggest using mixtures of Edgeworth-expanded gaussian kernels of moderate order. We introduce a simple closed-form solution for estimating the kernel parameters based on weighted moment matching. Furthermore, we formulate the extension to the multivariate case, which is not always feasible with algebraic density approximation procedures.
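For reference, a single Edgeworth-expanded gaussian kernel of moderate (fourth) order multiplies the gaussian density by a Hermite-polynomial correction carrying the kernel's skewness and excess kurtosis (the Gram-Charlier A form). The paper's weighted moment matching for fitting a mixture of such kernels is not reproduced; the sketch below only evaluates one kernel, and the parameter names are illustrative.

```python
import numpy as np

def edgeworth_kernel(x, mu=0.0, sigma=1.0, k3=0.0, k4=0.0):
    """Gaussian kernel expanded to 4th order (Gram-Charlier A series):
    phi(u) * (1 + k3*He3(u)/6 + k4*He4(u)/24), u = (x - mu)/sigma,
    with probabilists' Hermite polynomials He3 and He4."""
    u = (np.asarray(x, dtype=float) - mu) / sigma
    phi = np.exp(-0.5 * u ** 2) / (np.sqrt(2 * np.pi) * sigma)
    he3 = u ** 3 - 3 * u
    he4 = u ** 4 - 6 * u ** 2 + 3
    return phi * (1 + k3 * he3 / 6 + k4 * he4 / 24)
```

A mixture then sums weighted copies of such kernels; keeping the order moderate matters because large κ₃, κ₄ can drive the expansion negative, which is one motivation for using several moderate-order kernels instead of one high-order expansion.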
Neural Computation (2005) 17 (3): 503–513.
Published: 01 March 2005
We introduce a new unsupervised learning algorithm for kernel-based topographic map formation of heteroscedastic gaussian mixtures that allows for a unified account of distortion error (vector quantization), log-likelihood, and Kullback-Leibler divergence.
Neural Computation (2002) 14 (8): 1887–1906.
Published: 01 August 2002
A new learning algorithm for kernel-based topographic map formation is introduced. The kernel parameters are adjusted individually so as to maximize the joint entropy of the kernel outputs. This is done by maximizing the differential entropies of the individual kernel outputs while minimizing the map's output redundancy due to kernel overlap, the latter by minimizing the mutual information between the kernel outputs. As the kernel, the (radial) incomplete gamma distribution is taken, since for a gaussian input density the differential entropy of the kernel output is then maximal. Since the theoretically optimal joint entropy performance can be derived for the case of nonoverlapping gaussian mixture densities, a new clustering algorithm is suggested that uses this optimum as its “null” distribution. Finally, it is shown that the learning algorithm is similar to one that performs stochastic gradient descent on the Kullback-Leibler divergence for a heteroscedastic gaussian mixture density model.
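The learning rule itself (incomplete gamma kernels and their entropy gradients) is not reproduced here. As a diagnostic in the same spirit, the redundancy the rule minimizes can be monitored as the pairwise mutual information between kernel outputs, estimated below with a simple histogram estimator; the bin count is an illustrative choice.

```python
import numpy as np

def pairwise_mi(a, b, bins=16):
    """Histogram estimate of the mutual information (in nats) between two
    kernel-output sequences; a redundancy diagnostic, not the paper's
    learning rule."""
    p_ab, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = p_ab / p_ab.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of a
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of b
    mask = p_ab > 0
    return float((p_ab[mask] * np.log(p_ab[mask] / (p_a @ p_b)[mask])).sum())
```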
Neural Computation (2002) 14 (7): 1561–1573.
Published: 01 July 2002
We introduce a new learning algorithm for kernel-based topographic map formation. The algorithm generates a gaussian mixture density model by individually adapting the gaussian kernels' centers and radii to the assumed gaussian local input densities.
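A minimal online sketch of this kind of individual kernel adaptation, assuming a winner-take-all simplification with no neighborhood function (both departures from the published rule); the learning rates are illustrative.

```python
import numpy as np

def online_kernel_update(centers, radii, x, eta_c=0.05, eta_r=0.05):
    """One online update of a gaussian-kernel map (illustrative sketch):
    the winning kernel's center moves toward sample x, and its radius
    tracks the local per-dimension spread of the inputs it wins."""
    i = np.argmin(np.linalg.norm(centers - x, axis=1))  # winning kernel
    centers[i] += eta_c * (x - centers[i])
    spread = np.sqrt(np.sum((x - centers[i]) ** 2) / len(x))
    radii[i] += eta_r * (spread - radii[i])
    return i
```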
Neural Computation (1998) 10 (7): 1847–1871.
Published: 01 October 1998
We introduce a new unsupervised competitive learning rule, the kernel-based maximum entropy learning rule (kMER), which performs equiprobabilistic topographic map formation in regular, fixed-topology lattices, for use in nonparametric density estimation as well as nonparametric regression analysis. The receptive fields of the formal neurons are overlapping, radially symmetric kernels, compatible with radial basis functions (RBFs), but unlike in other learning schemes, the radii of these kernels do not have to be chosen in an ad hoc manner: the radii are adapted to the local input density, together with the weight vectors that define the kernel centers, so as to produce maps whose neurons have an equal probability of being active (equiprobabilistic maps). Both an “online” and a “batch” version of the learning rule are introduced and applied to nonparametric density estimation and regression, respectively. The application envisaged is blind source separation (BSS) from nonlinear, noisy mixtures.
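A toy sketch of the equiprobabilistic mechanism, not the published kMER updates: each radius grows when its neuron is inactive and shrinks when it is active, with the two rates balanced so that the equilibrium activation probability is ρ/N for every neuron. The hard-threshold activation and the constants are simplifications.

```python
import numpy as np

def kmer_like_step(W, sigma, x, rho=1.0, eta_w=0.02, eta_s=0.02):
    """One toy equiprobabilistic step: centers of active neurons move
    toward sample x; radii are nudged so that each neuron's activation
    probability settles at rho/N. W is (N, m), sigma is (N,)."""
    N = len(W)
    active = np.linalg.norm(W - x, axis=1) < sigma  # receptive-field hits
    p = rho / N
    # E[d_sigma] = eta_s * (p - P(active)): zero only when P(active) = rho/N
    sigma += eta_s * np.where(active, -(1 - p), p)
    W[active] += eta_w * (x - W[active])
    return active
```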
Neural Computation (1998) 10 (2): 295–312.
Published: 15 February 1998
The projective transformation onto the retina loses the explicit 3D shape description of a moving object. Theoretical studies show that the reconstruction of 3D shape from 2D motion information (shape from motion, SFM) is feasible, provided that the first- and second-order directional derivatives of the 2D velocity field are available. Experimental recordings have revealed that the receptive fields of the majority of cells in the macaque middle temporal area (MT) display an antagonistic (suppressive) surround and that a sizable portion of these surrounds is asymmetrical. This has led to the conjecture that these cells provide a local measure of the directional derivatives of the 2D velocity field. In this article, we adopt a nonparametric and biologically plausible approach to modeling the role played by the MT surrounds in the recovery of the orientation in depth (the slant and tilt) of a moving (translating) plane. A three-layered neural network is trained to represent the slant and tilt from the projected motion vectors. The hidden units of the network have speed-tuning characteristics and represent the MT model neurons with their surrounds. We conjecture that the MT surround results from lateral inhibitory connections with other MT cells and that populations of these cells, with different surround types, code linearly for the slant and tilt of translating planes.
Neural Computation (1997) 9 (3): 595–606.
Published: 01 March 1997
This article introduces an extremely simple and local learning rule for topographic map formation. The rule, called the maximum entropy learning rule (MER), maximizes the unconditional entropy of the map's output for any type of input distribution. The aim of this article is to show that MER is a viable strategy for building topographic maps that maximize the average mutual information of the output responses to noiseless input signals when only input noise and noise-added input signals are available.
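A one-dimensional, hedged sketch of a sign-based update in this spirit: fixed-size steps toward the sample drive each updated weight to the median of the region it responds to, which is what produces equiprobable (maximum-entropy) cells in 1D. The neighborhood handling below is a simplification, not the published MER rule.

```python
import numpy as np

def mer_like_step(w, x, eta=0.01, n_hood=1):
    """Sign-based topographic update (1D sketch): the winner and its
    lattice neighbors step a fixed amount toward sample x. Fixed-size
    sign steps pull each weight to the median of its region."""
    i = int(np.argmin(np.abs(w - x)))  # winner on the 1D lattice
    lo, hi = max(0, i - n_hood), min(len(w), i + n_hood + 1)
    w[lo:hi] += eta * np.sign(x - w[lo:hi])
    return i
```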
Neural Computation (1993) 5 (6): 939–953.
Published: 01 November 1993
A novel unsupervised learning rule, called the Boundary Adaptation Rule (BAR), is introduced for scalar quantization. The rule is shown to maximize information-theoretic entropy and thus to yield equiprobable quantizations of univariate probability density functions. Simulations show that BAR outperforms other unsupervised competitive learning rules in generating equiprobable quantizations, and that the rule can do better or worse than the Lloyd I algorithm in minimizing average mean square error, depending on the input distribution. Finally, an application to adaptive nonuniform analog-to-digital (A/D) conversion is considered.
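BAR itself adapts the bin boundaries with its own rule, which is not reproduced here; a closely related stochastic-approximation sketch tracks the interior j/N quantiles directly and converges to the same equiprobable quantization (step size and initialization are illustrative).

```python
import numpy as np

def equiprobable_boundaries(samples, n_bins=8, eta=0.05):
    """Track the interior j/n_bins quantiles with a Robbins-Monro update:
    b_j <- b_j + eta * (j/n_bins - 1{x <= b_j}).  At equilibrium
    F(b_j) = j/n_bins, i.e., equiprobable bins (BAR's goal)."""
    targets = np.arange(1, n_bins) / n_bins       # desired CDF values
    b = np.quantile(samples[:100], targets)       # rough initialization
    for x in samples:
        b += eta * (targets - (x <= b).astype(float))
    return np.sort(b)
```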