Kun Zhang
Journal Articles
Causal Discovery via Reproducing Kernel Hilbert Space Embeddings
Publisher: Journals Gateway
Neural Computation (2014) 26 (7): 1484–1517.
Published: 01 July 2014
Abstract
Causal discovery via the asymmetry between cause and effect has proved to be a promising way to infer the causal direction from observations. The basic idea is to assume that the mechanism generating the cause distribution p(x) and the mechanism generating the conditional distribution p(y|x) correspond to two independent natural processes, so that p(x) and p(y|x) fulfill some sort of independence condition. In many situations, however, this independence condition does not hold in the anticausal direction: if we regard p(x, y) as generated via p(y)p(x|y), then there are usually contrived mutual adjustments between p(y) and p(x|y). This kind of asymmetry can be exploited to identify the causal direction. Based on this postulate, in this letter, we define an uncorrelatedness criterion between p(x) and p(y|x) and, based on this uncorrelatedness, show an asymmetry between cause and effect: a certain complexity metric on p(x) and p(y|x) is smaller than the same metric on p(y) and p(x|y). We propose a Hilbert space embedding-based method, EMD (for EMbeDding), to calculate the complexity metric and show that this method preserves the relative magnitude of the metric. Based on the complexity metric, we propose an efficient kernel-based algorithm for causal discovery. The contribution of this letter is threefold: the method allows a general transformation from cause to effect that incorporates the noise effect; it is applicable to both one-dimensional and high-dimensional data; and it can be used to infer the causal ordering of multiple variables. Extensive experiments on simulated and real-world data show the effectiveness of the proposed method.
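As a rough illustration of the embedding idea only (not the paper's EMD algorithm), the sketch below scores each candidate direction by the squared Hilbert-Schmidt norm of a regularized empirical conditional-embedding operator and prefers the direction with the smaller score. The Gaussian kernel, bandwidth, regularizer, and the use of this norm as the complexity metric are all assumptions of the sketch, not details taken from the paper.

    import numpy as np

    def rbf_gram(x, sigma=1.0):
        # Gaussian (RBF) Gram matrix for a one-dimensional sample x of length n
        d = x[:, None] - x[None, :]
        return np.exp(-d ** 2 / (2 * sigma ** 2))

    def conditional_complexity(x, y, lam=1e-3):
        # Proxy for the complexity of p(y|x): squared Hilbert-Schmidt norm of
        # the regularized empirical conditional-embedding operator
        # C = (Kx + n*lam*I)^{-1} Ky. Smaller = "simpler" conditional.
        n = len(x)
        Kx, Ky = rbf_gram(x), rbf_gram(y)
        C = np.linalg.solve(Kx + n * lam * np.eye(n), Ky)
        return np.sum(C * C)

    def infer_direction(x, y):
        # Prefer the direction whose conditional is simpler; for brevity,
        # only the conditional term of the (marginal, conditional) pair is scored.
        if conditional_complexity(x, y) < conditional_complexity(y, x):
            return "x -> y"
        return "y -> x"

    # Toy usage: data in which x causes y through a nonlinear mechanism.
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 200)
    y = x ** 3 + 0.1 * rng.standard_normal(200)
    print(infer_direction(x, y))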
Journal Articles
An Adaptive Method for Subband Decomposition ICA
Publisher: Journals Gateway
Neural Computation (2006) 18 (1): 191–223.
Published: 01 January 2006
Abstract
Subband decomposition ICA (SDICA), an extension of ICA, assumes that each source is represented as the sum of some independent subcomponents and dependent subcomponents that occupy different frequency bands. In this article, we first investigate the feasibility of separating the SDICA mixture in an adaptive manner. Second, we develop an adaptive method for SDICA, namely band-selective ICA (BS-ICA), which finds the mixing matrix and estimates of the independent subcomponents of the sources. This method is based on minimizing the mutual information between outputs. Some practical issues are discussed. For better applicability, a scheme that avoids estimating the high-dimensional score function difference is given. Third, we investigate one form of the overcomplete ICA problem in which the sources have specific frequency characteristics and which BS-ICA can also be used to solve. Experimental results illustrate the success of the proposed method for solving both the SDICA and the overcomplete ICA problems.
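A minimal sketch of the subband idea, under two loud assumptions: a fixed band-pass filter stands in for the paper's adaptive band selection, and off-the-shelf FastICA stands in for the mutual-information-minimizing learning rule. The band edges lo/hi, the sampling rate fs, and the function names are hypothetical.

    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.decomposition import FastICA

    def bandpass(X, lo, hi, fs):
        # Zero-phase band-pass filter applied to each row (mixture) of X
        b, a = butter(4, [lo, hi], btype="band", fs=fs)
        return filtfilt(b, a, X, axis=-1)

    def subband_ica(X, lo, hi, fs):
        # X: (n_mixtures, n_samples) observed mixtures.
        # 1) Filter the mixtures into a band where the subcomponents are
        #    assumed independent, 2) estimate an unmixing matrix there,
        #    3) apply that matrix to the original wide-band mixtures.
        Xf = bandpass(X, lo, hi, fs)
        ica = FastICA(n_components=X.shape[0], random_state=0)
        ica.fit(Xf.T)              # FastICA expects (n_samples, n_features)
        W = ica.components_        # estimated unmixing matrix
        return W @ X

The point of step 3 is that the unmixing matrix learned on the "good" band separates the full-band signals as well, since the mixing matrix is shared across bands.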
Journal Articles
Extended Gaussianization Method for Blind Separation of Post-Nonlinear Mixtures
Publisher: Journals Gateway
Neural Computation (2005) 17 (2): 425–452.
Published: 01 February 2005
Abstract
The linear mixture model has been investigated in most articles tackling the problem of blind source separation. Recently, several articles have addressed a more complex model: blind source separation (BSS) of post-nonlinear (PNL) mixtures. These mixtures are assumed to be generated by applying an unknown invertible nonlinear distortion to linear instantaneous mixtures of some independent sources. The gaussianization technique for BSS of PNL mixtures emerged from the assumption that the distribution of a linear mixture of independent sources is gaussian. In this letter, we review the gaussianization method and then extend it to PNL mixtures in which the linear mixture is only close to gaussian. Our proposed method approximates the linear mixture using the Cornish-Fisher expansion. We choose mutual information as the independence measure and develop a learning algorithm to separate PNL mixtures. This method provides better applicability and accuracy. We then discuss the sufficient condition for the method to be valid. The characteristics of the nonlinearity do not affect the performance of the method. With only a few parameters to tune, our algorithm has comparatively low computational cost. Finally, we present experiments to illustrate the efficiency of our method.
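To make the gaussianization step concrete, here is a minimal sketch of the basic version (not the extended Cornish-Fisher variant): each observed channel is mapped through its empirical CDF and then the standard normal quantile function, which approximately inverts the unknown post-nonlinearity whenever the underlying linear mixture is (near-)gaussian. The function name is hypothetical.

    import numpy as np
    from scipy.stats import norm, rankdata

    def gaussianize(X):
        # X: (n_channels, n_samples). Per channel, ranks give the empirical
        # CDF in (0, 1); the normal quantile function maps it to a gaussian
        # marginal, undoing the invertible distortion up to an affine map.
        u = (rankdata(X, axis=-1) - 0.5) / X.shape[-1]
        return norm.ppf(u)

After gaussianization, the PNL problem reduces approximately to ordinary linear BSS, which any standard linear ICA routine can then handle.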