1-5 of 5
Laiwan Chan
Journal Articles
Confounder Detection in High-Dimensional Linear Models Using First Moments of Spectral Measures
Publisher: Journals Gateway
Neural Computation (2018) 30 (8): 2284–2318.
Published: 01 August 2018
Abstract
In this letter, we study the confounder detection problem in the linear model, where the target variable Y is predicted from its n potential causes X_n = (x_1, …, x_n)^T. Under the assumption of a rotation-invariant generating process, a recent study showed that the spectral measure induced by the regression coefficient vector with respect to the covariance matrix of X_n is close to a uniform measure in purely causal cases but differs from a uniform measure in a characteristic way in the presence of a scalar confounder. Analyzing spectral measure patterns could therefore help to detect confounding. In this letter, we propose to use the first moment of the spectral measure for confounder detection. We calculate the first moment of the regression-vector-induced spectral measure and compare it with the first moment of a uniform spectral measure, both defined with respect to the covariance matrix of X_n. The two moments coincide in nonconfounding cases and differ from each other in the presence of confounding. This statistical causal-confounding asymmetry can be used for confounder detection. Because it does not require analyzing the full spectral measure pattern, our method avoids the difficulty of metric choice and multiple-parameter optimization. Experiments on synthetic and real data demonstrate the performance of the method.
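As a rough illustration of the comparison described in this abstract (not the authors' implementation), the sketch below assumes the spectral measure induced by a vector a with respect to Cov(X_n) places weight ⟨a, u_i⟩²/‖a‖² on each eigenvalue λ_i; its first moment then reduces to aᵀΣa / (aᵀa), while the uniform measure's first moment is tr(Σ)/n.

```python
# Minimal sketch, under the assumptions stated above; thresholds and estimators
# are illustrative, not the article's procedure.
import numpy as np

def first_moment_gap(X, y):
    """Difference between the first moment of the regression-vector-induced
    spectral measure of Cov(X) and the first moment of the uniform spectral
    measure; values near zero are consistent with the unconfounded case."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    Sigma = np.cov(X, rowvar=False)                 # covariance of the potential causes
    a, *_ = np.linalg.lstsq(Xc, yc, rcond=None)     # OLS regression coefficients
    m_induced = (a @ Sigma @ a) / (a @ a)           # sum_i lambda_i <a,u_i>^2 / ||a||^2
    m_uniform = np.trace(Sigma) / Sigma.shape[0]    # (1/n) sum_i lambda_i
    return m_induced - m_uniform

# Smoke test on unconfounded toy data (the gap should be small)
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=5000)
print(first_moment_gap(X, y))
```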
Journal Articles
A Kernel Embedding–Based Approach for Nonstationary Causal Model Inference
Publisher: Journals Gateway
Neural Computation (2018) 30 (5): 1394–1425.
Published: 01 May 2018
Abstract
Although nonstationary data are more common in the real world, most existing causal discovery methods do not take nonstationarity into consideration. In this letter, we propose a kernel embedding–based approach, ENCI, for nonstationary causal model inference, where data are collected from multiple domains with varying distributions. In ENCI, we transform the complicated relation of a cause-effect pair into a linear model whose variable observations correspond to the kernel embeddings of the cause and effect distributions in the different domains. In this way, we are able to estimate the causal direction by exploiting the causal asymmetry of the transformed linear model. Furthermore, we extend ENCI to causal graph discovery for multiple variables by transforming the relations among them into a linear nongaussian acyclic model. We show that by exploiting the nonstationarity of the distributions, both cause-effect pairs and two kinds of causal graphs are identifiable under mild conditions. Experiments on synthetic and real-world data demonstrate the efficacy of ENCI over major existing methods.
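The sketch below illustrates only the kernel-mean-embedding step that the abstract refers to, assuming an RBF kernel and a fixed reference grid (both illustrative choices); it is not the full ENCI procedure, whose linear model over embeddings and identifiability conditions are given in the article.

```python
# Each domain's sample is summarized by an empirical kernel mean embedding
# evaluated at a fixed set of reference points, giving one vector per domain.
import numpy as np

def mean_embedding(samples, ref_points, bandwidth=1.0):
    """Empirical RBF-kernel mean embedding of `samples`, represented by its
    values at `ref_points`: mu(r) = mean_i k(samples[i], r)."""
    d2 = (samples[:, None] - ref_points[None, :]) ** 2   # squared distances
    return np.exp(-d2 / (2.0 * bandwidth ** 2)).mean(axis=0)

# Toy usage: embeddings of X and Y collected from several domains
rng = np.random.default_rng(0)
ref = np.linspace(-3, 3, 20)                             # shared reference grid
emb_x, emb_y = [], []
for _ in range(5):
    x = rng.normal(loc=rng.uniform(-1, 1), size=300)     # distribution shifts per domain
    y = np.tanh(x) + 0.3 * rng.normal(size=300)
    emb_x.append(mean_embedding(x, ref))
    emb_y.append(mean_embedding(y, ref))
emb_x, emb_y = np.array(emb_x), np.array(emb_y)          # one row per domain
# ENCI would then relate emb_x and emb_y across domains through a linear model;
# see the article for the actual construction.
```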
Journal Articles
Causal Inference on Discrete Data via Estimating Distance Correlations
Publisher: Journals Gateway
Neural Computation (2016) 28 (5): 801–814.
Published: 01 May 2016
Abstract
In this article, we deal with the problem of inferring causal directions when the data are on a discrete domain. By considering the distribution of the cause and the conditional distribution mapping cause to effect as independent random variables, we propose to infer the causal direction by comparing the distance correlation between P(X) and P(Y|X) with the distance correlation between P(Y) and P(X|Y). We infer that X causes Y if the dependence coefficient between P(X) and P(Y|X) is smaller. Experiments are performed to demonstrate the performance of the proposed method.
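A hedged sketch of the comparison rule as read from this abstract (the function names and the plain-numpy distance correlation below are illustrative, not the authors' code): for each value x of X we form the pair (P(X=x), P(Y|X=x)) and measure the distance correlation across these pairs, then repeat with X and Y swapped; the direction with the smaller coefficient is taken as causal.

```python
import numpy as np

def _dcor(A, B):
    """Empirical distance correlation between row-wise samples A and B."""
    def centered(M):
        D = np.linalg.norm(M[:, None, :] - M[None, :, :], axis=-1)
        return D - D.mean(0) - D.mean(1)[:, None] + D.mean()
    a, b = centered(A), centered(B)
    dcov2 = max((a * b).mean(), 0.0)
    dvar2 = np.sqrt((a * a).mean() * (b * b).mean())
    return np.sqrt(dcov2 / dvar2) if dvar2 > 0 else 0.0

def direction_score(x, y):
    """dcor between the marginal P(X=x) and the conditionals P(Y|X=x)."""
    xs, ys = np.unique(x), np.unique(y)
    marg = np.array([[np.mean(x == v)] for v in xs])                      # P(X=x), one row per x
    cond = np.array([[np.mean(y[x == v] == w) for w in ys] for v in xs])  # P(Y=y | X=x)
    return _dcor(marg, cond)

# Infer X -> Y if direction_score(x, y) < direction_score(y, x)
```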
Journal Articles
Causal Discovery via Reproducing Kernel Hilbert Space Embeddings
Publisher: Journals Gateway
Neural Computation (2014) 26 (7): 1484–1517.
Published: 01 July 2014
Abstract
Causal discovery via the asymmetry between the cause and the effect has proved to be a promising way to infer the causal direction from observations. The basic idea is to assume that the mechanism generating the cause distribution p(x) and the mechanism generating the conditional distribution p(y|x) correspond to two independent natural processes, and thus p(x) and p(y|x) fulfill some sort of independence condition. However, in many situations, this independence condition does not hold for the anticausal direction; if we consider p(x,y) as generated via p(y)p(x|y), then there are usually some contrived mutual adjustments between p(y) and p(x|y). This kind of asymmetry can be exploited to identify the causal direction. Based on this postulate, in this letter, we define an uncorrelatedness criterion between p(x) and p(y|x) and, based on this uncorrelatedness, show an asymmetry between the cause and the effect in the sense that a certain complexity metric on p(x) and p(y|x) is smaller than the same metric on p(y) and p(x|y). We propose a Hilbert space embedding-based method, EMD (an abbreviation for EMbeDding), to calculate the complexity metric and show that this method preserves the relative magnitude of the complexity metric. Based on the complexity metric, we propose an efficient kernel-based algorithm for causal discovery. The contribution of this letter is threefold: it allows a general transformation from the cause to the effect involving the noise effect, it is applicable to both one-dimensional and high-dimensional data, and it can be used to infer the causal ordering for multiple variables. Extensive experiments on simulated and real-world data are conducted to show the effectiveness of the proposed method.
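The EMD complexity metric itself is defined in the article; the sketch below only illustrates the kind of Gram-matrix quantities such Hilbert space embedding methods compute, namely the squared RKHS norm of an empirical mean embedding of p(x) and the squared Hilbert-Schmidt norm of a ridge-regularized conditional-embedding estimate for p(y|x). The kernel, bandwidth, and regularization are assumptions, not the article's choices.

```python
import numpy as np

def rbf_gram(a, b, bandwidth=1.0):
    """RBF Gram matrix between 1-D sample vectors a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def embedding_quantities(x, y, lam=1e-2):
    """Squared norm of the empirical mean embedding of p(x), and squared
    Hilbert-Schmidt norm of a ridge-regularized conditional-embedding
    estimate for p(y|x), both expressed through Gram matrices."""
    n = len(x)
    Kx, Ky = rbf_gram(x, x), rbf_gram(y, y)
    mu_x_norm2 = Kx.mean()                        # ||mu_x||^2 = (1/n^2) sum_ij k(x_i, x_j)
    A = np.linalg.inv(Kx + n * lam * np.eye(n))   # (Kx + n*lam*I)^{-1}
    cond_hs2 = np.trace(A @ Ky @ A @ Kx)          # ||C_{Y|X}||_HS^2 of the regularized estimate
    return mu_x_norm2, cond_hs2

# Toy usage
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.sin(x) + 0.2 * rng.normal(size=200)
print(embedding_quantities(x, y))
```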
Journal Articles
Causality in Linear Nongaussian Acyclic Models in the Presence of Latent Gaussian Confounders
Publisher: Journals Gateway
Neural Computation (2013) 25 (6): 1605–1641.
Published: 01 June 2013
Abstract
LiNGAM has been successfully applied to some real-world causal discovery problems. Nevertheless, it assumes causal sufficiency; that is, there is no latent confounder of the observations, which may be unrealistic for real-world problems. Taking latent confounders into consideration will improve the reliability and accuracy of estimates of the real causal structures. In this letter, we investigate a model called linear nongaussian acyclic models in the presence of latent gaussian confounders (LiNGAM-GC), which can be seen as a special case of lvLiNGAM. This model includes latent confounders, which are assumed to be independently gaussian distributed and statistically independent of the disturbances. To tackle the causal discovery problem for this model, we first propose a pairwise cumulant-based measure of causal direction for cause-effect pairs. We prove that, in spite of the presence of latent gaussian confounders, the causal direction of the observed cause-effect pair can be identified under the mild condition that the disturbances are simultaneously supergaussian or subgaussian. We also propose a simple and efficient method to detect violations of this condition. We then extend our work to multivariate causal network discovery problems. Specifically, we propose algorithms to estimate the causal network structure, including the causal ordering and causal strengths, using an iterative root-finding-and-removing scheme based on the pairwise measure. To address the redundant-edge problem caused by finite sample size, we develop an efficient bootstrapping-based pruning algorithm. Experiments on synthetic and real-world data show the applicability of our model and the effectiveness of the proposed algorithms.
Includes: Supplementary data
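One concrete reason higher-order cumulants help in this setting, sketched below (illustrative only, not the article's pairwise measure): a latent Gaussian confounder contributes nothing to fourth-order cumulants, so a fourth-order cumulant ratio can still recover the causal coefficient in the direction x → y, where ordinary regression is biased by the confounder.

```python
import numpy as np

def cum4(a, b, c, d):
    """Fourth-order joint cumulant of the (centered) samples a, b, c, d."""
    a, b, c, d = (v - v.mean() for v in (a, b, c, d))
    return (np.mean(a * b * c * d)
            - np.mean(a * b) * np.mean(c * d)
            - np.mean(a * c) * np.mean(b * d)
            - np.mean(a * d) * np.mean(b * c))

def cumulant_coefficient(x, y):
    """Estimate of b in y = b*x + ... that ignores additive Gaussian confounding
    (nongaussian disturbances assumed, so cum4(x,x,x,x) != 0)."""
    return cum4(x, x, x, y) / cum4(x, x, x, x)

# Toy usage: x -> y with coefficient 0.8 and a latent Gaussian confounder z
rng = np.random.default_rng(0)
n, b = 100_000, 0.8
z = rng.normal(size=n)                    # latent Gaussian confounder
ex, ey = rng.laplace(size=n), rng.laplace(size=n)  # supergaussian disturbances
x = ex + z
y = b * x + ey + 1.5 * z
print(cumulant_coefficient(x, y))         # close to 0.8 despite the confounder
C = np.cov(x, y)
print(C[0, 1] / C[0, 0])                  # OLS slope, biased by z
```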