Journal of Cognitive Neuroscience (2021) 33 (2): 226–247.
Published: 01 February 2021
Abstract
Whereas probabilistic models describe the dependence structure between observed variables, causal models go one step further: They predict, for example, how cognitive functions are affected by external interventions that perturb neuronal activity. In this review and perspective article, we introduce the concept of causality in the context of cognitive neuroscience and review existing methods for inferring causal relationships from data. Causal inference is an ambitious task that is particularly challenging in cognitive neuroscience. We discuss two difficulties in more detail: the scarcity of interventional data and the challenge of finding the right variables. We argue for distributional robustness as a guiding principle to tackle these problems. Robustness (or invariance) is a fundamental principle underlying causal methodology. A (correctly specified) causal model of a target variable generalizes across environments or subjects as long as these environments leave the causal mechanisms of the target intact. Consequently, if a candidate model does not generalize, then either it does not consist of the target variable's causes or the underlying variables do not represent the correct granularity of the problem. In this sense, assessing generalizability may be useful when defining relevant variables and can be used to partially compensate for the lack of interventional data.
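The invariance principle described in the abstract — a correctly specified causal model of a target variable generalizes across environments that leave the target's causal mechanism intact, whereas a non-causal model does not — can be illustrated with a toy simulation. The sketch below is not from the article; the structural model (X1 → Y → X2), the environment shifts, and the regression check are all illustrative assumptions, in the spirit of invariance-based approaches such as invariant causal prediction.

```python
# Toy illustration (not from the article) of the invariance principle:
# a model of Y built on its true cause keeps the same regression
# coefficient across environments, while a model built on an effect
# of Y (a non-causal predictor) does not.
import numpy as np

rng = np.random.default_rng(0)

def simulate(env_shift, n=5000):
    # Assumed structural model: X1 -> Y -> X2. "Environments" intervene
    # on X1 only, leaving Y's causal mechanism (Y = 2*X1 + noise) intact.
    x1 = rng.normal(loc=env_shift, scale=1.0 + abs(env_shift), size=n)
    y = 2.0 * x1 + rng.normal(size=n)
    x2 = -1.0 * y + rng.normal(size=n)
    return x1, x2, y

def slope(x, y):
    # Ordinary least-squares slope of y regressed on x (with intercept).
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[0]

causal, anticausal = [], []
for shift in (0.0, 2.0, -3.0):           # three environments
    x1, x2, y = simulate(shift)
    causal.append(slope(x1, y))          # regression on the true cause
    anticausal.append(slope(x2, y))      # regression on an effect of Y

print("Y ~ X1 slopes across environments:", np.round(causal, 2))
print("Y ~ X2 slopes across environments:", np.round(anticausal, 2))
```

In this sketch the Y ~ X1 slope stays near 2.0 in every environment, while the Y ~ X2 slope drifts as the interventions change the variance of Y. A candidate model whose coefficients fail such a stability check either omits the target's causes or uses variables at the wrong granularity, which is exactly the diagnostic role the abstract proposes for generalizability.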