Abstract
In many multivariate time series, the correlation structure is nonstationary, that is, it changes over time. The correlation structure may also change as a function of other cofactors, for example, the identity of the subject in biomedical data. A fundamental approach for the analysis of such data is to estimate the correlation structure (connectivities) separately in short time windows or for different subjects and to use existing machine learning methods, such as principal component analysis (PCA), to summarize or visualize the changes in connectivity. However, visualizing the output of such a straightforward PCA is problematic because the resulting connectivity patterns are much more complex objects than, say, spatial patterns. Here, we develop a new framework for analyzing variability in connectivities, using the PCA approach as the starting point. First, we show how to analyze and visualize the principal components of connectivity matrices by a tailor-made rank-two matrix approximation based on the outer product of two orthogonal vectors. This leads to a new kind of transformation of eigenvectors that is particularly suited for this purpose and often enables interpreting a principal component as connectivity between two groups of variables. Second, we show how to incorporate the orthogonality and rank-two constraints into the estimation of PCA itself to improve the results. We further interpret these methods in terms of estimating a probabilistic generative model related to blind separation of dependent sources. Experiments on brain imaging data give very promising results.
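As a concrete illustration (not from the paper itself), the following minimal numpy sketch shows one plausible reading of the rank-two approximation described above: a symmetric principal component matrix P is approximated by u v^T + v u^T with orthogonal u and v, obtained here as 45-degree rotations of the eigenvectors of P with the largest and the most negative eigenvalues. The function name `rank_two_approx` and the least-squares scaling are our assumptions, not the authors' code.

```python
import numpy as np

def rank_two_approx(P):
    """Hypothetical helper (assumption, not the authors' implementation):
    approximate a symmetric matrix P by the symmetrized outer product
    u v^T + v u^T of two orthogonal vectors u, v.

    u and v are 45-degree rotations of the eigenvectors of P with the
    largest and the smallest (most negative) eigenvalues; c is the
    least-squares magnitude of the fit c * (e+ e+^T - e- e-^T).
    """
    d, E = np.linalg.eigh(P)            # eigenvalues in ascending order
    e_pos, e_neg = E[:, -1], E[:, 0]    # top positive / top negative eigenvectors
    c = 0.5 * (d[-1] - d[0])            # least-squares scale, always >= 0
    u = np.sqrt(c / 2) * (e_pos + e_neg)
    v = np.sqrt(c / 2) * (e_pos - e_neg)
    # u v^T + v u^T equals c * (e_pos e_pos^T - e_neg e_neg^T), and u . v == 0
    return u, v, np.outer(u, v) + np.outer(v, u)

# Tiny demo on a random symmetric "connectivity component"
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
P = (A + A.T) / 2
u, v, P2 = rank_two_approx(P)
print(np.dot(u, v))                     # ~0: u and v are orthogonal
print(np.linalg.matrix_rank(P2))        # 2: the approximation has rank two
```

Under this reading, the positive part of the approximation (driven by u + v directions) and the negative part (u - v) correspond to the two groups of variables whose mutual connectivity the principal component captures.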