Abstract
Cortical pyramidal neurons receive inputs from multiple distinct neural populations and integrate these inputs in separate dendritic compartments. We explore the possibility that cortical microcircuits implement canonical correlation analysis (CCA), an unsupervised learning method that projects the inputs onto a common subspace so as to maximize the correlations between the projections. To this end, we seek a multichannel CCA algorithm that can be implemented in a biologically plausible neural network. For biological plausibility, we require that the network operates in the online setting and its synaptic update rules are local. Starting from a novel CCA objective function, we derive an online optimization algorithm whose optimization steps can be implemented in a single-layer neural network with multicompartmental neurons and local non-Hebbian learning rules. We also derive an extension of our online CCA algorithm with adaptive output rank and output whitening. Interestingly, the extension maps onto a neural network whose neural architecture and synaptic updates resemble neural circuitry and non-Hebbian plasticity observed in the cortex.
1 Introduction
Our brains can effortlessly extract a latent source contributing to two synchronous data streams, often from different sensory modalities. Consider, for example, following an actor while watching a movie with a soundtrack. We can easily pay attention to the actor's gesticulation and voice while filtering out irrelevant visual and auditory signals. How can biological neurons accomplish such multisensory integration?
In this article, we explore an algorithm for solving a linear version of this problem known as canonical correlation analysis (CCA; Hotelling, 1936). In CCA, the two synchronous data sets, known as views, are projected onto a common lower-dimensional subspace so that the projections are maximally correlated. For simple generative models, the sum of these projections yields an optimal estimate of the latent source (Bach & Jordan, 2005). CCA is a popular method because it has a closed-form exact solution in terms of the singular value decomposition (SVD) of the correlation matrix. Therefore, the projections can be computed using fast and well-understood spectral numerical methods.
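To make the closed-form solution concrete, here is a minimal offline NumPy sketch (our own illustration, not the online algorithm derived in this article): each view is whitened, the SVD of the whitened cross-covariance is computed, and the leading singular vectors give the canonical correlation basis vectors. The function and variable names are ours, and the views are assumed to be centered.

```python
import numpy as np

def cca_svd(X, Y, k):
    """Offline CCA via the SVD of the whitened cross-covariance matrix.

    X: (m, T) and Y: (n, T) hold T paired, centered samples of the two views.
    Returns the top-k canonical correlations rho and the canonical correlation
    basis vectors Vx (m, k) and Vy (n, k)."""
    T = X.shape[1]
    Cxx, Cyy, Cxy = X @ X.T / T, Y @ Y.T / T, X @ Y.T / T

    def inv_sqrt(C):
        # Inverse matrix square root of a symmetric positive definite matrix.
        w, U = np.linalg.eigh(C)
        return U @ np.diag(1.0 / np.sqrt(w)) @ U.T

    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    rho = s[:k]                    # canonical correlations
    Vx = Wx @ U[:, :k]             # basis vectors for the first view
    Vy = Wy @ Vt[:k].T             # basis vectors for the second view
    return rho, Vx, Vy
```

The sum of the two projections, which this offline routine makes available only after seeing the whole data set, is precisely the statistic that the online networks derived below compute sample by sample.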
To serve as a viable model of a neuronal circuit, the CCA algorithm must map onto a neural network consistent with basic biological facts. For our purposes, we say that a network is “biologically plausible” if it satisfies the following two minimal requirements: (1) the network operates in the online setting, that is, upon receiving an input, it computes the corresponding output without relying on the storage of any significant fraction of the full data set, and (2) the learning rules are local in the sense that each synaptic update depends only on the variables that are available as biophysical quantities represented in the pre- or postsynaptic neurons.
There are a number of neural network implementations of CCA (Lai & Fyfe, 1999; Pezeshki, Azimi-Sadjadi, & Scharf, 2003; Gou & Fyfe, 2004; Vía, Santamaría, & Pérez, 2007); however, most of these networks use nonlocal learning rules and are therefore not biologically plausible. One exception is the normative neural network model derived by Pehlevan, Zhao, Sengupta, and Chklovskii (2020). They start with formulating an objective for single-(output) channel CCA and derive an online optimization algorithm (previously proposed in Lai and Fyfe, 1999) that maps onto a pyramidal neuron with three electrotonic compartments: soma, as well as apical and basal dendrites. The apical and basal synaptic inputs represent the two views, the two dendritic compartments extract highly correlated CCA projections of the inputs, and the soma computes the sum of projections and outputs it downstream as action potentials. The communication between the compartments is implemented by calcium plateaus that also mediate non-Hebbian but local synaptic plasticity.
Whereas Pehlevan et al. (2020) also propose circuits of pyramidal neurons for multichannel CCA, their implementations lack biological plausibility. In one implementation, they resort to deflation where the circuit sequentially finds projections of the two views. Implementing this algorithm in a neural network requires a centralized mechanism to facilitate the sequential updates, and there is no experimental evidence of such a biological mechanism. In another implementation that does not require a centralized mechanism, the neural network has asymmetric lateral connections among pyramidal neurons. However, that algorithm is not derived from a principled objective for CCA and the network architecture does not match the neuronal circuitry observed in cortical microcircuits.
There are a number of existing consequential models of cortical microcircuits with multicompartmental neurons and non-Hebbian plasticity (Körding & König, 2001; Urbanczik & Senn, 2014; Guerguiev, Lillicrap, & Richards, 2017; Sacramento, Costa, Bengio, & Senn, 2018; Haga & Fukai, 2018; Richards & Lillicrap, 2019; Milstein et al., 2020). These models provide mechanistic descriptions of the neural dynamics and synaptic plasticity and account for many experimental observations, including the nonlinearity of neural outputs and the layered organization of the cortex. While our neural network model is single-layered and linear, it is derived from a principled CCA objective function, which has several advantages. First, since biological neural networks evolved to adaptively perform behaviorally relevant computations, it is natural to view them as optimizing a relevant objective function. Second, our approach clarifies which features of the network (e.g., multicompartmental neurons and non-Hebbian synaptic updates) are central to computing correlations. Finally, since the optimization algorithm is derived from a CCA objective that can be solved offline, the neural activities and synaptic weights can be analytically predicted for any input without resorting to numerical simulation. In this way, our neural network model is interpretable and analytically tractable, and it provides a useful complement to nonlinear, layered neural network models.
The remainder of this work is organized as follows. We state the CCA problem in section 2. In section 3, we introduce a novel objective for the CCA problem and derive offline and online CCA algorithms. In section 4, we derive an extension of our CCA algorithm, and in section 5, we map the extension onto a simplified cortical microcircuit. We provide results of numerical simulations in section 6.
Notation. For positive integers $m$ and $n$, let $\mathbb{R}^n$ denote $n$-dimensional Euclidean space, and let $\mathbb{R}^{m \times n}$ denote the set of $m \times n$ real-valued matrices equipped with the Frobenius norm $\|\cdot\|_F$. We use boldface lowercase letters (e.g., $\mathbf{v}$) to denote vectors and boldface uppercase letters (e.g., $\mathbf{M}$) to denote matrices. We let $O(n)$ denote the set of $n \times n$ orthogonal matrices and $\mathbb{S}_{++}^n$ denote the set of $n \times n$ positive definite matrices. We let $\mathbf{I}_n$ denote the $n \times n$ identity matrix.
2 Canonical Correlation Analysis
While CCA is typically viewed as an unsupervised learning method, it can also be interpreted as a special case of the supervised learning method reduced-rank regression, in which case one input is the feature vector and the other input is the label (see, e.g., Velu & Reinsel, 2013). With this supervised learning view of CCA, the natural output of a CCA network is the canonical correlation subspace projection (CCSP) of the feature vector. In separate work (Golkar et al., 2020), we derive an algorithm for the general reduced-rank regression problem, which includes CCA as a special case, for outputting the projection of the feature vector. The algorithm derived in Golkar et al. (2020) resembles the adaptive CCA with output whitening algorithm that we derive in section 4 of this work (see algorithm 3 as well as appendix B.2 for a detailed comparison of the two algorithms); however, there are significant advantages to the algorithm derived here. First, our network outputs the (whitened) sum of the CCSPs, which is a relevant statistic in applications. The algorithm in Golkar et al. (2020) only outputs the CCSP of the feature vector, which is natural when viewing CCA as a supervised learning method, but not when viewing CCA as an unsupervised learning method for integrating multiview inputs. Second, in contrast to the algorithm derived in Golkar et al. (2020), our adaptive CCA with output whitening algorithm allows for an adaptive output rank. This is particularly important for analyzing nonstationary input streams, a challenge that brains regularly face.
3 A Biologically Plausible CCA Algorithm
To derive a network that computes the sums of CCSPs for arbitrary input data sets, we adopt a normative approach in which we identify an appropriate cost function whose optimization leads to an online algorithm that can be implemented by a network with local learning rules. Previously, such an approach was taken to derive a biologically plausible PCA network from a similarity matching objective function (Pehlevan, Hu, & Chklovskii, 2015). We leverage this work by reformulating a CCA problem in terms of PCA of a modified data set and then solving it using similarity matching.
3.1 A Similarity Matching Objective
3.2 A Min-Max Objective
While the similarity matching objective, equations 3.3, can be minimized by taking gradient descent steps with respect to the output matrix $\mathbf{Z}$, this would not lead to an online algorithm because such a computation requires combining data from different time steps. Instead, we introduce auxiliary matrix variables, which store sufficient statistics that allow the CCA computation to use solely contemporary inputs and which will correspond to synaptic weights in the network implementation, and we rewrite the minimization problem 3.3 as a min-max problem.
3.3 An Offline CCA Algorithm
Recall that $\mathbf{M}$ is optimized over the set of positive definite matrices $\mathbb{S}_{++}^k$. To ensure that $\mathbf{M}$ remains positive definite after each update, note that the update rule for $\mathbf{M}$ can be rewritten as the following convex combination (provided $\eta/\tau \le 1$): $\mathbf{M} \leftarrow (1 - \eta/\tau)\,\mathbf{M} + (\eta/\tau)\,\mathbf{Z}\mathbf{Z}^\top/T$. Since $\mathbf{Z}\mathbf{Z}^\top$ is positive semidefinite, to guarantee that $\mathbf{M}$ remains positive definite given a positive-definite initialization, it suffices to assume that $\eta/\tau < 1$.
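As a sanity check of this argument, here is a minimal NumPy sketch of the convex-combination update; the names M, Z, eta, and tau follow the reconstruction above and are our own labels, not necessarily the paper's exact notation.

```python
import numpy as np

def update_M(M, Z, eta_over_tau):
    """One update of the auxiliary matrix M, written as a convex combination:
    M <- (1 - eta/tau) * M + (eta/tau) * (Z @ Z.T) / T.  Because Z Z^T / T is
    positive semidefinite, M stays positive definite whenever eta/tau < 1 and
    M is initialized positive definite."""
    assert 0.0 < eta_over_tau < 1.0, "eta/tau must lie in (0, 1)"
    T = Z.shape[1]
    return (1.0 - eta_over_tau) * M + eta_over_tau * (Z @ Z.T) / T
```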
3.4 An Online CCA Algorithm
4 Online Adaptive CCA with Output Whitening
We now introduce an extension of Bio-CCA that addresses two biologically relevant issues. First, Bio-CCA a priori sets the output rank at $k$; however, it may be advantageous for a neural circuit to instead adaptively set the output rank depending on the level of correlation captured. In particular, this can be achieved by projecting each view onto the subspace spanned by the canonical correlation basis vectors that correspond to canonical correlations exceeding a threshold. Second, it is useful from an information-theoretic perspective for neural circuits to whiten their outputs (Plumbley, 1993), and there is experimental evidence that neural outputs in the cortex are decorrelated (Ecker et al., 2010; Miura, Mainen, & Uchida, 2012). Both the adaptive output rank and output whitening modifications were implemented for a PCA network by Pehlevan and Chklovskii (2015) and can be adapted to the CCA setting. Here we present the modifications without providing detailed proofs, which can be found in the supplement of Pehlevan and Chklovskii (2015).
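The following offline sketch (our own illustration, not algorithm 3 itself) shows the effect of the two modifications: only canonical directions whose correlations exceed a threshold alpha are retained, and the summed projection is rescaled so that each retained component has unit variance. The names alpha and whiten_scale are ours, and the scaling assumes the standard CCA normalization in which each canonical variate has unit variance; the function reuses the outputs of the cca_svd sketch in section 1.

```python
import numpy as np

def adaptive_whitened_sum(rho, Vx, Vy, x, y, alpha=0.5):
    """Offline illustration of adaptive output rank with output whitening.

    rho, Vx, Vy: canonical correlations and basis vectors (e.g., from cca_svd).
    Only directions with rho_i > alpha are kept (adaptive rank).  Under the
    usual CCA normalization, the i-th summed projection has variance
    2 * (1 + rho_i), so dividing by its square root whitens the output."""
    keep = rho > alpha                                # adaptive output rank
    summed = Vx[:, keep].T @ x + Vy[:, keep].T @ y    # sum of retained CCSPs
    whiten_scale = np.sqrt(2.0 * (1.0 + rho[keep]))
    return summed / whiten_scale
```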
5 Relation to Cortical Microcircuits
We now show that Adaptive Bio-CCA with output whitening (algorithm 3) maps onto a neural network with local, non-Hebbian synaptic update rules that emulate salient aspects of synaptic plasticity found experimentally in cortical microcircuits (in both the neocortex and the hippocampus).
Cortical microcircuits contain two classes of neurons: excitatory pyramidal neurons and inhibitory interneurons. Pyramidal neurons receive excitatory synaptic inputs from two distinct sources via their apical and basal dendrites. The apical dendrites are all oriented in a single direction, and the basal dendrites branch from the cell body in the opposite direction (Takahashi & Magee, 2009; Larkum, 2013); see Figure 3. The excitatory synaptic currents in the apical and basal dendrites are first integrated separately in their respective compartments (Takahashi & Magee, 2009; Larkum, 2013). If the integrated excitatory current in the apical compartment exceeds the corresponding inhibitory input (the source of which is explained below), it produces a calcium plateau potential that propagates through the basal dendrites, driving plasticity (Takahashi & Magee, 2009; Larkum, 2013; Bittner et al., 2015). When the apical calcium plateau potential and basal dendritic current coincidentally arrive in the soma, they generate a burst in spiking output (Larkum, Zhu, & Sakmann, 1999; Larkum, 2013; Bittner et al., 2015). Inhibitory interneurons integrate pyramidal outputs and reciprocally inhibit the apical dendrites of pyramidal neurons, thus closing the loop.
Note that the update rule for the synapses in the apical dendrites depends on the basal calcium plateau potentials. Experimental evidence is focused on apical calcium plateau potentials, and it is not clear whether differences between basal inputs and inhibitory signals generate calcium signals for driving plasticity in the apical dendrites. Alternatively, the learning rule for the apical synapses coincides with the learning rule for the apical dendrites in Golkar et al. (2020), where a biological implementation in terms of local depolarization and backpropagating spikes was proposed. Due to the inconclusive evidence pertaining to plasticity in the apical tuft, we find it useful to put forth both interpretations.
Multicompartmental models of pyramidal neurons have been invoked previously in the context of biological implementation of the backpropagation algorithm (Körding & König, 2001; Urbanczik & Senn, 2014; Guerguiev et al., 2017; Haga & Fukai, 2018; Sacramento et al., 2018; Richards & Lillicrap, 2019). Under this interpretation, the apical compartment represents the target output, the basal compartment represents the algorithm prediction, and calcium plateau potentials communicate the error from the apical to the basal compartment, which is used for synaptic weight updates. The difference between these models and ours is that we use a normative approach to derive not only the learning rules but also the neural dynamics of the CCA algorithm, ensuring that the output of the network is known for any input. On the other hand, the linearity of neural dynamics in our network means that stacking our networks will not lead to any nontrivial results expected of a deep learning architecture. We leave introducing nonlinearities into neural dynamics and stacking our network to future work.
We conclude this section with comments on the interneuron-to-pyramidal neuron and pyramidal neuron-to-interneuron synaptic weight matrices, as well as the computational role of the interneurons in this network. First, the algorithm appears to require a weight-sharing mechanism between the two sets of synapses to ensure the symmetry between the weight matrices, which is biologically unrealistic and commonly referred to as the weight transport problem. However, even without any initial symmetry between these feedforward and feedback synaptic weights, the symmetry of the local learning rules causes the difference between the two to decay exponentially, without requiring weight transport (see section B.3). Second, in equations 5.1 and 5.2, the interneuron-to-pyramidal neuron synaptic weight matrix is preceded by a negative sign and the pyramidal neuron-to-interneuron synaptic weight matrix by a positive sign, which is consistent with the fact that in simplified cortical microcircuits, interneuron-to-pyramidal neuron synapses are inhibitory, whereas pyramidal neuron-to-interneuron synapses are excitatory. That said, this interpretation is superficial because the weight matrices are not constrained to be nonnegative, a consequence of implementing a linear statistical method. Imposing nonnegativity constraints on these weights may be useful for implementing nonlinear statistical methods; however, this requires further investigation. Finally, the interneuron activities were introduced in equation 4.2 to decorrelate the output. This is consistent with previous models of the cortex (King, Zylberberg, & DeWeese, 2013; Wanner & Friedrich, 2020), which introduced inhibitory interneurons to decorrelate excitatory outputs; however, in contrast to the current work, the models proposed in King et al. (2013) and Wanner and Friedrich (2020) are not normative.
6 Numerical Experiments
We now evaluate the performance of the online algorithms, Bio-CCA and Adaptive Bio-CCA with output whitening. In each plot, the lines and shaded regions respectively denote the means and 90% confidence intervals over five runs. Detailed descriptions of the implementations are given in section C.1. All experiments were performed in Python on an iMac Pro equipped with a 3.2 GHz 8-Core Intel Xeon W CPU. The evaluation code is available at https://github.com/flatironinstitute/bio-cca.
6.1 Data Sets
We first describe the evaluation data sets.
6.1.1 Synthetic
6.1.2 Mediamill
The data set Mediamill (Snoek, Worring, Van Gemert, Geusebroek, & Smeulders, 2006) consists of paired samples of video data and text annotations (combining the training and testing sets) and has been previously used to evaluate CCA algorithms (Arora, Marinov, Mianjy, & Srebro, 2017; Pehlevan et al., 2020). The first view consists of 120-dimensional visual features extracted from representative video frames. The second view consists of 101-dimensional vectors whose components correspond to manually labeled semantic concepts associated with the video frames (e.g., “basketball” or “tree”). To ensure that the problem is well conditioned, we add Gaussian noise to each view to generate the respective data matrices. The first 10 canonical correlations are plotted in Figure 4 (right).
6.1.3 Nonstationary
To evaluate Adaptive Bio-CCA with output whitening, we generated a nonstationary synthetic data set whose samples are streamed from three distinct distributions, each generated according to the probabilistic model in Bach and Jordan (2005). The first segment of samples is generated from a 4-dimensional latent source, the second segment from an 8-dimensional latent source, and the final segment from a 1-dimensional latent source.
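For reference, here is a sketch of how such a stream can be generated, using a simplified version of the Bach and Jordan (2005) generative model with isotropic Gaussian noise; the view dimensions, noise level, and segment lengths below are placeholders, not the values used in our experiments.

```python
import numpy as np

def sample_views(d_latent, m, n, T, noise=0.1, seed=None):
    """Draw T paired samples from a simplified probabilistic CCA model:
    a shared d_latent-dimensional latent source is mapped linearly into each
    view and corrupted by independent isotropic Gaussian noise."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, d_latent))
    B = rng.standard_normal((n, d_latent))
    S = rng.standard_normal((d_latent, T))          # shared latent source
    X = A @ S + noise * rng.standard_normal((m, T))
    Y = B @ S + noise * rng.standard_normal((n, T))
    return X, Y

# Nonstationary stream: concatenate segments driven by 4-, 8-, and
# 1-dimensional latent sources, as in section 6.1.3.
X1, Y1 = sample_views(4, 20, 20, 5000, seed=0)
X2, Y2 = sample_views(8, 20, 20, 5000, seed=1)
X3, Y3 = sample_views(1, 20, 20, 5000, seed=2)
X = np.concatenate([X1, X2, X3], axis=1)
Y = np.concatenate([Y1, Y2, Y3], axis=1)
```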
6.2 Bio-CCA
We now evaluate the performance of Bio-CCA (see algorithm 2) on the synthetic data set and Mediamill.
6.2.1 Competing Algorithms
We compare the performance of Bio-CCA with the following state-of-the-art online CCA algorithms:
- A two-timescale algorithm for computing the top pair of canonical correlation basis vectors (i.e., the case $k = 1$) introduced by Bhatia et al. (2018). The algorithm is abbreviated “Gen-Oja” due to its resemblance to Oja's method (Oja, 1982).
- An inexact matrix stochastic gradient method for solving CCA, abbreviated “MSG-CCA,” which was derived by Arora et al. (2017).
- The asymmetric neural network proposed by Pehlevan et al. (2020), which we abbreviate as “Asym-NN.”
- The biologically plausible reduced-rank regression algorithm derived by Golkar et al. (2020), abbreviated “Bio-RRR,” which implements a supervised version of CCA for a particular choice of its hyperparameters (see algorithm 4 in section B.2).
Detailed descriptions of the implementations of each algorithm are provided in section C.1.
6.2.2 Performance Metrics
6.2.3 Evaluation on the Synthetic Data Set
6.2.4 Evaluation on Mediamill
6.3 Adaptive Bio-CCA with Output Whitening
Next, we evaluate the performance of Adaptive Bio-CCA with output whitening (see algorithm 3) on all three data sets. Since we are unaware of competing online algorithms for adaptive CCA with output whitening, to compare the performance of algorithm 3 to existing methods, we also plot the performance of Bio-RRR (Golkar et al., 2020) with respect to subspace error, where we a priori select the target dimension. We chose Bio-RRR because the algorithm also maps onto a neural network that resembles the cortical microcircuit and because it performs relatively well on the synthetic data set and Mediamill.
6.3.1 Performance Metric
6.3.2 Evaluation on the Synthetic Data Set
6.3.3 Evaluation on Mediamill
6.3.4 Evaluation on the Nonstationary Data Set
7 Discussion
In this work, we derived an online algorithm for CCA that can be implemented in a neural network with multicompartmental neurons and local, non-Hebbian learning rules. We also derived an extension that adaptively chooses the output rank and whitens the output. Remarkably, the neural architecture and non-Hebbian learning rules of our extension resembled neural circuitry and non-Hebbian plasticity in cortical pyramidal neurons. Thus, our neural network model may be useful for understanding the computational role of multicompartmental neurons with non-Hebbian plasticity.
While our neural network model captures salient features of cortical microcircuits, there are important biophysical properties that are not explained by our model. First, our model uses linear neurons to solve the linear CCA problem, which substantially limits its computational capabilities and is a major simplification of cortical pyramidal neurons that can perform nonlinear operations (Gidon et al., 2020). However, studying the analytically tractable and interpretable linear neural network model is useful for understanding more complex nonlinear models. Such an approach has proven successful for studying deep networks in the machine learning literature (Arora, Cohen, Golowich, & Hu, 2019). In future work, we plan to incorporate nonlinear neurons in our model.
Second, our neural network implementation requires the same number of interneurons and principal neurons, whereas in the cortex, there are approximately four times more pyramidal neurons than interneurons (Larkum et al., 1999). In our model, the interneurons decorrelate the output, and, in practice, the optimal fixed points of the algorithm can destabilize when there are fewer interneurons than principal neurons (see remark 3A in the supplementary material of Pehlevan & Chklovskii, 2015). In biological circuits, these instabilities could be mitigated by other biophysical constraints; however, a theoretical justification would require additional work.
Third, the output of our neural network is the equally weighted sum of the basal and apical projections. However, experimental evidence suggests that the pyramidal neurons integrate their apical and basal inputs asymmetrically (Larkum, Nevian, Sandler, Polsky, & Schiller, 2009; Larkum, 2013; Major, Larkum, & Schiller, 2013). In addition, in our model, the apical learning rule is non-Hebbian and depends on a calcium plateau potential that travels from the basal dendrites to the apical tuft. Experimental evidence for calcium plateau potential dependent plasticity is focused on the basal dendrites, with inconclusive evidence on the plasticity rules for the apical dendrites (Golding et al., 2002; Sjöström & Häusser, 2006).
To provide an alternative explanation of cortical computation, in a separate work (Golkar et al., 2020), we derive an online algorithm for the general supervised learning method reduced-rank regression, which includes CCA as a special case (see section B.2 for a detailed comparison of the two algorithms). The algorithm also maps onto a neural network with multicompartmental neurons and non-Hebbian plasticity in the basal dendrites. Both models adopt a normative approach in which the algorithms are derived from principled objective functions. This approach is highly instructive because the differences between the models highlight which features of the network are central to implementing an unsupervised learning method versus a supervised learning method.
There are three main differences between the biological interpretation of the two algorithms. First, the output of the network in Golkar et al. (2020) is the projection of the basal inputs, with no apical contribution. Second, the network in Golkar et al. (2020) allows for a range of apical synaptic update rules, including Hebbian updates. Third, the adaptive network derived here includes a threshold parameter, which adaptively sets the output dimension and is not included in Golkar et al. (2020). In our model, this parameter corresponds to the contribution of the somatic output to plasticity in the basal dendrites. These differences can be compared to experimental outcomes to provide evidence that cortical microcircuits implement unsupervised algorithms, supervised algorithms, or mixtures of both. Thus, we find it informative to put forth and contrast the two models.
Finally, we did not establish theoretical guarantees that our algorithms converge. As we show in appendix D, Offline-CCA and Bio-CCA can be viewed as gradient descent-ascent and stochastic gradient descent-ascent algorithms for solving a nonconvex-concave min-max problem. While gradient descent-ascent algorithms are natural methods for solving such min-max problems, they are not always guaranteed to converge to a desired solution. In fact, when the gradient descent step size is not sufficiently small relative to the gradient ascent step size, gradient descent-ascent algorithms for solving nonconvex-concave min-max problems can converge to limit cycles (Hommes & Ochea, 2012; Mertikopoulos, Papadimitriou, & Piliouras, 2018). Establishing local or global convergence and convergence rate guarantees for general gradient descent-ascent algorithms is an active area of research, and even recent advances (Lin, Jin, & Jordan, 2020) impose assumptions that are not satisfied in our setting. In appendix D, we discuss these challenges and place our algorithms within the broader context of gradient descent-ascent algorithms for solving nonconvex-concave min-max problems.
Appendix A: Sums of CCSPs as Principal Subspace Projections
Appendix B: Adaptive Bio-CCA with Output Whitening
B.1 Detailed Derivation of Algorithm 3
B.2 Comparison with Bio-RRR
In this section, we compare Adaptive Bio-CCA with output whitening (see algorithm 3) and Bio-RRR (Golkar et al., 2020, algorithm 2). We first state the Bio-RRR algorithm.3
| | Adaptive Bio-CCA | Bio-RRR |
| --- | --- | --- |
| Unsupervised/supervised | unsupervised | supervised |
| Whitened outputs | yes | |
| Adaptive output rank | yes | no |
B.3 Decoupling the Interneuron Synapses
The neural network for Adaptive Bio-CCA with output whitening derived in section 4 requires the pyramidal neuron-to-interneuron synaptic weight matrix to be the transpose of the interneuron-to-pyramidal neuron synaptic weight matrix. Enforcing this symmetry via a centralized mechanism is not biologically plausible. Rather, following Golkar et al. (2020, appendix D), we show that the symmetry between these two sets of weights naturally follows from the local learning rules.
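The mechanism can be illustrated with a toy simulation (a schematic of our own: the weight names, the placeholder activities, and the generic update form, postsynaptic activity times presynaptic activity transposed minus the current weight, are assumptions for illustration, not the exact rules of algorithm 3). Even when the two matrices are initialized independently, the difference between the feedforward matrix and the transpose of the feedback matrix contracts by a constant factor at every update and therefore decays exponentially.

```python
import numpy as np

rng = np.random.default_rng(0)
k, eta, T = 5, 0.05, 500

# Feedforward (pyramidal-to-interneuron) and feedback (interneuron-to-pyramidal)
# weights, initialized independently so that no symmetry is imposed at the start.
W_zn = rng.standard_normal((k, k))   # hypothetical name for the feedforward matrix
W_nz = rng.standard_normal((k, k))   # hypothetical name for the feedback matrix

for _ in range(T):
    z = rng.standard_normal(k)       # placeholder pyramidal neuron outputs
    n = rng.standard_normal(k)       # placeholder interneuron outputs
    # Schematic local updates of the form (post)(pre)^T - W for both synapse types.
    W_zn += eta * (np.outer(n, z) - W_zn)
    W_nz += eta * (np.outer(z, n) - W_nz)

# The asymmetry contracts by a factor (1 - eta) per update, so after T updates
# it is of order (1 - eta)**T times its initial size.
print(np.linalg.norm(W_zn - W_nz.T))
```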
Appendix C: Numerics
C.1 Experimental Details
C.1.1 Bio-CCA
C.1.2 MSG-CCA
We implemented the online algorithm stated in Arora et al. (2017). MSG-CCA requires a training set to estimate the covariance matrices of the two views. We provided the algorithm with 1000 samples to initially estimate the covariance matrices. Following Arora et al. (2017), we use a time-dependent learning rate.
C.1.3 Gen-Oja
We implemented the online algorithm stated in Bhatia et al. (2018). The algorithm includes two learning rates. As stated in Bhatia et al. (2018), Gen-Oja's performance is robust to changes in one of the learning rates but sensitive to changes in the other. Following Bhatia et al. (2018), we set the former to be constant and, to optimize over the latter, we used a time-dependent learning rate and performed a grid search over its parameters. The best-performing parameters are reported in Table 2.
C.1.4 Asymmetric CCA Network
We implemented the online multichannel CCA algorithm derived in Pehlevan et al. (2020). Following Pehlevan et al. (2020), we use a linearly decaying learning rate. To optimize the performance of the algorithm, we performed a grid search over the learning rate parameters. The best-performing parameters are reported in Table 2.
C.1.5 Bio-RRR
We implemented the online CCA algorithm derived in Golkar et al. (2020), with the hyperparameter setting that specializes it to CCA (see algorithm 4). The algorithm includes three learning rates. Following Golkar et al. (2020), we fixed two of these and used a time-dependent schedule for the remaining learning rate. We performed a grid search over the schedule parameters and list the best-performing parameters in Table 2.
C.1.6 Adaptive Bio-CCA with Output Whitening
We implemented algorithm 3. We initialized the three synaptic weight matrices to be random matrices with i.i.d. standard normal entries. To find the optimal hyperparameters, we performed a grid search; the best-performing parameters are reported in Table 2.
C.2 Orthonormality Constraints
C.2.1 Bio-CCA
C.2.2 Adaptive Bio-CCA with Output Whitening
Appendix D: On Convergence of the CCA Algorithms
Establishing theoretical guarantees for solving nonconvex-concave min-max problems of the form of equation D.1 via stochastic gradient descent-ascent is an active area of research (Razaviyayn et al., 2020). Borkar (1997, 2009) proved asymptotic convergence to the solution of the min-max problem for a two-timescale stochastic gradient descent-ascent algorithm in which the ratio between the learning rates for the minimization step and the maximization step depends on the iteration and converges to zero as the iteration number approaches infinity. Lin et al. (2020) established convergence rate guarantees for a stochastic gradient descent-ascent algorithm to an equilibrium point (not necessarily a solution of the min-max problem). Both results, however, impose assumptions that do not hold in our setting: that the partial derivatives of the objective are Lipschitz continuous and that the maximization variable is restricted to a bounded convex set.
Even proving local convergence properties is nontrivial. In a special case of the problem, Pehlevan et al. (2018) carefully analyzed the continuous dynamical system obtained by formally taking the step size to zero in equations D.3 and D.4. They computed an explicit condition, in terms of the eigenvalues of the input covariance matrices, under which solutions of the min-max problem, equation D.1, are the only linearly stable fixed points of the continuous dynamics. The general case is more complicated, and the approach in Pehlevan et al. (2018) does not readily extend. In ongoing work, we take a step toward understanding the asymptotics of our algorithms by analyzing local stability properties for a general class of gradient descent-ascent algorithms, which includes Offline-CCA and Bio-CCA as special cases.
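To make the sensitivity to the step-size ratio concrete, the following toy experiment (our own example; the objective is a simple quadratic min-max problem, not the CCA objective) runs simultaneous gradient descent-ascent: with a small descent step relative to the ascent step the iterates approach the saddle point, whereas with a large descent step they spiral away.

```python
import numpy as np

def gda(eta, gamma, steps=200):
    """Simultaneous gradient descent-ascent on f(x, y) = x * y - y**2 / 2,
    whose unique min-max solution is (0, 0).  eta is the descent step size
    and gamma is the ascent step size."""
    x, y = 1.0, 1.0
    for _ in range(steps):
        grad_x, grad_y = y, x - y        # df/dx and df/dy
        x, y = x - eta * grad_x, y + gamma * grad_y
    return np.hypot(x, y)                # distance from the solution (0, 0)

print(gda(eta=0.2, gamma=0.5))   # small step-size ratio: iterates converge
print(gda(eta=1.5, gamma=0.5))   # large step-size ratio: iterates spiral away
```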
Notes
This constraint differs slightly from the usual CCA whitening constraint; however, the two constraints are equivalent up to a scaling factor of 2.
Since the competing algorithms are not adaptive and need to have their output dimension set by hand, we do not include a competing algorithm for comparison.
In Golkar et al. (2020), one view provides the basal inputs and the other provides the apical inputs. Here we switch the roles of the inputs to be consistent with algorithm 3.
Acknowledgments
We thank Nati Srebro for drawing our attention to CCA, and we thank Tiberiu Tesileanu and Charles Windolf for their helpful feedback on an earlier draft of this manuscript.
References
Author notes
D.L., Y.B., and S.G. contributed equally.