Bio-RRR implements a supervised learning method for minimizing reconstruction error, with the parameter $0 \le s \le 1$ specifying the norm under which the error is measured (see Golkar et al., 2020, section 3, for details). Importantly, when $s = 1$, algorithm 4 implements a supervised version of CCA. Setting $\alpha = 1$ in algorithm 3 and $s = 1$ in algorithm 4, the algorithms have identical network architectures and synaptic update rules, namely:
$$
W_x \leftarrow W_x + \eta_x (z_t - a_t)\, x_t^\top, \qquad
W_y \leftarrow W_y + \eta_y\, c_t^a\, y_t^\top, \qquad
P \leftarrow P + \eta_p \left( z_t n_t^\top - P \right),
$$
where we recall that $c_t^a := a_t - P n_t$. For this parameter choice, the main difference between the algorithms is that Adaptive Bio-CCA with output whitening is an unsupervised learning algorithm, whereas Bio-RRR is a supervised learning algorithm, which is reflected in their outputs $z_t$: the output of algorithm 3 is the whitened sum of the basal dendritic currents and the apical calcium plateau potential, that is, $z_t = b_t + c_t^a$, whereas the output of (Golkar et al., 2020, algorithm 2) is the (whitened) CCSP of the basal inputs, $z_t = b_t$. In other words, the apical inputs do not directly contribute to the output of the network in Golkar et al. (2020), only indirectly via plasticity in the basal dendrites. Experimental evidence suggests that apical calcium plateau potentials contribute significantly to the outputs of pyramidal cells, which supports the model derived here. Furthermore, the model in this work allows one to adjust the parameter $\alpha$ to adaptively set the output rank, which is important for analyzing non-stationary input streams. In Table 1 we summarize the differences between the two algorithms.
Table 1: Comparison of Adaptive Bio-CCA with Output Whitening and Bio-RRR.

                           Adaptive Bio-CCA    Bio-RRR
Unsupervised/supervised    unsupervised        supervised
Whitened outputs           ✓                   ✓
Adaptive output rank       ✓                   ×
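To make the shared update rules concrete, the following is a minimal NumPy sketch of one online step. It is illustrative only: the function name, dimensions, and learning rates are assumptions, and $b_t$ is simplified here to the raw basal current $W_x x_t$, with the whitening and interneuron dynamics that would produce $n_t$ supplied externally rather than modeled.

    import numpy as np


    def shared_update(Wx, Wy, P, x_t, y_t, n_t,
                      eta_x=1e-3, eta_y=1e-3, eta_p=1e-3,
                      supervised_output=False):
        """One online step with the synaptic updates shared by both algorithms.

        supervised_output=False mimics Adaptive Bio-CCA with output whitening
        (z_t = b_t + c_t^a); supervised_output=True mimics Bio-RRR (z_t = b_t).
        The whitening/interneuron dynamics behind b_t and n_t are omitted
        (assumption of this sketch).
        """
        b_t = Wx @ x_t               # basal dendritic current (whitening omitted)
        a_t = Wy @ y_t               # apical dendritic current
        c_t = a_t - P @ n_t          # apical calcium plateau potential, c_t^a
        z_t = b_t if supervised_output else b_t + c_t

        # Synaptic update rules common to both algorithms
        Wx = Wx + eta_x * np.outer(z_t - a_t, x_t)
        Wy = Wy + eta_y * np.outer(c_t, y_t)
        P = P + eta_p * (np.outer(z_t, n_t) - P)
        return z_t, Wx, Wy, P


    # Example usage with arbitrary dimensions and random inputs (illustrative)
    rng = np.random.default_rng(0)
    k, dx, dy = 3, 10, 8
    Wx = 0.1 * rng.standard_normal((k, dx))
    Wy = 0.1 * rng.standard_normal((k, dy))
    P = np.eye(k)
    z_t, Wx, Wy, P = shared_update(Wx, Wy, P,
                                   rng.standard_normal(dx),
                                   rng.standard_normal(dy),
                                   rng.standard_normal(k))

Note that the supervised_output flag only switches the definition of the output $z_t$; the three synaptic updates are identical in both settings, mirroring the comparison in Table 1.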