Table A1:
Statistics of Quadratic Fit to (Average) $\ell_1$-Norm of Error (Top) and Hessian-Weighted Error (Bottom) as a Function of Heterogeneity for Stimulus Filter $k$, Postspike Filter $h$, and Constant $b$.
$\ell_1$-norm of error (top):

| Heterogeneity | Statistic | $N=2$ | $N=3$ | $N=4$ | $N=5$ | $N=6$ | $N=7$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $k$ | $R^2$ | 0.0293 | 0.0140 | 0.0347 | 0.0534 | 0.0725 | 0.0974 |
| $k$ | $p$-value, F vs. constant | $3.9\times10^{-42}$ | $1.8\times10^{-20}$ | $7.1\times10^{-50}$ | $4.9\times10^{-77}$ | $2.2\times10^{-105}$ | $3.9\times10^{-143}$ |
| $h$ | $R^2$ | 0.0103 | 0.01794 | 0.0606 | 0.0382 | 0.0530 | 0.0888 |
| $h$ | $p$-value, F vs. constant | $3.6\times10^{-15}$ | $6.7\times10^{-26}$ | $1.4\times10^{-87}$ | $6.7\times10^{-55}$ | $2.3\times10^{-76}$ | $5.2\times10^{-130}$ |
| $b$ | $R^2$ | 0.0141 | 0.0164 | 0.0357 | 0.0577 | 0.0620 | 0.0918 |
| $b$ | $p$-value, F vs. constant | $1.6\times10^{-20}$ | $9.1\times10^{-24}$ | $3.3\times10^{-51}$ | $2.1\times10^{-83}$ | $1.2\times10^{-89}$ | $1.3\times10^{-134}$ |

Hessian-weighted error (bottom):

| Heterogeneity | Statistic | $N=2$ | $N=3$ | $N=4$ | $N=5$ | $N=6$ | $N=7$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $k$ | $R^2$ | 0.0068 | 0.0019 | 0.0060 | 0.0191 | 0.0315 | 0.0401 |
| $k$ | $p$-value, F vs. constant | $3.6\times10^{-10}$ | 0.0022 | $4.5\times10^{-9}$ | $1.5\times10^{-27}$ | $3.0\times10^{-45}$ | $1.6\times10^{-57}$ |
| $h$ | $R^2$ | 0.0014 | 0.0020 | 0.0101 | 0.0171 | 0.0181 | 0.0356 |
| $h$ | $p$-value, F vs. constant | 0.0032 | $3.8\times10^{-4}$ | $7.2\times10^{-15}$ | $1.2\times10^{-24}$ | $4.3\times10^{-26}$ | $4.5\times10^{-51}$ |
| $b$ | $R^2$ | 0.0029 | 0.0047 | 0.0062 | 0.0222 | 0.0290 | 0.0329 |
| $b$ | $p$-value, F vs. constant | $8.3\times10^{-5}$ | $2.5\times10^{-7}$ | $2.4\times10^{-9}$ | $6.5\times10^{-32}$ | $1.4\times10^{-41}$ | $3.2\times10^{-47}$ |

Note: The numbers in bold correspond to quadratic fits where the optimal level of heterogeneity is not in the interior.
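The $R^2$ and $p$-values in Tables A1 and A2 come from comparing a quadratic regression to a constant (intercept-only) model with an F-test. A minimal sketch of that computation on synthetic data (the function and variable names are illustrative, not from the paper's code):

```python
import numpy as np
from scipy import stats

def quadratic_vs_constant(x, y):
    """R^2 of a quadratic fit and the p-value of an F-test
    against the constant (intercept-only) model."""
    n = len(y)
    coeffs = np.polyfit(x, y, 2)                 # fit y ~ c2*x^2 + c1*x + c0
    ss_res = np.sum((y - np.polyval(coeffs, x)) ** 2)  # quadratic residual SS
    ss_tot = np.sum((y - y.mean()) ** 2)               # constant-model residual SS
    r2 = 1.0 - ss_res / ss_tot
    df1, df2 = 2, n - 3          # 2 extra parameters; n - 3 residual dof
    F = ((ss_tot - ss_res) / df1) / (ss_res / df2)
    return r2, stats.f.sf(F, df1, df2)

# Synthetic example: an error curve with an interior optimum plus noise.
rng = np.random.default_rng(0)
het = rng.uniform(0.0, 1.0, 500)                 # heterogeneity levels
err = (het - 0.5) ** 2 + 0.05 * rng.standard_normal(500)
r2, p = quadratic_vs_constant(het, err)
```

A small $R^2$ with a tiny $p$-value, as in the tables, means the quadratic trend is weak relative to the scatter but still far beyond what a constant model explains.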

Table A2:
Statistics of Quadratic Fit to Pearson's Correlation (with Prewhitening) as a Function of Heterogeneity for Stimulus Filter $k$, Postspike Filter $h$, and Constant $b$.
| Heterogeneity | Statistic | $N=2$ | $N=3$ | $N=4$ | $N=5$ | $N=6$ | $N=7$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $k$ | $R^2$ | 0.0018 | 0.0123 | 0.0076 | 0.0149 | 0.0212 | 0.0357 |
| $k$ | $p$-value, F vs. constant | 0.003 | $6.0\times10^{-18}$ | $2.2\times10^{-11}$ | $1.4\times10^{-21}$ | $1.6\times10^{-30}$ | $2.9\times10^{-51}$ |
| $h$ | $R^2$ | 0.0199 | 0.0159 | 0.0166 | 0.0156 | 0.0232 | 0.0356 |
| $h$ | $p$-value, F vs. constant | $1.1\times10^{-28}$ | $4.6\times10^{-23}$ | $5.3\times10^{-24}$ | $1.4\times10^{-22}$ | $2.4\times10^{-33}$ | $4.4\times10^{-51}$ |
| $b$ | $R^2$ | 0.0008 | 0.0027 | 0.0032 | 0.0048 | 0.0160 | 0.0251 |
| $b$ | $p$-value, F vs. constant | 0.0237 | $1.8\times10^{-4}$ | $3.9\times10^{-5}$ | $1.9\times10^{-7}$ | $4.3\times10^{-23}$ | $4.0\times10^{-36}$ |

Note: The numbers in bold correspond to quadratic fits where the optimal level of heterogeneity is not in the interior.
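Prewhitening removes the serial correlation that would otherwise inflate the apparent significance of Pearson's correlation between decoded and true stimuli. A simplified sketch, assuming an AR($p$) whitening filter fit to the once-differenced reference series and applied to both series (the paper uses full ARIMA models such as ARIMA(6,1,3); the function below is illustrative and omits the MA terms):

```python
import numpy as np
from scipy import stats

def prewhiten_corr(x, y, p=6):
    """Pearson correlation after AR(p) prewhitening of the
    once-differenced series; the filter fit to x is applied to y too."""
    dx = np.diff(np.asarray(x, dtype=float))   # difference once (the "I" part)
    dy = np.diff(np.asarray(y, dtype=float))
    # Least-squares AR(p) fit: dx[t] ~ sum_k a[k] * dx[t-k-1]
    X = np.column_stack([dx[p - k - 1 : dx.size - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, dx[p:], rcond=None)
    ex = dx[p:] - X @ a                        # whitened reference series
    Y = np.column_stack([dy[p - k - 1 : dy.size - k - 1] for k in range(p)])
    ey = dy[p:] - Y @ a                        # same filter applied to y
    return stats.pearsonr(ex, ey)

# Two random walks driven by a shared innovation remain correlated
# after differencing and whitening.
rng = np.random.default_rng(1)
shared = rng.standard_normal(2000)
x = np.cumsum(shared + 0.5 * rng.standard_normal(2000))
y = np.cumsum(shared + 0.5 * rng.standard_normal(2000))
r, pval = prewhiten_corr(x, y)
```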

Figure A1:

(A) For the raw stimulus sampled at $0.5\,\mathrm{ms}$ and then differenced once to ensure stationarity, the autocorrelation function (ACF, top) shows an autoregressive process on the stimulus values, while the partial autocorrelation function (PACF, bottom) indicates a binary oscillatory pattern with some autocorrelation on the moving average. (B) Similar to panel A but for a once-differenced stimulus segment chosen randomly, $y_t = x_t - x_{t-1}$, of length $200\,\mathrm{ms}$. For prewhitening, the chosen model was ARIMA(6,1,3), with ARIMA(8,1,4) and ARIMA(4,1,2) used as fallbacks in cases where the initial choice could not be fit. (C) We systematically determined the length of the stimulus filter $k$ and the lag of the last postspike basis vector by considering 168 pairs of these values for a network with all 7 cells. We chose the pair that gave the smallest summed negative log likelihood (i.e., the largest maximum likelihood) after fitting to 10 segments of time, each of length approximately $2\,\mathrm{s}$. The solid magenta oval indicates our choice (a $180\,\mathrm{ms}$ peak for the last basis vector of $h$ corresponds to a total lag of $240\,\mathrm{ms}$; see Figure 1B); the magenta oval with dashed outline had a similar log likelihood but required more computational resources.
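The ACF diagnostic in panel A can be reproduced from the differenced stimulus with a few lines of NumPy; a minimal sketch (the function name is ours, not from the paper's code):

```python
import numpy as np

def sample_acf(x, nlags):
    """Sample autocorrelation of a once-differenced series,
    normalized so that the lag-0 value equals 1."""
    d = np.diff(np.asarray(x, dtype=float))  # difference once for stationarity
    d = d - d.mean()
    c = np.correlate(d, d, mode="full")[d.size - 1 :]  # nonnegative lags only
    return c[: nlags + 1] / c[0]

# For a random-walk stimulus, the differenced series is white noise,
# so the ACF should drop to near zero after lag 0.
rng = np.random.default_rng(2)
stim = np.cumsum(rng.standard_normal(5000))
r = sample_acf(stim, 10)
```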

Figure A2:

The $\ell_1$-norm of the error as a function of all three forms of heterogeneity: stimulus filter (left column), postspike filter (middle column), and bias heterogeneity (right column). The network sizes are $N=2$ (top row), $N=3$ (middle row), and $N=4$ (bottom row); we ensured $N$ distinct cells were selected for each network. See section 4 for how random samples were generated. In all cases, a quadratic regression fit shows an optimal level of heterogeneity in the domain. For postspike filter heterogeneity with $N=3,4$, the optimal levels occur at smaller values of heterogeneity.
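Averaged over time bins, the $\ell_1$-norm of the decoding error reduces to a mean absolute error; a minimal sketch (names are illustrative):

```python
import numpy as np

def avg_l1_error(decoded, target):
    """Average l1-norm of the decoding error across time bins
    (for a scalar stimulus, the mean absolute error)."""
    err = np.abs(np.asarray(decoded, dtype=float) - np.asarray(target, dtype=float))
    return float(err.mean())

mae = avg_l1_error([1.0, 2.0, 3.0], [1.0, 1.0, 5.0])  # (0 + 1 + 2) / 3
```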

Figure A3:

Similar to Figure 8 but with the remaining network sizes ($N=5,6,7$). The $\ell_1$-norm of the error as a function of all three forms of heterogeneity: stimulus filter (left column), postspike filter (middle column), and bias heterogeneity (right column). The network sizes are $N=5$ (top row), $N=6$ (middle row), and $N=7$ (bottom row); we ensured $N$ distinct cells were selected for each network. See section 4 for how random samples were generated. In all cases, a quadratic regression fit shows an optimal level of heterogeneity in the domain.

Figure A4:

The Hessian-weighted error as a function of all three forms of heterogeneity: stimulus filter (left column), postspike filter (middle column), and bias heterogeneity (right column). The network sizes are $N=2$ (top row), $N=3$ (middle row), and $N=4$ (bottom row); we ensured $N$ distinct cells were selected for each network. See section 4 for how random samples were generated. In almost all cases, a quadratic regression fit shows an optimal level of heterogeneity in the domain; the exceptions are postspike filter heterogeneity for $N=2,3,4$ (middle column).
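A Hessian-weighted error scores the error vector through a quadratic form, so directions in which the likelihood is more sensitive are penalized more heavily. A generic sketch of such a quadratic form (the exact weighting matrix used here is defined in the main text; this function is illustrative only):

```python
import numpy as np

def hessian_weighted_error(err, H):
    """Quadratic form err^T H err with a symmetric positive-definite
    weight matrix H (e.g., a Hessian of the negative log likelihood)."""
    err = np.asarray(err, dtype=float)
    H = np.asarray(H, dtype=float)
    return float(err @ H @ err)

val = hessian_weighted_error([1.0, 2.0], 2.0 * np.eye(2))  # 2*1 + 2*4 = 10
```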

Figure A5:

Similar to Figure A4 but with the remaining network sizes ($N=5,6,7$). The Hessian-weighted error as a function of all three forms of heterogeneity: stimulus filter (left column), postspike filter (middle column), and bias heterogeneity (right column). The network sizes are $N=5$ (top row), $N=6$ (middle row), and $N=7$ (bottom row); we ensured $N$ distinct cells were selected for each network. See section 4 for how random samples were generated. In all but one case, a quadratic regression fit shows an optimal level of heterogeneity in the domain; the exception is postspike filter heterogeneity for $N=5$ (middle column, top row).

Figure A6:

When using Pearson's correlation after prewhitening as the measure of decoding accuracy, there is an optimal intermediate level of heterogeneity for all heterogeneity types and $N=2,3,4$, with one exception: bias heterogeneity for $N=2$ (right column, top row).

Figure A7:

Similar to Figure A6 but with larger network sizes ($N=5,6,7$). There is an optimal level of heterogeneity in the domain in all cases except for postspike filter heterogeneity with $N=5$ (middle column, top row).
