Figure 5: Intermediate ConvRNN circuits with the highest OST consistency conserve on network size while maintaining task performance. Across all models considered, the intermediate ConvRNNs (denoted by ×) that attain high categorization performance (x-axis) while maintaining a low unit count (panel B), rather than a low parameter count (panel A), for their given performance level achieve the highest mean OST consistency (Spearman correlation with the IT population OST, averaged across N=10 train/test splits). The color bar indicates this mean OST consistency (monotonically increasing from purple to red), binned into six equal ranges. Models with a larger network size at a fixed performance level are less consistent with primate object recognition behavior (e.g., deep feedforward models, denoted by boxes), with recurrence mediating a trade-off between network size and task performance.
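The mean OST consistency reported in the caption is a Spearman rank correlation between model and IT object solution times, averaged over train/test splits. The sketch below illustrates that computation under stated assumptions; the function name `mean_ost_consistency`, the array arguments, and the half/half random split procedure are hypothetical placeholders, and the paper's actual pipeline (e.g., any reliability correction or split definition) may differ.

```python
import numpy as np
from scipy.stats import spearmanr

def mean_ost_consistency(model_ost, it_ost, n_splits=10, seed=0):
    """Sketch: Spearman correlation between model and IT object solution
    times (OST), averaged across random test splits of the image set.
    Names and split scheme are illustrative assumptions, not the paper's code."""
    rng = np.random.default_rng(seed)
    n_images = len(it_ost)
    scores = []
    for _ in range(n_splits):
        # Hold out a random half of the images as this split's test set.
        test_idx = rng.choice(n_images, size=n_images // 2, replace=False)
        rho, _ = spearmanr(model_ost[test_idx], it_ost[test_idx])
        scores.append(rho)
    return float(np.mean(scores))
```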