Abstract
Biophysical modeling studies have suggested that neurons with active dendrites can be viewed as linear units augmented by product terms arising from interactions between synaptic inputs within the same dendritic subregions. However, the degree to which local nonlinear synaptic interactions could augment the memory capacity of a neuron has not been quantified. To approach this question, we have studied the family of subsampled quadratic (SQ) classifiers: linear classifiers augmented by the best k terms from the set of K = (d^2 + d)/2 second-order product terms available in d dimensions. We developed an expression for the total parameter entropy, whose form shows that the capacity of an SQ classifier does not reside solely in its conventional weight values, that is, the explicit memory used to store constant, linear, and higher-order coefficients. Rather, we identify a second type of parameter flexibility that jointly contributes to an SQ classifier's capacity: the choice of which product terms are included in the model and which are not. We validate the form of the entropy expression using empirical studies of relative capacity within families of geometrically isomorphic SQ classifiers. Our results have direct implications for neurobiological (and other hardware) learning systems, where, in the limit of high-dimensional input spaces and low-resolution synaptic weight values, this relatively little-explored form of choice flexibility could constitute a major source of trainable model capacity.
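To make the construction concrete, the sketch below (illustrative only, not taken from the paper) enumerates the K = (d^2 + d)/2 candidate product terms for a d-dimensional input and computes the log2 C(K, k) bits of "choice" entropy contributed by selecting which k terms to include, under the simplifying assumption that all C(K, k) subsets are a priori equally likely. The values d = 100 and k = 50, and the function names, are hypothetical.

```python
import math
from itertools import combinations_with_replacement

def candidate_pairs(d: int):
    """All index pairs (i, j) with i <= j, one per distinct second-order
    product term x_i * x_j; there are K = (d^2 + d)/2 of them."""
    return list(combinations_with_replacement(range(d), 2))

def selection_entropy_bits(K: int, k: int) -> float:
    """Bits of entropy in the choice of which k of K product terms to
    include, assuming a uniform prior over subsets (an illustrative
    assumption, not the paper's derivation)."""
    return math.log2(math.comb(K, k))

def sq_features(x, selected_pairs):
    """Feature vector of an SQ classifier: the raw inputs plus only the
    selected product terms."""
    return list(x) + [x[i] * x[j] for (i, j) in selected_pairs]

d, k = 100, 50                    # hypothetical dimension and term budget
K = len(candidate_pairs(d))      # K = (d^2 + d)/2 = 5050 for d = 100
print(f"K = {K}, choice entropy = {selection_entropy_bits(K, k):.1f} bits")
```

Even in this toy setting, the selection entropy is large relative to what a few low-resolution weights can store, which is the intuition behind treating term choice as a distinct source of capacity.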