We investigate the complexity of logistic regression models, defined by counting the number of distinguishable distributions that the model can represent (Balasubramanian, 1997). We find that the complexity of logistic models with binary inputs depends not only on the number of parameters but also on the distribution of the inputs, in a nontrivial way that standard treatments of complexity do not address. In particular, we observe that correlations among inputs induce effective dependencies among parameters, thereby constraining the model and reducing its complexity. We derive simple upper and lower bounds on the complexity. Furthermore, we show analytically that defining the model parameters on a finite support, rather than on the entire real axis, decreases the complexity in a manner that depends critically on the size of the domain. Based on these findings, we propose a novel model selection criterion that takes into account the entropy of the input distribution. We test our proposal on the problem of selecting the input variables of a logistic regression model in a Bayesian model selection framework. In our numerical tests, we find that while the reconstruction errors of standard model selection approaches (AIC, BIC, ℓ1 regularization) depend strongly on the sparsity of the ground truth, the reconstruction error of our method remains close to the minimum across all conditions of sparsity, data size, and strength of input correlations. Finally, in a simple and mathematically tractable case with categorical rather than binary inputs, we observe that the contribution of the alphabet size to the complexity is very small compared to that of the parameter-space dimension. We further explore this issue by analyzing the data set of the “13 keys to the White House,” a method for forecasting the outcomes of US presidential elections.
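For context, a minimal sketch of the complexity notion invoked above: in Balasubramanian's (1997) framework, the complexity of a model with $d$ parameters fitted to $N$ observations is, asymptotically,

\[
\mathcal{C} \;=\; \frac{d}{2}\ln\frac{N}{2\pi} \;+\; \ln \int_{\Theta} \sqrt{\det J(\theta)}\,\mathrm{d}\theta ,
\]

where $J(\theta)$ is the Fisher information and the integral measures the volume of the parameter space $\Theta$ in units of statistical distinguishability, that is, the number of distinguishable distributions the model can represent (the symbols $d$, $N$, $\Theta$, and $J$ are our notation, not the article's). On this reading, correlations among inputs and a bounded parameter domain both shrink the effective volume of $\Theta$, and hence the complexity, consistent with the results summarized above.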
