The role of correlations in the activity of neural populations responding to a set of stimuli can be studied within an information theory framework. Regardless of whether one approaches the problem from an encoding or a decoding perspective, the main measures used to study the role of correlations can be derived from a common source: the expansion of the mutual information. Two main formalisms of mutual information expansion have been proposed: the series expansion and the exact breakdown. Here we clarify that these two formalisms represent autocorrelations differently, so that even when their estimates of the total information differ by less than 1%, individual terms can diverge. More precisely, the series expansion explicitly evaluates the informational contribution of autocorrelations in the spike counts, that is, count autocorrelations, whereas the exact breakdown does not. We propose a new formalism of mutual information expansion, the Poisson exact breakdown, which introduces Poisson equivalents in order to explicitly evaluate the informational contribution of count autocorrelations with no approximation involved. Because several widely employed manipulations of spike trains, most notably binning and pooling, alter the structure of count autocorrelations, the new formalism can provide a useful general framework for studying the role of correlations in population codes.