Cortical neurons are predominantly excitatory and highly interconnected. In spite of this, the cortex is remarkably stable: normal brains do not exhibit the kind of runaway excitation one might expect of such a system. How does the cortex maintain stability in the face of this massive excitatory feedback? More importantly, how does it do so during computations, which necessarily involve elevated firing rates? Here we address these questions in the context of attractor networks—networks that exhibit multiple stable states, or memories. We find that such networks can be stabilized at the relatively low firing rates observed in vivo if two conditions are met: (1) the background state, where all neurons are firing at low rates, is inhibition dominated, and (2) the fraction of neurons involved in a memory is above some threshold, so that there is sufficient coupling between the memory neurons and the background. This allows “dynamical stabilization” of the attractors, meaning feedback from the pool of background neurons stabilizes what would otherwise be an unstable state. We suggest that dynamical stabilization may be a strategy used for a broad range of computations, not just those involving attractors.
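The stabilization idea above can be illustrated with a minimal two-population rate model (a sketch only, not the network analyzed in the paper; the threshold-linear transfer function, weights, and drives below are illustrative assumptions). The recurrent excitatory gain exceeds one, so the excitatory population alone would run away, but inhibitory feedback holds the network at a stable, low-rate fixed point:

```python
import numpy as np

def phi(x):
    # Threshold-linear transfer function (an illustrative choice)
    return np.maximum(x, 0.0)

# Hypothetical coupling strengths. Note w_ee > 1: by itself the excitatory
# population is unstable; feedback through the inhibitory population
# stabilizes it (the inhibition-dominated regime).
w_ee, w_ei = 1.6, 2.0      # onto E cells: from E, from I
w_ie, w_ii = 2.4, 1.8      # onto I cells: from E, from I
h_e, h_i = 10.0, 8.0       # constant external drive
tau_e, tau_i = 10.0, 5.0   # time constants (ms)
dt = 0.1                   # Euler step (ms)

r_e, r_i = 5.0, 5.0        # initial rates (Hz)
for _ in range(20000):     # integrate 2000 ms to steady state
    de = (-r_e + phi(w_ee * r_e - w_ei * r_i + h_e)) / tau_e
    di = (-r_i + phi(w_ie * r_e - w_ii * r_i + h_i)) / tau_i
    r_e += dt * de
    r_i += dt * di

# The network settles at low rates (roughly r_e ~ 3.8 Hz, r_i ~ 6.2 Hz
# for these parameters) despite the unstable excitatory subnetwork.
```

Removing the inhibitory feedback (e.g., clamping `r_i = 0`) makes `r_e` grow without bound, which is the runaway excitation the abstract refers to; the same feedback principle, applied by the background pool to memory neurons, is what the paper calls dynamical stabilization.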