Spatiotemporal connectionist networks (STCNs) comprise an important class of neural models that can deal with patterns distributed in both time and space. In this article, we widen the application domain of the taxonomy for supervised STCNs recently proposed by Kremer (2001) to the unsupervised case. This is made possible by reinterpreting the state vector as a vector of latent (hidden) variables, as proposed by Meinicke (2000). The goal of this generalized taxonomy is to provide a nonlinear generative framework for describing unsupervised spatiotemporal networks, making it easier to compare and contrast their representational and operational characteristics. Computational properties, representational issues, and learning are also discussed, and a number of references to the relevant source publications are provided. It is argued that the proposed approach is simpler and more powerful than previous attempts, from both a descriptive and a predictive viewpoint. We also discuss the relation of this taxonomy to automata theory and state-space modeling and suggest directions for further work.