Donald Hebb proposed in his 1949 book The Organization of Behavior that cell assemblies organized by temporally asymmetric excitation form the basis of cognition. This idea has inspired a large body of research in neuroscience and, to a lesser extent, in artificial intelligence. The modern manifestation of Hebb's principle is spike-timing-dependent plasticity (STDP), yet despite a large body of experimental work on STDP, there is still little understanding of how networks of spiking neurons organize themselves into complex functional circuits, although some progress has been made with models such as Liquid State Machines. Networks popular in artificial intelligence (e.g. MLPs) and in artificial life (e.g. CTRNNs) tend to eschew Hebb's insight, relying instead on error backpropagation by gradient descent in the case of AI, or on an atemporal Hebbian learning rule based on the outer product of neural activities in the case of AL. Both approaches offer greater interpretability than spiking neural networks (SNNs), but both lack the mechanism that Hebb claimed was fundamental to cognition. This paper proposes complex-valued neurons (CVNs) to address this limitation, simultaneously promoting biological interpretation and computational tractability. A CVN encodes the firing rate and spike time of a spiking neuron in the magnitude and angle, respectively, of a complex number. We also introduce an unsupervised piecewise-linear STDP learning rule compatible with CVNs, which for brevity we call complex-valued STDP (CVSTDP). We demonstrate both learning through error backpropagation and the spontaneous formation and dissolution of cell assemblies via the CVSTDP rule.
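The rate-and-phase encoding described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the choice of an oscillation period, and the mapping of spike time to phase within that period are assumptions introduced here for clarity.

```python
import cmath

# Illustrative sketch (not the paper's code): a spiking neuron's firing
# rate becomes the magnitude of a complex activation, and its spike time
# becomes the angle, expressed as a phase within one assumed period.
def encode_neuron(rate, spike_time, period=1.0):
    """Pack (rate, spike_time) into one complex number."""
    phase = 2 * cmath.pi * (spike_time % period) / period
    return rate * cmath.exp(1j * phase)

def decode_neuron(z, period=1.0):
    """Recover (rate, spike_time) from the complex activation."""
    rate = abs(z)
    spike_time = (cmath.phase(z) % (2 * cmath.pi)) * period / (2 * cmath.pi)
    return rate, spike_time

# Round trip: a neuron firing at 5 Hz that spikes a quarter of the way
# through the period maps to the complex number 5j and back.
z = encode_neuron(5.0, 0.25)
rate, t = decode_neuron(z)
```

Because magnitude and angle are independent degrees of freedom, such a representation can in principle carry both a rate code and a timing code through the same real-valued machinery used for ordinary network weights.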