Self-organization is a fundamental principle of the development and function of living systems. Decentralized self-assembly of neurons acting as autonomous agents produces complex neural networks in the brain without the need for a blueprint, i.e. without endpoint information. Key principles of the self-assembly of neural networks are (1) algorithmic growth based on limited input information, (2) reliance on iterations of simple rules that often utilize stochastic dynamic processes, and (3) non-deterministic variability, yet functional robustness, of the resulting network. Approaches to morphogenetic engineering of functionally robust computational networks through self-assembly may benefit from an understanding of these principles in biological systems. Extracting such principles depends on our ability to observe the self-assembly of neural networks at sufficient spatiotemporal resolution to aid data-driven computational modeling. Here, I present quantitative 4D microscopic video data and computational modeling of the self-assembly process of a neural network with more than a million synaptic connections in the fly visual system. Based on long-term imaging, we have previously extracted a set of self-assembly rules and engineered a deterministic computational model that recapitulates the network’s self-organization at the cellular (autonomous-agent) level. In a second step, we have now measured and modeled the underlying stochastic dynamics at the subcellular level. Our analyses indicate that stochastic dynamics of neuronal extensions are a prerequisite for flexible and robust self-assembly through algorithmic growth based on simple rules.
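The actual rule set and model are not reproduced here; purely as an illustration of the general principle described above — iterated simple local rules plus stochastic exploration yielding wiring that varies in detail but is robust in aggregate — consider the following minimal Python sketch. All names, parameters, and the capacity rule are invented for this toy example and are not the authors’ model:

```python
import random

def grow_network(n_neurons=20, n_targets=20, steps=200, seed=0):
    """Toy self-assembly: each growth step, one neuron extends a transient
    filopodium-like probe to a random target. The contact stabilizes into a
    synapse only if the target still has free capacity (a simple local rule,
    with no global blueprint or endpoint information)."""
    rng = random.Random(seed)
    capacity = {t: 3 for t in range(n_targets)}  # max stable contacts per target
    synapses = set()
    for _ in range(steps):
        neuron = rng.randrange(n_neurons)
        target = rng.randrange(n_targets)   # stochastic exploration
        if capacity[target] > 0 and (neuron, target) not in synapses:
            synapses.add((neuron, target))  # contact stabilizes
            capacity[target] -= 1
        # otherwise the probe simply retracts
    return synapses

# Different seeds yield different wiring diagrams (non-deterministic
# variability), yet every run respects the same capacity bound (robustness).
nets = [grow_network(seed=s) for s in range(5)]
sizes = [len(n) for n in nets]
```

Running this with several seeds shows the point of the abstract in miniature: the individual edge lists differ from run to run, but global properties constrained by the local rule (here, at most three contacts per target) hold in every run.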