It has recently been demonstrated that a Hopfield neural network that learns its own attractor configurations, for instance by repeatedly resetting the network to an arbitrary state and applying Hebbian learning after convergence, can form an associative memory of its own attractors and thereby facilitate future convergence onto better attractors. This process of structural self-optimization has so far been demonstrated only on relatively small artificial neural networks with random or highly regular, constrained topologies, and it remains an open question to what extent it generalizes to more biologically realistic topologies. In this work, we therefore test the process by running it on the connectome of the widely studied nematode worm C. elegans, the only organism whose nervous system has been mapped in its entirety. Our results demonstrate, for the first time, that the self-optimization process generalizes to larger, biologically plausible networks. We conclude by speculating that the reset-convergence mechanism could have a biological analog in the sleep-wake cycle of C. elegans.
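The reset-convergence-learning cycle described above can be sketched in a few lines of code. The following is a minimal illustration only, assuming standard binary Hopfield dynamics with asynchronous updates and a symmetric weight matrix; the network size n, learning rate alpha, number of resets, and the random matrix standing in for connectome-derived weights are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100          # network size (illustrative assumption)
alpha = 0.0005   # Hebbian learning rate (illustrative assumption)

# Stand-in for a fixed constraint network (e.g. connectome-derived weights):
W0 = rng.standard_normal((n, n))
W0 = (W0 + W0.T) / 2          # symmetric weights guarantee energy descent
np.fill_diagonal(W0, 0)
W = W0.copy()                 # copy that Hebbian learning will modify

def converge(W, s, max_sweeps=1000):
    """Asynchronous Hopfield updates until no unit flips in a full sweep."""
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(s)):
            new = 1 if W[i] @ s >= 0 else -1
            if new != s[i]:
                s[i], changed = new, True
        if not changed:
            break
    return s

def energy(W, s):
    """Standard Hopfield energy; lower means a better attractor."""
    return -0.5 * s @ W @ s

for reset in range(100):
    s = rng.choice([-1, 1], size=n)   # reset to an arbitrary state
    s = converge(W, s)                # relax onto an attractor
    W += alpha * np.outer(s, s)       # Hebbian reinforcement of that attractor
    np.fill_diagonal(W, 0)
    # Attractor quality is judged against the original weights W0,
    # since learning only reshapes the dynamics, not the constraints:
    if reset % 10 == 0:
        print(reset, energy(W0, s))
```

Over repeated resets, the Hebbian updates enlarge the basins of frequently visited attractors, so later convergences tend to reach lower-energy configurations of the original weight matrix, which is the self-optimization effect the abstract refers to.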
