Temporal derivatives are computed by a wide variety of neural circuits, but the problem of performing this computation accurately has received little theoretical study. Here we systematically compare the performance of diverse networks that calculate derivatives using cell-intrinsic adaptation and synaptic depression dynamics, feedforward network dynamics, and recurrent network dynamics. Examples of each type of network are compared by quantifying the errors they introduce into the calculation and their rejection of high-frequency input noise. This comparison is based on both analytical methods and numerical simulations with spiking leaky integrate-and-fire (LIF) neurons. Both adapting and feedforward-network circuits provide good performance for signals with frequency bands that are well matched to the time constants of adaptation and postsynaptic current decay, respectively. The synaptic depression circuit performs similarly to the adaptation circuit, although strictly speaking, precisely linear differentiation based on synaptic depression is not possible, because depression scales synaptic weights multiplicatively. Feedback circuits introduce greater errors than functionally equivalent feedforward circuits, but they have the useful property that their dynamics are determined by feedback strength. For this reason, these circuits are better suited for calculating the derivatives of signals that evolve on timescales outside the range of membrane dynamics and, possibly, for providing the wide range of timescales needed for precise fractional-order differentiation.
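The feedforward scheme summarized above can be illustrated with a minimal rate-level sketch (not the paper's spiking LIF simulations): subtracting a low-pass-filtered copy of a signal from the signal itself yields an output proportional to the temporal derivative, provided the signal's frequency content is slow relative to the filter time constant, here standing in for postsynaptic current decay. All names, the Euler integration scheme, and the 5 ms time constant are illustrative assumptions, not values from the study.

```python
import numpy as np

def lowpass(x, dt, tau):
    """First-order low-pass filter (exponential PSC-like kernel), Euler-integrated."""
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + dt * (x[i - 1] - y[i - 1]) / tau
    return y

def feedforward_derivative(x, dt, tau):
    """Approximate dx/dt as (x - lowpass(x)) / tau.

    The transfer function is i*w / (1 + i*w*tau), so for w*tau << 1
    this approaches the ideal differentiator i*w, while attenuating
    high-frequency input noise (gain saturates at 1/tau).
    """
    return (x - lowpass(x, dt, tau)) / tau

# Demo: differentiate a slow sinusoid (2 Hz signal, 5 ms time constant).
dt, tau = 1e-4, 5e-3
t = np.arange(0.0, 1.0, dt)
f = 2.0
x = np.sin(2 * np.pi * f * t)
est = feedforward_derivative(x, dt, tau)
true = 2 * np.pi * f * np.cos(2 * np.pi * f * t)
```

Because w*tau is about 0.06 here, the estimate tracks the analytical derivative to within a few percent after the initial filter transient; mismatching the frequency band to tau degrades the approximation, consistent with the abstract's point about matched timescales.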
