Figure 4:
SNNs can have both implicit and explicit recurrence. Schematic of the computational graph of a single SNN layer composed of leaky integrate-and-fire (LIF) neurons (see section 4). Input spike trains S(0) enter at the bottom and affect the synaptic current variable I(1) through the feedforward weights W(1). Time flows from left to right. Any link that connects temporally adjacent nodes in the graph constitutes a form of recurrence in the computation. The recurrent synaptic connections V(1) contribute explicit recurrence to the graph. Implicit recurrence arises, for instance, from the decay of the synaptic current variables and the membrane potentials U(1). Additionally, the spike reset contributes another form of implicit recurrence by coupling future states to the output spike train S(1). Recurrences involving the surrogate derivative (e.g., the reset) depend on both the shape and the scale of the chosen surrogate and can substantially alter the surrogate gradient.
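
The graph structure described in the caption can be made concrete with a minimal discrete-time simulation. The sketch below is illustrative rather than the paper's implementation: the decay factors alpha and beta, the unit threshold, the reset-by-subtraction rule, and the variable names are assumptions chosen to mirror the caption's W(1), V(1), S(0), I(1), U(1), and S(1).

```python
import numpy as np

# Minimal sketch of one discrete-time LIF layer (hypothetical parameters;
# assumes decay factors alpha/beta, unit threshold, reset by subtraction).
rng = np.random.default_rng(0)

T, n_in, n_out = 100, 50, 20            # time steps, input units, layer units
alpha, beta = 0.9, 0.85                 # synaptic and membrane decay factors
W = rng.normal(0, 0.1, (n_out, n_in))   # feedforward weights W(1)
V = rng.normal(0, 0.1, (n_out, n_out))  # recurrent weights V(1): explicit recurrence
S0 = (rng.random((T, n_in)) < 0.1).astype(float)  # input spike trains S(0)

I = np.zeros(n_out)                     # synaptic current I(1)
U = np.zeros(n_out)                     # membrane potential U(1)
S1 = np.zeros((T, n_out))               # output spike trains S(1)

for t in range(1, T):
    # Explicit recurrence: V couples the previous output spikes back into I.
    I = alpha * I + W @ S0[t] + V @ S1[t - 1]
    # Implicit recurrence: the decay terms (alpha, beta) link adjacent time
    # steps, and the reset subtracts the previous spikes from the potential.
    U = beta * U + I - S1[t - 1]
    S1[t] = (U > 1.0).astype(float)     # spike when U crosses the threshold
```

In the backward pass through this graph, every link that leaves S1[t-1] (the V term and the reset term) carries a factor of the surrogate derivative, which is why, as the caption notes, the shape and scale of the chosen surrogate can substantially alter the resulting gradient.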
