1-2 of 2 results · Author: Thomas Voegtlin
Journal Articles
Publisher: Journals Gateway
Neural Computation (2009) 21 (6): 1749–1775.
Published: 01 June 2009
Figures: 10

Abstract
Predictive learning rules, where synaptic changes are driven by the difference between a random input and its reconstruction derived from internal variables, have proven to be very stable and efficient. However, it is not clear how such learning rules could take place in biological synapses. Here we propose an implementation that exploits the synchronization of neural activities within a recurrent network. In this framework, the asymmetric shape of spike-timing-dependent plasticity (STDP) can be interpreted as a self-stabilizing mechanism. Our results suggest a novel hypothesis concerning the computational role of neural synchrony and oscillations.
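The predictive update described in the abstract — synaptic change driven by the difference between the input and its reconstruction from internal variables — can be sketched with an Oja-style rule, one standard member of this family. The dimensions, learning rate, and data model below are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_steps, eta = 10, 5000, 0.01

# Assumed data model: inputs lie (noisily) along one direction u.
u = rng.normal(size=dim)
u /= np.linalg.norm(u)

w = 0.1 * rng.normal(size=dim)          # synaptic weights (internal model)
for _ in range(n_steps):
    x = u * rng.normal() + 0.05 * rng.normal(size=dim)  # random input
    y = w @ x                            # internal variable (unit activity)
    x_hat = y * w                        # reconstruction from the internal variable
    w += eta * y * (x - x_hat)           # update driven by reconstruction error

# The reconstruction-error term is self-stabilizing: w converges to a
# unit-norm vector aligned with the input's principal direction u.
```

The stabilizing role of the subtracted reconstruction here is the classical analogue of the self-stabilization the paper attributes to the asymmetric STDP window.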
Neural Computation (2009) 21 (1): 9–45.
Published: 01 January 2009
Figures: 8

Abstract
The main contribution of this letter is the derivation of a steepest gradient descent learning rule for a multilayer network of theta neurons, a one-dimensional nonlinear neuron model. Central to our model is the assumption that the intrinsic neuron dynamics are sufficient to achieve consistent time coding, with no need to involve the precise shape of postsynaptic currents; this assumption departs from other related models such as SpikeProp and Tempotron learning. Our results clearly show that it is possible to perform complex computations by applying supervised learning techniques to the spike times and time response properties of nonlinear integrate-and-fire neurons. Networks trained with our multilayer training rule are shown to have generalization abilities for spike latency pattern classification similar to those of Tempotron learning. The rule is also able to train networks to perform complex regression tasks that neither SpikeProp nor Tempotron learning appears capable of.
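The theta neuron underlying this letter is the standard one-dimensional phase model dθ/dt = (1 − cos θ) + (1 + cos θ)I, which emits a spike each time θ crosses π. A minimal Euler-integration sketch of these dynamics (the step size, drive current, and function name are illustrative; this is the neuron model only, not the paper's learning rule):

```python
import numpy as np

def theta_neuron_spikes(I, t_max=50.0, dt=0.001, theta0=-np.pi):
    """Euler-integrate the theta neuron d(theta)/dt = (1 - cos theta) + (1 + cos theta)*I
    under constant drive I, returning the spike times (phase crossings of pi)."""
    theta, spikes = theta0, []
    for step in range(int(t_max / dt)):
        theta += dt * ((1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * I)
        if theta >= np.pi:           # spike: phase passes pi
            spikes.append(step * dt)
            theta -= 2.0 * np.pi     # wrap the phase back
    return spikes
```

For constant suprathreshold drive I > 0 the model fires periodically with period π/√I (e.g. I = 1 gives an inter-spike interval of π), which is why its spike times are smooth functions of the input — the property the letter's gradient descent rule exploits.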