Search results for John G. Taylor (1-2 of 2)
Journal Articles
Publisher: Journals Gateway
Neural Computation (1997) 9 (5): 985–1000.
Published: 01 July 1997
Abstract
Partial reset is a simple and powerful tool for controlling the irregularity of spike trains fired by a leaky integrator neuron model with random inputs. In particular, a single neuron model with a realistic membrane time constant of 10 ms can reproduce the highly irregular firing of cortical neurons reported by Softky and Koch (1993). In this article, the mechanisms by which partial reset affects the firing pattern are investigated. It is shown theoretically that partial reset is equivalent to the use of a time-dependent threshold, similar to a technique proposed by Wilbur and Rinzel (1983) to produce high irregularity. This equivalent model makes it possible to establish that temporal integration and fluctuation detection can coexist and cooperate to cause highly irregular firing. This study also reveals that reverse correlation curves cannot be used reliably to assess the causes of firing. For instance, they do not reveal temporal integration when it takes place. Further, the peak near time zero does not always indicate coincidence detection. An alternative qualitative method is proposed here for that latter purpose. Finally, it is noted that as the reset becomes weaker, the firing pattern shows a progressive transition from regular firing to random, then temporally clustered, and eventually bursting firing. Concurrently, the slope of the transfer function increases. Thus, simulations suggest a correlation between high gain and highly irregular firing.
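The partial-reset mechanism described in this abstract can be illustrated with a short simulation: after each spike the membrane potential is reset to a fraction beta of the threshold rather than to rest, so a weaker reset (beta closer to 1) leaves the neuron near threshold and makes firing more fluctuation-driven and irregular. The sketch below is not the authors' model; it is a minimal discrete-time leaky integrator with Poisson-like input, and all names and parameter values (tau_m, theta, rate_in, w_in, beta) are illustrative assumptions.

```python
import numpy as np

def simulate_lif_partial_reset(beta=0.9, tau_m=10.0, theta=1.0, dt=0.1,
                               t_max=2000.0, rate_in=5.0, w_in=0.025, seed=0):
    """Leaky integrator with partial reset: after a spike, V -> beta * theta
    instead of the resting potential.  beta = 0 is a full reset; beta near 1
    is a weak reset.  (Illustrative parameters, not those of the paper.)"""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    v = 0.0
    spike_times = []
    for step in range(n_steps):
        n_in = rng.poisson(rate_in * dt)          # random (Poisson) input arrivals
        v += dt * (-v / tau_m) + w_in * n_in      # leaky integration (Euler step)
        if v >= theta:
            spike_times.append(step * dt)
            v = beta * theta                      # partial reset instead of v = 0
    return np.array(spike_times)

def cv_isi(spike_times):
    """Coefficient of variation of interspike intervals (irregularity measure)."""
    isi = np.diff(spike_times)
    return isi.std() / isi.mean() if len(isi) > 1 else float("nan")

for beta in (0.0, 0.5, 0.9):
    spikes = simulate_lif_partial_reset(beta=beta)
    print(f"beta={beta:.1f}: {len(spikes)} spikes, CV(ISI)={cv_isi(spikes):.2f}")
```

Running the loop over several values of beta should show the interspike-interval CV rising as the reset weakens, consistent with the transition from regular to irregular firing described above.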
Journal Articles
Publisher: Journals Gateway
Neural Computation (1995) 7 (2): 284–289.
Published: 01 March 1995
Abstract
We present a geometric interpretation of ordering in self-organizing feature maps. This view provides simpler proofs of the Kohonen ordering theorem and of convergence to an ordered state in the one-dimensional case. At the same time, it explains intuitively the origin of the problems in higher-dimensional cases. Furthermore, it provides a geometric view of the known characteristics of learning in self-organizing nets.
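The ordering result this abstract refers to concerns the one-dimensional Kohonen map, where scalar weights on a chain of units become monotone in the unit index under the usual winner-plus-neighborhood update. The sketch below is a generic SOM training loop, not the paper's geometric argument; the network size, learning rate, and neighborhood width are illustrative assumptions.

```python
import numpy as np

def train_1d_som(n_units=20, n_steps=5000, lr=0.1, sigma=2.0, seed=0):
    """One-dimensional Kohonen map: scalar weights start disordered and
    typically become monotonically ordered along the input interval.
    (Standard SOM update rule with illustrative parameters.)"""
    rng = np.random.default_rng(seed)
    w = rng.uniform(0.0, 1.0, n_units)         # initially disordered weights
    for _ in range(n_steps):
        x = rng.uniform(0.0, 1.0)              # input drawn uniformly from [0, 1]
        winner = np.argmin(np.abs(w - x))      # best-matching unit
        dist = np.arange(n_units) - winner     # lattice distance to the winner
        h = np.exp(-dist**2 / (2 * sigma**2))  # Gaussian neighborhood function
        w += lr * h * (x - w)                  # move winner and neighbors toward x
    return w

w = train_1d_som()
# An ordered state means the weights are monotone in the unit index.
is_ordered = np.all(np.diff(w) > 0) or np.all(np.diff(w) < 0)
print("weights:", np.round(w, 2))
print("ordered:", is_ordered)
```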