David Zipser
Neural Computation (1998) 10 (2): 353–371.
Published: 15 February 1998
Abstract
The relative contributions of feedforward and recurrent connectivity to the direction-selective responses of cells in layer IVB of primary visual cortex are currently the subject of debate in the neuroscience community. Recently, biophysically detailed simulations have shown that realistic direction-selective responses can be achieved via recurrent cortical interactions between cells with nondirection-selective feedforward input (Suarez et al., 1995; Maex & Orban, 1996). Unfortunately, these models, while desirable for detailed comparison with biology, are complex and thus difficult to analyze mathematically. In this article, a relatively simple cortical dynamical model is used to analyze the emergence of direction-selective responses via recurrent interactions. A comparison between a model based on our analysis and physiological data is presented. The approach also allows analysis of the recurrently propagated signal, revealing the predictive nature of the implementation.
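As a rough illustration of the kind of mechanism at issue, the toy rate model below (not the model analyzed in the article; the ring geometry, kernel shape, and every parameter value are assumptions made for this sketch) shows how a spatially asymmetric recurrent kernel can produce direction-selective responses even though the feedforward drive is identical for both directions of motion.

    import numpy as np

    # Illustrative rate model: a ring of units receives a moving bump of
    # feedforward input whose amplitude is the same for both motion directions.
    # A recurrent kernel whose excitation is shifted by `offset` units amplifies
    # motion in the +x direction more than in the -x direction.
    N, T, dt, tau = 100, 400, 1.0, 10.0       # units, time steps, step, time constant
    x = np.arange(N)

    offset, sig_w = 3.0, 4.0
    diff = (x[:, None] - x[None, :] - offset + N / 2) % N - N / 2   # circular offset distance
    W = 0.9 * np.exp(-diff ** 2 / (2 * sig_w ** 2)) / (np.sqrt(2 * np.pi) * sig_w)
    W -= 0.002                                # weak uniform inhibition

    def total_response(direction):
        """Integrate tau*dr/dt = -r + [h + W r]_+ for a bump moving with +1 or -1."""
        r = np.zeros(N)
        total = 0.0
        for t in range(T):
            pos = (N / 2 + direction * 0.2 * t) % N
            d = (x - pos + N / 2) % N - N / 2
            h = np.exp(-d ** 2 / 8.0)         # feedforward drive, same for either direction
            r += dt / tau * (-r + np.maximum(h + W @ r, 0.0))
            total += r.sum()
        return total

    pref, null = total_response(+1), total_response(-1)
    print("direction index:", (pref - null) / (pref + null))   # > 0: direction selective

In this toy version the recurrent excitation arrives slightly ahead of the moving stimulus in the preferred direction, so the recurrently propagated signal anticipates the feedforward input; that is the intuitive sense in which such an implementation can be called predictive.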
Neural Computation (1991) 3 (2): 179–193.
Published: 01 June 1991
Abstract
Two decades of single-unit recording in monkeys performing short-term memory tasks has established that information can be stored as sustained neural activity. The mechanism of this information storage is unknown. The learning-based model described here demonstrates that a mechanism using only the dynamic activity in recurrent networks is sufficient to account for the observed phenomena. The temporal activity patterns of neurons in the model match those of real memory-associated neurons, while the model's gating properties and attractor dynamics provide explanations for puzzling aspects of the experimental data.
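As a deliberately reduced illustration of storage in dynamic activity (hand-wired rather than learned, so it is not the model described in the article, and all weights below are assumptions), a single unit with strong self-excitation is bistable: a brief cue switches it into a self-sustaining high-activity state, and a gating input controls when external input can reach it.

    import numpy as np

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    w_self, w_in, bias = 10.0, 6.0, -5.0    # illustrative, hand-chosen weights
    r = 0.0                                  # firing rate of the memory unit

    def step(r, inp, load):
        """One discrete-time update; external input reaches the unit only while load = 1."""
        return sigmoid(w_self * r + load * w_in * inp + bias)

    trace = []
    for t in range(80):
        if 10 <= t < 15:          # sample period: store a "1"
            inp, load = 1.0, 1.0
        elif 40 <= t < 45:        # distractor period: gate closed, so it is ignored
            inp, load = -1.0, 0.0
        else:
            inp, load = 0.0, 0.0
        r = step(r, inp, load)
        trace.append(r)

    # Activity stays near 1 throughout the delay, long after the cue is gone.
    print("delay activity after the distractor (t = 60):", round(trace[60], 3))

Because the gate is closed outside the sample period, the later distractor leaves the stored activity untouched; this is a minimal caricature of the kind of gating property the abstract mentions.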
Neural Computation (1989) 1 (4): 552–558.
Published: 01 December 1989
Abstract
An algorithm, called RTRL, for training fully recurrent neural networks has recently been studied by Williams and Zipser (1989a, b). While RTRL has been shown to have great power and generality, it has the disadvantage of requiring a great deal of computation time. A technique is described here for reducing the amount of computation required by RTRL without changing the connectivity of the networks. This is accomplished by dividing the original network into subnets for the purpose of error propagation while leaving it undivided for activity propagation. An example is given of a 12-unit network that learns to be the finite-state part of a Turing machine and, using the subgrouping strategy, runs 10 times faster than with the original algorithm.
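A rough sketch of the subgrouping idea, written against the standard discrete-time RTRL update of Williams and Zipser (1989): activity propagates through the full recurrent weight matrix, but each subgroup keeps sensitivity terms only for its own units and its own incoming weights, so error information never crosses subgroup boundaries. The network size, toy teacher signal, and learning parameters are illustrative assumptions, not the Turing-machine task from the article.

    import numpy as np

    rng = np.random.default_rng(0)
    n, n_in, g = 12, 2, 3                     # units, external inputs, subgroups
    m = n // g                                # units per subgroup
    W = rng.normal(0.0, 0.1, (n, n + n_in))   # every unit sees all units and all inputs
    groups = [np.arange(k * m, (k + 1) * m) for k in range(g)]

    f = np.tanh
    df = lambda y: 1.0 - y ** 2               # tanh' expressed via the unit's output

    y = np.zeros(n)
    # One sensitivity tensor per subgroup: P[k, i, j] ~ d y_k / d W[i, j],
    # with k and i restricted to the subgroup and j ranging over all sources.
    P = [np.zeros((m, m, n + n_in)) for _ in range(g)]
    lr = 0.1

    for t in range(200):
        x = rng.uniform(-1.0, 1.0, n_in)
        z = np.concatenate([y, x])            # activity propagation uses the whole network
        y_new = f(W @ z)
        target = np.zeros(n)
        target[0] = np.sin(0.1 * t)           # toy teacher signal on one unit
        e = target - y_new

        for idx, Pg in zip(groups, P):
            Wg = W[np.ix_(idx, idx)]          # recurrent weights inside the subgroup only
            Pg_new = np.einsum('kl,lij->kij', Wg, Pg)       # RTRL recursion, restricted
            Pg_new[np.arange(m), np.arange(m), :] += z      # injection term: delta_ki * z_j
            Pg_new *= df(y_new[idx])[:, None, None]
            # Weights into this subgroup learn from this subgroup's errors only.
            W[idx] += lr * np.einsum('k,kij->ij', e[idx], Pg_new)
            Pg[...] = Pg_new
        y = y_new

With g equal subgroups, the per-step cost of the sensitivity update falls roughly from O(n^4) to O(n^4 / g^2), which is the kind of saving behind the reported speedup; the price is that the computed gradient is an approximation to the full RTRL gradient.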
Neural Computation (1989) 1 (2): 270–280.
Published: 01 June 1989
Abstract
The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks. These algorithms have (1) the advantage that they do not require a precisely defined training interval, operating while the network runs; and (2) the disadvantage that they require nonlocal communication in the network being trained and are computationally expensive. These algorithms allow networks having recurrent connections to learn complex tasks that require the retention of information over time periods having either fixed or indefinite length.
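For reference, one common discrete-time form of the resulting real-time recurrent learning (RTRL) update can be written as below; the notation is assumed here rather than quoted from the article. Unit outputs and external inputs are pooled into z_l(t), so z_l(t) is either a unit output y_l(t) or an input x_l(t).

    y_k(t+1) = f_k\Big(\sum_{l} w_{kl}\, z_l(t)\Big)

    p^{k}_{ij}(t+1) = f_k'\Big(\sum_{l} w_{kl}\, z_l(t)\Big)\Big[\sum_{l \in \text{units}} w_{kl}\, p^{l}_{ij}(t) + \delta_{ik}\, z_j(t)\Big], \qquad p^{k}_{ij}(0) = 0

    \Delta w_{ij}(t) = \alpha \sum_{k} e_k(t)\, p^{k}_{ij}(t)

The sensitivities p^{k}_{ij}, one per weight and per unit, are what let the algorithm learn while the network runs, with no fixed training interval, but they are also the source of the nonlocality and cost noted above: maintaining them takes O(n^3) storage and O(n^4) arithmetic per time step for n units.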