John Moody
1–3 of 3 results
Journal Articles
Publisher: Journals Gateway
Neural Computation (1996) 8 (3): 461–489.
Published: 01 April 1996
Abstract
We derive a smoothing regularizer for dynamic network models by requiring robustness in prediction performance to perturbations of the training data. The regularizer can be viewed as a generalization of the first-order Tikhonov stabilizer to dynamic models. For two-layer networks with recurrent connections described by
\[
\hat{Y}(t) = f\bigl[U\hat{Y}(t-\tau) + VX(t)\bigr], \qquad \hat{Z}(t) = W\hat{Y}(t),
\]
the training criterion with the regularizer is
\[
D = \frac{1}{N}\sum_{t=1}^{N}\bigl\|Z(t) - \hat{Z}\bigl(t \mid \Phi, I(t)\bigr)\bigr\|^{2} + \lambda\,\rho_{\tau}^{2}(\Phi),
\]
where Φ = {U, V, W} is the network parameter set, Z(t) are the targets, I(t) = {X(s), s = 1, 2, …, t} represents the current and all historical input information, N is the size of the training data set, ρ_τ(Φ) is the regularizer, and λ is a regularization parameter. The closed-form expression for the regularizer for time-lagged recurrent networks is
\[
\rho_{\tau}(\Phi) = \frac{\gamma\,\|W\|\,\|V\|}{1 - \gamma\,\|U\|},
\]
where ‖·‖ is the Euclidean matrix norm and γ is a factor that depends upon the maximal value of the first derivatives of the internal unit activations f(·). Simplifications of the regularizer are obtained for simultaneous recurrent nets (τ → 0), two-layer feedforward nets, and one-layer linear nets. We have successfully tested this regularizer in a number of case studies and found that it performs better than standard quadratic weight decay.
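The criterion and closed-form regularizer above map directly onto a few lines of code. The NumPy sketch below is illustrative only: it assumes tanh internal units (so γ, the maximal slope of f, is taken as 1), uses the Frobenius norm for ‖·‖, treats U, V, and W as the recurrent, input, and output weight matrices, and runs on toy random data; none of the names or settings come from the paper.

```python
import numpy as np

def smoothing_regularizer(U, V, W, gamma=1.0):
    """Closed-form regularizer rho_tau(Phi) = gamma*||W||*||V|| / (1 - gamma*||U||).

    Roles assumed: U recurrent weights, V input weights, W output weights.
    Uses the Frobenius norm; the expression is finite only when gamma*||U|| < 1.
    """
    nU, nV, nW = (np.linalg.norm(M) for M in (U, V, W))
    assert gamma * nU < 1.0, "requires gamma * ||U|| < 1"
    return gamma * nW * nV / (1.0 - gamma * nU)

def regularized_criterion(U, V, W, X, Z, lam=0.01, gamma=1.0):
    """D = (1/N) sum_t ||Z(t) - Zhat(t)||^2 + lam * rho_tau(Phi)^2 for the assumed
    dynamics Yhat(t) = tanh(U Yhat(t-1) + V X(t)), Zhat(t) = W Yhat(t)."""
    N = len(X)
    y = np.zeros(U.shape[0])
    err = 0.0
    for t in range(N):
        y = np.tanh(U @ y + V @ X[t])   # hidden-state update
        z_hat = W @ y                   # network output
        err += np.sum((Z[t] - z_hat) ** 2)
    rho = smoothing_regularizer(U, V, W, gamma)
    return err / N + lam * rho ** 2

# Toy usage with random data; weights scaled so that gamma*||U|| < 1.
rng = np.random.default_rng(0)
U = 0.1 * rng.standard_normal((5, 5))
V = rng.standard_normal((5, 3))
W = rng.standard_normal((2, 5))
X = rng.standard_normal((100, 3))
Z = rng.standard_normal((100, 2))
print(regularized_criterion(U, V, W, X, Z))
```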
Journal Articles
Publisher: Journals Gateway
Neural Computation (1990) 2 (3): 334–354.
Published: 01 September 1990
Abstract
The existence of modular structures in the organization of nervous systems (e.g., cortical columns, patches of neostriatum, and olfactory glomeruli) is well known. However, the detailed dynamic mechanisms by which such structures develop remain a mystery. We propose a mechanism for the formation of modular structures that utilizes a combination of intrinsic network dynamics and Hebbian learning. Specifically, we show that under certain conditions, layered networks can support spontaneous localized activity patterns, which we call collective excitations, even in the absence of localized or spatially correlated afferent stimulation. These collective excitations can then induce the formation of modular structures in both the afferent and lateral connections via a Hebbian learning mechanism. The networks we consider are spatially homogeneous before learning, but the spontaneous emergence of localized collective excitations and the consequent development of modules in the connection patterns break translational symmetry. The essential conditions required to support collective excitations include internal units with sufficiently high gains and certain patterns of lateral connectivity. Our proposed mechanism is likely to play a role in understanding more complex (and more biologically realistic) systems.
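As a purely schematic illustration of the mechanism described above (not the authors' model), the Python sketch below imprints randomly placed localized activity bumps onto an initially near-homogeneous afferent weight matrix using a normalized Hebbian rule; the Gaussian bump shape, ring topology, learning rate, and all other specifics are assumptions made for the sketch.

```python
import numpy as np

# Schematic only: Hebbian imprinting of randomly placed localized activity bumps
# ("collective excitations") onto initially near-homogeneous afferent weights.
rng = np.random.default_rng(1)
n_units, n_inputs, width, eta = 50, 40, 3.0, 0.05

W = 0.01 * rng.standard_normal((n_units, n_inputs))    # afferent weights, no spatial structure
pos = np.arange(n_units)

for step in range(2000):
    x = rng.standard_normal(n_inputs)                   # spatially uncorrelated afferent input
    center = rng.integers(n_units)                      # where a spontaneous excitation appears
    d = np.minimum(np.abs(pos - center), n_units - np.abs(pos - center))
    y = np.exp(-d**2 / (2 * width**2))                  # localized activity bump on a ring of units
    W += eta * np.outer(y, x)                           # Hebbian rule: co-active pre/post strengthen
    W /= np.linalg.norm(W, axis=1, keepdims=True)       # normalization keeps weights bounded

# Nearby units were co-active within the same bumps, so their afferent weight vectors
# become correlated while those of distant units do not: translational symmetry is broken.
corr = np.corrcoef(W)
print("mean correlation, adjacent units:", np.mean([corr[i, (i + 1) % n_units] for i in range(n_units)]))
print("mean correlation, opposite units:", np.mean([corr[i, (i + 25) % n_units] for i in range(n_units)]))
```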
Journal Articles
Publisher: Journals Gateway
Neural Computation (1989) 1 (2): 281–294.
Published: 01 June 1989
Abstract
We propose a network architecture which uses a single internal layer of locally-tuned processing units to learn both classification tasks and real-valued function approximations (Moody and Darken 1988). We consider training such networks in a completely supervised manner, but abandon this approach in favor of a more computationally efficient hybrid learning method which combines self-organized and supervised learning. Our networks learn faster than backpropagation for two reasons: the local representations ensure that only a few units respond to any given input, thus reducing computational overhead, and the hybrid learning rules are linear rather than nonlinear, thus leading to faster convergence. Unlike many existing methods for data analysis, our network architecture and learning rules are truly adaptive and are thus appropriate for real-time use.
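A minimal sketch of the hybrid learning scheme the abstract describes, assuming Gaussian locally tuned units, k-means for the self-organized stage, a nearest-neighbor heuristic for the unit widths, and linear least squares for the supervised output stage; the function name, width heuristic, and toy data are illustrative rather than taken from Moody and Darken.

```python
import numpy as np

def fit_rbf(X, y, n_centers=10, n_kmeans_iters=20, seed=0):
    """Hybrid training of a network of locally tuned units:
    1) self-organized stage: place unit centers with k-means (unsupervised);
    2) supervised stage: solve the linear output weights by least squares."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    for _ in range(n_kmeans_iters):                       # k-means for center placement
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_centers):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    d = np.sqrt(((centers[:, None, :] - centers[None]) ** 2).sum(-1))
    widths = np.partition(d, 1, axis=1)[:, 1] + 1e-8      # nearest-neighbor heuristic for widths

    def design(Xq):                                       # locally tuned (Gaussian) unit responses
        sq = ((Xq[:, None, :] - centers[None]) ** 2).sum(-1)
        return np.exp(-sq / (2 * widths**2))

    H = design(X)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)             # linear supervised stage
    return lambda Xq: design(Xq) @ w

# Toy usage: approximate a 1-d real-valued function.
X = np.linspace(0, 1, 200)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
model = fit_rbf(X, y, n_centers=12)
print(np.mean((model(X) - y) ** 2))                       # training mean-squared error
```

Because only the nearby units respond appreciably to a given input and the output stage is linear in the unit responses, both properties the abstract credits for fast learning, the supervised step reduces to an ordinary least-squares problem.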