Eduardo D. Sontag
Neural Computation (2000) 12 (8): 1743–1772.
Published: 01 August 2000
Abstract
Experimental data show that biological synapses behave quite differently from the symbolic synapses in all common artificial neural network models. Biological synapses are dynamic; their "weight" changes on a short timescale by several hundred percent, depending on the past input to the synapse. In this article we address the question of how these inherent synaptic dynamics (which should not be confused with long-term learning) affect the computational power of a neural network. In particular, we analyze computations on temporal and spatiotemporal patterns, and we give a complete mathematical characterization of all filters that can be approximated by feedforward neural networks with dynamic synapses. It turns out that even with just a single hidden layer, such networks can approximate a very rich class of nonlinear filters: all filters that can be characterized by Volterra series. This result is robust with regard to various changes in the model for synaptic dynamics. Our characterization result provides, for all nonlinear filters approximable by Volterra series, a new complexity hierarchy related to the cost of implementing such filters in neural systems.
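The kind of synaptic dynamics the abstract refers to can be illustrated with a short-term plasticity model in the style of Tsodyks and Markram, where the efficacy of each spike depends on the recent spike history. The Python sketch below is illustrative only; the update ordering, parameter names, and time constants are assumptions, not the exact model analyzed in the article:

    import numpy as np

    def dynamic_synapse(spike_times, U=0.2, tau_fac=0.5, tau_rec=0.8):
        # Effective weight delivered by each incoming spike under a
        # Tsodyks-Markram-style facilitation/depression model (all
        # constants illustrative, times in seconds).
        # u: facilitation variable, r: fraction of available resources.
        u, r, last_t, weights = U, 1.0, None, []
        for t in spike_times:
            if last_t is not None:
                dt = t - last_t
                u = U + (u - U) * np.exp(-dt / tau_fac)      # facilitation decays toward U
                r = 1.0 + (r - 1.0) * np.exp(-dt / tau_rec)  # resources recover toward 1
            u = u + U * (1.0 - u)   # facilitation jump on spike arrival
            w = u * r               # effective "weight" of this spike
            r = r - u * r           # depression: resources consumed by release
            weights.append(w)
            last_t = t
        return weights

    # A rapid burst followed by an isolated late spike: the per-spike
    # efficacy varies by a large factor purely as a function of input history.
    print(dynamic_synapse([0.00, 0.01, 0.02, 0.03, 1.0]))

The point of the sketch is that the per-spike efficacy w is itself a filter of the past input, which is what the characterization in terms of Volterra series makes precise.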
Neural Computation (1999) 11 (3): 771–782.
Published: 01 April 1999
Abstract
We consider recurrent analog neural nets where the output of each gate is subject to gaussian noise or any other common noise distribution that is nonzero on a sufficiently large part of the state space. We show that many regular languages cannot be recognized by networks of this type, and we give a precise characterization of the languages that can be recognized. This result implies severe constraints on the possibilities for constructing recurrent analog neural nets that are robust against realistic types of analog noise. On the other hand, we present a method for constructing feedforward analog neural nets that are robust with regard to analog noise of this type.
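A minimal sketch of the setting described above, assuming a tanh gate and additive gaussian noise on each gate output (both are illustrative choices; the result concerns any noise distribution with support on a large enough region):

    import numpy as np

    def noisy_rnn_run(inputs, W, V, b, sigma=0.1, rng=None):
        # Recurrent analog net: every gate output is perturbed by
        # gaussian noise at every step, so the state is never exact.
        rng = rng or np.random.default_rng(0)
        x = np.zeros(W.shape[0])
        for u in inputs:
            x = np.tanh(W @ x + V @ u + b)
            x = x + rng.normal(0.0, sigma, size=x.shape)  # analog noise on each gate
        return x

    rng = np.random.default_rng(1)
    W = rng.normal(size=(4, 4)); V = rng.normal(size=(4, 2)); b = rng.normal(size=4)
    inputs = [rng.normal(size=2) for _ in range(50)]
    # Two runs on the same input string end in different states; a recognizer
    # must tolerate this overlap of state distributions, which is the intuition
    # behind the constraints on recognizable languages.
    print(noisy_rnn_run(inputs, W, V, b))
    print(noisy_rnn_run(inputs, W, V, b, rng=np.random.default_rng(2)))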
Neural Computation (1997) 9 (2): 337–348.
Published: 15 February 1997
Abstract
For classes of concepts defined by certain classes of analytic functions depending on n parameters, there are nonempty open sets of samples of length 2n + 2 that cannot be shattered. A slightly weaker result is also proved for piecewise-analytic functions. The special case of neural networks is discussed.
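The shattering notion used here can be probed numerically. The brute-force sketch below is a crude randomized search, not a proof procedure; the function names, the search distribution, and the one-parameter threshold class in the demo are all assumptions for illustration:

    import itertools
    import numpy as np

    def can_shatter(points, predict, n_params, trials=5000, rng=None):
        # For each of the 2^m dichotomies of `points`, randomly search for
        # a parameter vector realizing it. A failed search only suggests
        # (does not prove) that the dichotomy is unrealizable.
        rng = rng or np.random.default_rng(0)
        for labels in itertools.product([False, True], repeat=len(points)):
            if not any(
                all(predict(theta, x) == y for x, y in zip(points, labels))
                for theta in rng.normal(scale=5.0, size=(trials, n_params))
            ):
                return False
        return True

    # A class with n = 1 parameter: f_w(x) = [w - x > 0], which labels a point
    # positive iff it lies left of the threshold w. Consistent with the theorem,
    # a sample of length 2n + 2 = 4 distinct points cannot be shattered: no
    # alternating dichotomy is realizable.
    predict = lambda theta, x: theta[0] - x > 0
    print(can_shatter([0.0, 1.0, 2.0, 3.0], predict, n_params=1))  # False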
Neural Computation (1989) 1 (4): 470–472.
Published: 01 December 1989
Abstract
Every dichotomy on a 2k-point set in ℝ^N can be implemented by a neural net with a single hidden layer containing k sigmoidal neurons. If the neurons were of a hardlimiter (Heaviside) type, 2k − 1 would in general be needed.
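The capacity claim can be checked empirically. The sketch below trains a single-hidden-layer net with k sigmoidal neurons on an arbitrary dichotomy of 2k random points by gradient descent; it is a numerical illustration under assumed hyperparameters, not the constructive argument of the note:

    import numpy as np

    def fit_dichotomy(X, y, k, steps=20000, lr=0.5, rng=None):
        # Gradient descent on cross-entropy for a net with one hidden
        # layer of k sigmoidal neurons; returns training accuracy.
        rng = rng or np.random.default_rng(1)
        n = X.shape[1]
        W, b = rng.normal(size=(n, k)), rng.normal(size=k)
        v, c = rng.normal(size=k), 0.0
        sig = lambda z: 1.0 / (1.0 + np.exp(-z))
        for _ in range(steps):
            H = sig(X @ W + b)            # hidden activations
            p = sig(H @ v + c)            # output probability
            g = (p - y) / len(y)          # d(loss)/d(output logit)
            Gh = np.outer(g, v) * H * (1.0 - H)  # backprop through hidden layer
            v, c = v - lr * (H.T @ g), c - lr * g.sum()
            W, b = W - lr * (X.T @ Gh), b - lr * Gh.sum(axis=0)
        H = sig(X @ W + b)
        return float(((sig(H @ v + c) > 0.5) == y).mean())

    k = 3
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2 * k, 2))       # 2k points in R^2
    y = rng.integers(0, 2, size=2 * k)    # an arbitrary dichotomy
    print(fit_dichotomy(X, y, k))         # typically reaches accuracy 1.0

Gradient descent is used here only as a convenient way to exhibit a realizing net; the theorem itself guarantees existence for every dichotomy, independent of any training procedure.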