1-2 of 2 results for author: Sara A. Solla
Journal Articles
Publisher: Journals Gateway
Neural Computation (2006) 18 (2): 329–355.
Published: 01 February 2006
Abstract
A robust identification algorithm has been developed for linear, time-invariant, multiple-input single-output systems, with an emphasis on how this algorithm can be used to estimate the dynamic relationship between a set of neural recordings and related physiological signals. The identification algorithm provides a decomposition of the system output such that each component is uniquely attributable to a specific input signal, and then reduces the complexity of the estimation problem by discarding those input signals that are deemed to be insignificant. Numerical difficulties due to limited input bandwidth and correlations among the inputs are addressed using a robust estimation technique based on singular value decomposition. The algorithm has been evaluated on both simulated and experimental data. The latter involved estimating the relationship between up to 40 simultaneously recorded motor cortical signals and peripheral electromyograms (EMGs) from four upper limb muscles in a freely moving primate. The algorithm performed well in both cases: it provided reliable estimates of the system output and significantly reduced the number of inputs needed for output prediction. For example, although physiological recordings from up to 40 different neuronal signals were available, the input selection algorithm reduced this to 10 neuronal signals that made significant contributions to the recorded EMGs.
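The SVD-based robust estimation step described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the number of inputs, kernel lengths, noise level, and truncation threshold are all illustrative assumptions, and the per-input output decomposition simply splits the predicted output by regressor columns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated MISO system: 3 inputs, FIR kernels of length 5 (assumed sizes).
n_inputs, n_lags, n_samples = 3, 5, 500
X = rng.standard_normal((n_samples, n_inputs))
true_kernels = rng.standard_normal((n_inputs, n_lags))

# Lagged regressor matrix Phi: one column per (input, lag) pair.
Phi = np.zeros((n_samples, n_inputs * n_lags))
for i in range(n_inputs):
    for k in range(n_lags):
        Phi[k:, i * n_lags + k] = X[: n_samples - k, i]

# Noisy system output.
y = Phi @ true_kernels.ravel() + 0.1 * rng.standard_normal(n_samples)

# Robust least squares via truncated SVD: singular values below a
# threshold are discarded instead of being inverted, which tames
# ill-conditioning from correlated or band-limited inputs.
U, s, Vt = np.linalg.svd(Phi, full_matrices=False)
tol = s[0] * 1e-3  # illustrative truncation threshold
s_inv = np.where(s > tol, 1.0 / s, 0.0)
h_hat = Vt.T @ (s_inv * (U.T @ y))

# Decompose the predicted output into per-input components, so each
# component is attributable to one input signal.
components = [
    Phi[:, i * n_lags:(i + 1) * n_lags] @ h_hat[i * n_lags:(i + 1) * n_lags]
    for i in range(n_inputs)
]
y_hat = sum(components)
vaf = 1.0 - np.var(y - y_hat) / np.var(y)  # variance accounted for
```

Input selection in this spirit would rank the per-input components by their contribution to `vaf` and drop those below a significance criterion.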
Neural Computation (1990) 2 (3): 374–385.
Published: 01 September 1990
Abstract
Exhaustive exploration of an ensemble of networks is used to model learning and generalization in layered neural networks. A simple Boolean learning problem involving networks with binary weights is numerically solved to obtain the entropy S_m and the average generalization ability G_m as a function of the size m of the training set. Learning curves G_m vs. m are shown to depend solely on the distribution of generalization abilities over the ensemble of networks. This distribution is determined prior to learning, and provides a novel theoretical tool for predicting network performance on a specific task.
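The exhaustive-enumeration idea can be illustrated on a toy ensemble: all perceptrons with binary (plus/minus 1) weights on 5 Boolean inputs, learning to imitate a teacher network. This is a sketch under assumed simplifications (a single-layer ensemble, a teacher picked from the ensemble, averaging over random training sets), not the paper's actual construction. The generalization ability g of each network is fixed before any learning, and G_m and S_m follow from which networks remain consistent with m training examples.

```python
import itertools
import math

import numpy as np

rng = np.random.default_rng(0)
n = 5  # odd input dimension so that w.x is never zero (no ties)

# All 2^n Boolean inputs and all 2^n binary-weight perceptrons.
inputs = np.array(list(itertools.product([-1, 1], repeat=n)))
weights = np.array(list(itertools.product([-1, 1], repeat=n)))
outputs = np.sign(inputs @ weights.T)  # outputs[x, w]

# Teacher network; g[w] = generalization ability of network w,
# fixed prior to learning by the structure of the ensemble.
teacher = outputs[:, 0]
g = (outputs == teacher[:, None]).mean(axis=0)

def learning_curve(m, n_trials=200):
    """Average G_m and entropy S_m over random training sets of size m."""
    Gs, Ss = [], []
    for _ in range(n_trials):
        train = rng.choice(len(inputs), size=m, replace=False)
        # Networks consistent with the teacher on all m training examples.
        compatible = np.all(outputs[train] == teacher[train, None], axis=0)
        Ss.append(math.log(compatible.sum()))
        Gs.append(g[compatible].mean())
    return np.mean(Gs), np.mean(Ss)

G0, S0 = learning_curve(0)               # before learning: full ensemble
G_full, _ = learning_curve(len(inputs))  # trained on every input
```

Before learning (m = 0) the entropy is ln of the ensemble size and G_0 is just the mean of the g-distribution; as m grows, only high-g networks survive, so the whole learning curve is determined by that pre-learning distribution.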