Soo-Young Lee
Journal Articles
Publisher: Journals Gateway
Neural Computation (2010) 22 (6): 1615–1645.
Published: 01 June 2010
Abstract
This letter looks at the physics behind sensory data by identifying the parameters that govern the physical system and estimating them from sensory observations. We extend Takens's delay-embedding theorem to dynamical systems controlled by parameters. An embedding of the product space of the phase and parameter spaces of the dynamical system can be obtained by the delay-embedding map, provided that the parameter of the dynamical system changes slowly. The reconstruction error is bounded for slowly varying parameters. A manifold learning technique is applied to the resulting embedding to extract a low-dimensional global coordinate system representing the product space. The phase space of the deterministic dynamics can be contracted by using the adjacency relationship in time, which enables recovery of the parameter space alone. As examples, the manifolds of synthetic and real-world vowels with time-varying fundamental frequency (F0) are analyzed, and the F0 contours are extracted by an unsupervised algorithm. Experimental results show that the proposed method achieves robust performance under various noise conditions and rapid changes of F0, compared with current state-of-the-art F0 estimation algorithms.
Includes: Supplementary data
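To make the embedding step concrete, here is a minimal sketch of a delay-embedding map applied to a toy signal whose fundamental frequency drifts slowly, loosely analogous to the vowel experiments described in the abstract. The function name delay_embed and all parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def delay_embed(x, dim, tau):
    # Map a scalar series x[t] to delay vectors
    # [x[t], x[t + tau], ..., x[t + (dim - 1) * tau]].
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

# Toy signal: a sinusoid whose fundamental frequency (the hidden
# slowly varying "parameter") drifts over time.
t = np.linspace(0.0, 2.0, 20000)
dt = t[1] - t[0]
f0 = 100.0 + 20.0 * t                    # slowly drifting F0 (Hz)
phase = 2.0 * np.pi * np.cumsum(f0) * dt # integrate frequency to get phase
x = np.sin(phase)

Y = delay_embed(x, dim=8, tau=5)         # points on the embedded manifold
```

In the letter's setting, a manifold learning method would then be applied to the delay vectors Y to recover a global low-dimensional coordinate system, and the time-adjacency contraction would collapse the phase direction, leaving only the parameter (F0) direction.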
Journal Articles
Publisher: Journals Gateway
Neural Computation (1991) 3 (1): 135–143.
Published: 01 March 1991
Abstract
TAG (Training by Adaptive Gain) is a new adaptive learning algorithm developed for optical implementation of large-scale artificial neural networks. For fully interconnected single-layer neural networks with N input and M output neurons, TAG contains two different types of interconnections: MN global fixed interconnections and N + M adaptive gain controls. For two-dimensional input patterns, the former may be realized by multifacet holograms and the latter by spatial light modulators (SLMs). For the same number of input and output neurons, TAG requires far fewer adaptive elements and offers a route to large-scale optical implementation, at some sacrifice in performance compared with the perceptron. The training algorithm is based on gradient descent and error backpropagation and is easily extensible to multilayer architectures. Computer simulations demonstrate that TAG performs reasonably well compared with the perceptron. An electrooptical implementation of TAG is also proposed.
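As a rough illustration of the adaptive-gain idea, the sketch below trains only N input gains and M output gains by gradient descent around a fixed M-by-N interconnection matrix, following the abstract's description. The exact placement of the gains and the tanh nonlinearity are assumptions made for illustration, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 16, 4                          # input / output neurons

W = rng.standard_normal((M, N))       # MN fixed interconnections (hologram)
g_in = np.ones(N)                     # N adaptive input gains (SLM)
g_out = np.ones(M)                    # M adaptive output gains (SLM)

def forward(x):
    s = W @ (g_in * x)                # gains scale each input line
    return np.tanh(g_out * s), s      # gains scale each output line

def train_step(x, target, lr=0.01):
    # One gradient-descent step on squared error, updating only the
    # N + M gains; the MN interconnections in W stay fixed.
    global g_in, g_out
    y, s = forward(x)
    d = (y - target) * (1.0 - y**2)   # error backpropagated through tanh
    g_in -= lr * (W.T @ (d * g_out)) * x
    g_out -= lr * d * s
```

With only N + M trainable elements instead of MN, such a layer trades some representational power for hardware simplicity, matching the abstract's point about SLM-based gain control versus fully adaptive interconnections.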