Te-Won Lee — 1-5 of 5 results
Neural Computation (2010) 22 (6): 1646–1673.
Published: 01 June 2010
Abstract
Convolutive mixtures of signals, which are common in acoustic environments, can be difficult to separate into their component sources. Here we present a uniform probabilistic framework to separate convolutive mixtures of acoustic signals using independent vector analysis (IVA), which is based on a joint distribution for the frequency components originating from the same source and is capable of preventing permutation disorder. Different gaussian mixture models (GMM) served as source priors, in contrast to the original IVA model, where all sources were modeled by identical multivariate Laplacian distributions. This flexible source prior enabled the IVA model to separate different types of signals. Three classes of models were derived and tested: noiseless IVA, online IVA, and noisy IVA. In the IVA model without sensor noise, the unmixing matrices were efficiently estimated by the expectation maximization (EM) algorithm. An online EM algorithm was derived for the online IVA algorithm to track the movement of the sources and separate them under nonstationary conditions. The noisy IVA model included the sensor noise and combined denoising with separation. An EM algorithm was developed that found the model parameters and separated the sources simultaneously. These algorithms were applied to separate mixtures of speech and music. Performance, as measured by the signal-to-interference ratio (SIR), was substantial for all three models.
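The abstract describes an EM algorithm with GMM source priors; as a rough illustration of the underlying IVA idea only, the sketch below implements the simpler natural-gradient baseline with the multivariate Laplacian prior that the abstract cites as the original model. Array shapes, the step size, and the function name iva_natural_gradient are illustrative assumptions, not the paper's algorithm.

import numpy as np

def iva_natural_gradient(X, n_iter=200, lr=0.1):
    # X: complex STFT observations of shape (n_freq, n_chan, n_frames).
    n_freq, n_chan, n_frames = X.shape
    W = np.stack([np.eye(n_chan, dtype=complex) for _ in range(n_freq)])
    for _ in range(n_iter):
        Y = np.einsum('fij,fjt->fit', W, X)                   # current source estimates
        # The joint norm over frequency bins couples them per source,
        # which is what prevents the permutation disorder.
        r = np.sqrt(np.sum(np.abs(Y) ** 2, axis=0)) + 1e-12   # (n_chan, n_frames)
        Phi = Y / r[None, :, :]                                # multivariate Laplacian score
        for f in range(n_freq):
            G = np.eye(n_chan) - Phi[f] @ Y[f].conj().T / n_frames
            W[f] = W[f] + lr * G @ W[f]                        # natural-gradient step
    return W

Separated sources would then be W[f] @ X[f] per frequency bin, followed by an inverse STFT; the scaling ambiguity still has to be resolved separately, for example by a minimal-distortion projection.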
Neural Computation (2003) 15 (8): 1991–2011.
Published: 01 August 2003
Abstract
Missing data are common in real-world data sets and are a problem for many estimation techniques. We have developed a variational Bayesian method to perform independent component analysis (ICA) on high-dimensional data containing missing entries. Missing data are handled naturally in the Bayesian framework by integrating the generative density model. Modeling the distributions of the independent sources with mixtures of gaussians allows sources to be estimated with different kurtosis and skewness. Unlike the maximum likelihood approach, the variational Bayesian method automatically determines the dimensionality of the data and yields an accurate density model for the observed data without overfitting problems. The technique is also extended to clusters of ICA models and to a supervised classification framework.
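The full variational Bayesian machinery is beyond a short snippet; the sketch below only illustrates the missing-data idea the abstract builds on: under a gaussian model of the observations, missing entries are handled by conditioning on the observed ones rather than by deletion or ad hoc imputation. The function name and interface are hypothetical, and this is not the paper's algorithm.

import numpy as np

def conditional_gaussian(mu, Sigma, x, observed):
    # mu, Sigma: model mean and covariance of the full data vector.
    # x: one data vector; entries where observed is False are ignored.
    # observed: boolean mask, True where x was actually measured.
    m = ~observed
    Soo = Sigma[np.ix_(observed, observed)]
    Smo = Sigma[np.ix_(m, observed)]
    Smm = Sigma[np.ix_(m, m)]
    K = Smo @ np.linalg.inv(Soo)                  # regression of missing on observed
    mean_missing = mu[m] + K @ (x[observed] - mu[observed])
    cov_missing = Smm - K @ Smo.T                 # residual uncertainty, not a point estimate
    return mean_missing, cov_missing

Keeping the conditional covariance, rather than a single imputed value, is what allows the expectations needed by EM-style or variational updates to be taken over the missing entries.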
Neural Computation (2003) 15 (2): 397–417.
Published: 01 February 2003
Abstract
Neurons in the early stages of processing in the primate visual system efficiently encode natural scenes. In previous studies of the chromatic properties of natural images, the inputs were sampled on a regular array, with complete color information at every location. However, in the retina, cone photoreceptors with different spectral sensitivities are arranged in a mosaic. We used an unsupervised neural network model to analyze the statistical structure of retinal cone mosaic responses to calibrated color natural images. The second-order statistical dependencies derived from the covariance matrix of the sensory signals were removed in the first stage of processing. These decorrelating filters were similar to type I receptive fields in parvo- or konio-cellular LGN in both spatial and chromatic characteristics. In the subsequent stage, the decorrelated signals were linearly transformed to make the output as statistically independent as possible, using independent component analysis. The independent component filters showed luminance selectivity with simple-cell-like receptive fields, or had strong color selectivity with large, often double-opponent, receptive fields, both of which were found in the primary visual cortex (V1). These results show that the “form” and “color” channels of the early visual system can be derived from the statistics of sensory signals.
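As a rough sketch of the two-stage pipeline described above (decorrelation derived from the covariance matrix, then ICA on the decorrelated signals), the following uses scikit-learn's FastICA as a stand-in for the second stage; the actual network, cone-mosaic sampling, and calibrated image data of the paper are not reproduced here, and the function name is an assumption.

import numpy as np
from sklearn.decomposition import FastICA

def whiten_then_ica(responses, n_components=64, eps=1e-5):
    # responses: array of shape (n_samples, n_inputs), e.g. mosaic responses per image patch.
    X = responses - responses.mean(axis=0)
    C = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(C)
    zca = evecs @ np.diag(1.0 / np.sqrt(evals + eps)) @ evecs.T   # decorrelating (ZCA) filter
    Xw = X @ zca
    ica = FastICA(n_components=n_components, random_state=0)
    S = ica.fit_transform(Xw)            # outputs made as statistically independent as possible
    return zca, ica, S

In this toy mapping, the rows of zca play the role of the decorrelating (LGN-like) filters, and the ICA unmixing applied to the whitened data gives the second-stage, V1-like filters.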
Neural Computation (2003) 15 (2): 349–396.
Published: 01 February 2003
Abstract
Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial “25 words or less”), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an overcomplete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error).
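As a hedged sketch of the alternation the abstract describes (a FOCUSS-style reweighted least-squares step for sparse coding, then a dictionary update from those codes), the following simplifies the priors, regularization, and normalization; the names focuss and learn_dictionary are illustrative, not the paper's.

import numpy as np

def focuss(A, y, p=1.0, n_iter=20, eps=1e-9):
    # Sparse solution of y ~ A x via iteratively reweighted least squares.
    x = np.linalg.pinv(A) @ y
    for _ in range(n_iter):
        w = np.abs(x) ** (1.0 - p / 2.0) + eps     # reweighting favors few nonzero entries
        x = w * (np.linalg.pinv(A * w[None, :]) @ y)
    return x

def learn_dictionary(Y, n_atoms, n_outer=30, seed=0):
    # Y: data matrix of shape (n_dims, n_samples); n_atoms may exceed n_dims (overcomplete).
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((Y.shape[0], n_atoms))
    A /= np.linalg.norm(A, axis=0)
    for _ in range(n_outer):
        X = np.column_stack([focuss(A, Y[:, i]) for i in range(Y.shape[1])])
        A = Y @ np.linalg.pinv(X)                  # least-squares dictionary update
        A /= np.linalg.norm(A, axis=0) + 1e-12     # keep atoms on the unit sphere
    return A, X

The outer loop alternates between finding sparse codes for a batch of signals and refitting the dictionary to those codes, which is the basic structure of the environmentally adapted dictionary learning described above.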
Neural Computation (1999) 11 (2): 417–441.
Published: 15 February 1999
Abstract
An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able to blindly separate mixed signals with sub- and supergaussian source distributions. This was achieved by using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a projection pursuit index. Parameterized probability distributions that have sub- and supergaussian regimes were used to derive a general learning rule that preserves the simple architecture proposed by Bell and Sejnowski (1995), is optimized using the natural gradient of Amari (1998), and uses the stability analysis of Cardoso and Laheld (1996) to switch between sub- and supergaussian regimes. We demonstrate that the extended infomax algorithm is able to easily separate 20 sources with a variety of source distributions. Applied to high-dimensional data from electroencephalographic recordings, it is effective at separating artifacts such as eye blinks and line noise from weaker electrical signals that arise from sources in the brain.
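For readers who want the gist in code, here is a minimal batch sketch of the extended infomax natural-gradient rule with the sub/supergaussian switch; the sphering, block updates, and learning-rate schedule used in practice are omitted, and the function name is illustrative.

import numpy as np

def extended_infomax(X, n_iter=500, lr=1e-3):
    # X: zero-mean mixed signals of shape (n_sources, n_samples).
    n, T = X.shape
    W = np.eye(n)
    for _ in range(n_iter):
        U = W @ X                                   # current source estimates
        tU = np.tanh(U)
        # Stability-based switch per source: +1 selects the supergaussian regime,
        # -1 the subgaussian regime.
        k = np.sign(np.mean(1.0 - tU ** 2, axis=1) * np.mean(U ** 2, axis=1)
                    - np.mean(tU * U, axis=1))
        dW = (np.eye(n) - np.diag(k) @ (tU @ U.T) / T - (U @ U.T) / T) @ W
        W = W + lr * dW                             # natural-gradient update
    return W

The diagonal sign matrix is what distinguishes this rule from the original infomax update: with all entries fixed at +1 it reduces to the supergaussian-only case, while the switching criterion lets the same architecture handle subgaussian sources.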