Kenneth Kreutz-Delgado
Journal Articles
Neural Computation (2021) 33 (9): 2408–2438.
Published: 19 August 2021
Abstract
Electromagnetic source imaging (ESI) and independent component analysis (ICA) are two popular and apparently dissimilar frameworks for M/EEG analysis. This letter shows that the two frameworks can be linked by choosing biologically inspired source sparsity priors. We demonstrate that ESI carried out by the sparse Bayesian learning (SBL) algorithm yields source configurations composed of a few active regions that are also maximally independent from one another. In addition, we extend the standard SBL approach to source imaging in two important directions. First, we augment the generative model of M/EEG to include artifactual sources. Second, we modify SBL to allow for efficient model inversion with sequential data. We refer to this new algorithm as recursive SBL (RSBL), a source estimation filter with potential for online and offline imaging applications. We use simulated data to verify that RSBL can accurately estimate and demix cortical and artifactual sources under different noise conditions. Finally, we show that on real error-related EEG data, RSBL can yield single-trial source estimates in agreement with the experimental literature. Overall, by demonstrating that ESI can produce maximally independent sources while simultaneously localizing them in cortical space, we bridge the gap between the ESI and ICA frameworks for M/EEG analysis.
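The core computation behind SBL can be illustrated with a short sketch: a linear generative model y = Lx + noise with a zero-mean Gaussian prior on each source whose variance (gamma) is learned by EM, so that most variances shrink toward zero and only a few sources remain active. This is the generic SBL update, not the paper's RSBL algorithm; the lead field, noise level, and the warm-starting of gamma across sequential data blocks below are illustrative assumptions.

# Minimal sparse Bayesian learning (SBL) sketch for a linear M/EEG-style model
# y = L @ x + noise, with x ~ N(0, diag(gamma)) and noise ~ N(0, sigma2 * I).
# Generic EM form of SBL; the sequential "warm start" across data blocks
# below is an illustrative assumption, not the paper's RSBL filter.
import numpy as np

def sbl_block(L, y, gamma, sigma2=1e-2, n_iter=50):
    """One block of SBL: update source variances gamma given data y."""
    n_chan, n_src = L.shape
    for _ in range(n_iter):
        # Gaussian posterior over the sources given the current gamma.
        G = np.diag(gamma)
        Sigma_y = sigma2 * np.eye(n_chan) + L @ G @ L.T
        K = G @ L.T @ np.linalg.solve(Sigma_y, np.eye(n_chan))   # gain matrix
        mu = K @ y
        Sigma_x_diag = gamma - np.einsum('ij,ji->i', K, L @ G)
        # EM update: most gammas shrink toward zero -> sparse source map.
        gamma = mu**2 + Sigma_x_diag
    return gamma, mu

# Toy usage: 32 channels, 200 candidate sources, 3 truly active ones.
rng = np.random.default_rng(0)
L = rng.standard_normal((32, 200))
x_true = np.zeros(200); x_true[[10, 50, 120]] = [2.0, -1.5, 1.0]
gamma = np.ones(200)
for t in range(5):                         # sequential data blocks
    y = L @ x_true + 0.1 * rng.standard_normal(32)
    gamma, mu = sbl_block(L, y, gamma)     # warm-start gamma from prior block
print(np.argsort(gamma)[-3:])              # indices of the most active sources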
Journal Articles
Neural Computation (2007) 19 (9): 2301–2352.
Published: 01 September 2007
Abstract
We present a hierarchical architecture and learning algorithm for visual recognition and other visual inference tasks such as imagination, reconstruction of occluded images, and expectation-driven segmentation. Using properties of biological vision for guidance, we posit a stochastic generative world model and from it develop a simplified world model (SWM) based on a tractable variational approximation that is designed to enforce sparse coding. Recent developments in computational methods for learning overcomplete representations (Lewicki & Sejnowski, 2000; Teh, Welling, Osindero, & Hinton, 2003) suggest that overcompleteness can be useful for visual tasks, and we use an overcomplete dictionary learning algorithm (Kreutz-Delgado et al., 2003) as a preprocessing stage to produce accurate, sparse codings of images. Inference is performed by constructing a dynamic multilayer network with feedforward, feedback, and lateral connections, which is trained to approximate the SWM. Learning is done with a variant of the back-propagation-through-time algorithm, which encourages convergence to desired states within a fixed number of iterations. Vision tasks require large networks, and to make learning efficient, we take advantage of the sparsity of each layer to update only a small subset of elements in a large weight matrix at each iteration. Experiments on a set of rotated objects demonstrate various types of visual inference and show that increasing the degree of overcompleteness improves recognition performance in difficult scenes with occluded objects in clutter.
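As a rough illustration of the sparse-coding preprocessing stage, the sketch below encodes a signal as a sparse combination of atoms from a fixed overcomplete dictionary. The paper itself uses FOCUSS-derived codes from the Kreutz-Delgado et al. (2003) dictionary learning algorithm; here a generic ISTA (iterative soft-thresholding) l1 solver, the patch dimension, and the overcompleteness factor are stand-in assumptions.

# Sketch of the sparse-coding preprocessing step: encode an image patch as a
# sparse combination of atoms from a fixed overcomplete dictionary. A generic
# ISTA l1 solver stands in for the FOCUSS-based codes used in the paper.
import numpy as np

def ista_sparse_code(D, y, lam=0.1, n_iter=200):
    """Solve min_x 0.5*||y - D x||^2 + lam*||x||_1 by ISTA."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1 / Lipschitz constant
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return x

# Toy usage: 64-dimensional patches, 4x overcomplete dictionary (256 atoms).
rng = np.random.default_rng(1)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
y = D[:, [3, 77, 190]] @ np.array([1.0, -0.8, 0.5])
x = ista_sparse_code(D, y)
print(np.nonzero(np.abs(x) > 1e-3)[0])          # indices of active atoms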
Journal Articles
Neural Computation (2003) 15 (2): 349–396.
Published: 01 February 2003
Abstract
Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial “25 words or less”), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an overcomplete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error).
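The alternating structure described here can be sketched as follows: a regularized FOCUSS-style reweighted step updates the sparse code of each signal, and a least-squares fit to those codes (followed by column renormalization) updates the dictionary. The exponent p, the regularization constants, and the initialization below are simplified assumptions, not the paper's exact algorithm.

# Schematic of the alternating scheme in the abstract: a regularized
# FOCUSS-style reweighted step for the sparse codes of each signal, then a
# least-squares dictionary update with column renormalization.
import numpy as np

def focuss_step(A, y, x, lam=1e-2, p=1.0):
    """One regularized FOCUSS iteration: x <- Pi A^T (A Pi A^T + lam I)^-1 y."""
    Pi = np.diag(np.abs(x) ** (2.0 - p))        # reweighting from current code
    G = A @ Pi @ A.T + lam * np.eye(A.shape[0])
    return Pi @ A.T @ np.linalg.solve(G, y)

def dictionary_learning(Y, n_atoms, n_outer=20, n_inner=10, seed=0):
    rng = np.random.default_rng(seed)
    m, n_sig = Y.shape
    A = rng.standard_normal((m, n_atoms))
    A /= np.linalg.norm(A, axis=0)              # unit-norm atoms
    X = rng.standard_normal((n_atoms, n_sig)) * 0.1
    for _ in range(n_outer):
        for j in range(n_sig):                  # sparse codes, signal by signal
            for _ in range(n_inner):
                X[:, j] = focuss_step(A, Y[:, j], X[:, j])
        # Dictionary update: least-squares fit to the current sparse codes.
        A = Y @ X.T @ np.linalg.inv(X @ X.T + 1e-6 * np.eye(n_atoms))
        A /= np.linalg.norm(A, axis=0) + 1e-12
    return A, X

# Toy usage: learn a 2x-overcomplete dictionary from sparse synthetic data.
rng = np.random.default_rng(2)
A_true = rng.standard_normal((20, 40)); A_true /= np.linalg.norm(A_true, axis=0)
X_true = rng.standard_normal((40, 500)) * (rng.random((40, 500)) < 0.05)
A_hat, X_hat = dictionary_learning(A_true @ X_true, n_atoms=40)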