Lars Kai Hansen
1-9 of 9
Journal Articles
Neural Computation (2021) 33 (4): 967–1004.
Published: 26 March 2021
Abstract
Sustained attention is a cognitive ability to maintain task focus over extended periods of time (Mackworth, 1948; Chun, Golomb, & Turk-Browne, 2011). In this study, scalp electroencephalography (EEG) signals were processed in real time using a 32 dry-electrode system during a sustained visual attention task. An attention training paradigm was implemented, as designed in DeBettencourt, Cohen, Lee, Norman, and Turk-Browne (2015), in which the composition of a sequence of blended images is updated based on the participant's decoded attentional level to a primed image category. It was hypothesized that a single neurofeedback training session would improve sustained attention abilities. Twenty-two participants were trained on a single neurofeedback session with behavioral pretraining and posttraining sessions within three consecutive days. Half of the participants functioned as controls in a double-blinded design and received sham neurofeedback. During the neurofeedback session, attentional states to primed categories were decoded in real time and used to provide a continuous feedback signal customized to each participant in a closed-loop approach. We report a mean classifier decoding error rate of 34.3% (chance = 50%). Within the neurofeedback group, there was a greater level of task-relevant attentional information decoded in the participant's brain before making a correct behavioral response than before an incorrect response. This effect was not visible in the control group (interaction p = 7.23e-4), which strongly indicates that we were able to achieve a meaningful measure of subjective attentional state in real time and control participants' behavior during the neurofeedback session. We do not provide conclusive evidence as to whether the single neurofeedback session per se produced lasting effects on sustained attention abilities. We developed a portable EEG neurofeedback system capable of decoding attentional states and predicting behavioral choices in the attention task at hand. The neurofeedback code framework is Python based and open source, and it allows users to actively engage in the development of neurofeedback tools for scientific and translational use.
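To make the closed-loop idea concrete, here is a minimal sketch of one real-time decoding step, with simulated band-power features and a scikit-learn logistic regression standing in for the actual decoder; the helper feedback_signal and all sizes and constants are hypothetical illustrations, not the paper's implementation.

# Minimal closed-loop neurofeedback sketch (illustrative only).
# Assumes a trained binary classifier over EEG features; here we
# simulate band-power features for two attentional states.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated training data: 200 epochs x 32 channels of band power,
# with a small class-dependent mean shift standing in for real EEG.
X_train = rng.normal(size=(200, 32))
y_train = rng.integers(0, 2, size=200)
X_train[y_train == 1] += 0.5
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def feedback_signal(epoch_features):
    # Map decoded attention probability to an image-blend proportion:
    # larger values let the task-relevant category dominate the
    # blended stimulus, as in the closed-loop paradigm described above.
    p_attend = clf.predict_proba(epoch_features.reshape(1, -1))[0, 1]
    return float(np.clip(p_attend, 0.1, 0.9))

# One step of the real-time loop on a new (simulated) EEG epoch.
new_epoch = rng.normal(size=32) + 0.5
print(f"blend proportion: {feedback_signal(new_epoch):.2f}")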
Journal Articles
Neural Computation (2018) 30 (1): 216–236.
Published: 01 January 2018
Abstract
Model-based classification of sequence data using a set of hidden Markov models is a well-known technique. The score function involved, often based on the class-conditional likelihood, can, however, be computationally demanding, especially for long data sequences. Inspired by recent theoretical advances in spectral learning of hidden Markov models, we propose a score function based on third-order moments. In particular, we propose to use the Kullback-Leibler divergence between theoretical and empirical third-order moments for classification of sequence data with discrete observations. The proposed method provides lower computational complexity at classification time than the usual likelihood-based methods. To demonstrate the properties of the proposed method, we perform classification of both simulated data and empirical data from a human activity recognition study.
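As a rough sketch of the idea under assumed conventions (initial distribution pi, transition matrix A with rows P(next state | state), emission matrix B of shape states x symbols), the triple moments of an HMM have a closed form and can be compared to smoothed empirical triple counts via KL divergence:

# Sketch of moment-based HMM sequence classification (assumed setup):
# score a sequence by the KL divergence between its empirical
# distribution of consecutive observation triples and the triple
# distribution implied by each class HMM, then pick the closest class.
import numpy as np

def hmm_triple_probs(pi, A, B):
    # Theoretical P(o1, o2, o3) for an HMM with initial distribution
    # pi, transition matrix A, and emission matrix B (states x symbols).
    n_sym = B.shape[1]
    P = np.zeros((n_sym, n_sym, n_sym))
    for o1 in range(n_sym):
        v1 = pi * B[:, o1]
        for o2 in range(n_sym):
            v2 = (v1 @ A) * B[:, o2]
            P[o1, o2, :] = (v2 @ A) @ B
    return P

def empirical_triple_probs(seq, n_sym, eps=1e-9):
    # Smoothed empirical distribution of overlapping triples.
    C = np.full((n_sym, n_sym, n_sym), eps)
    for t in range(len(seq) - 2):
        C[seq[t], seq[t + 1], seq[t + 2]] += 1.0
    return C / C.sum()

def classify(seq, models, n_sym):
    # models: list of (pi, A, B) tuples, one per class.
    Q = empirical_triple_probs(seq, n_sym)
    kls = [np.sum(Q * np.log(Q / (P + 1e-12)))
           for P in (hmm_triple_probs(*m) for m in models)]
    return int(np.argmin(kls))

Classification then costs one pass over the sequence plus a fixed-size tensor comparison, independent of the per-class forward recursions a likelihood score would require.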
Journal Articles
Neural Computation (2015) 27 (10): 2207–2230.
Published: 01 October 2015
Abstract
Correlated component analysis, as proposed by Dmochowski, Sajda, Dias, and Parra (2012), is a tool for investigating brain process similarity in the responses to multiple views of a given stimulus. Correlated components are identified under the assumption that the involved spatial networks are identical. Here we propose a hierarchical probabilistic model that can infer the level of universality in such multiview data, from completely unrelated representations, corresponding to canonical correlation analysis, to identical representations, as in correlated component analysis. This new model, which we denote Bayesian correlated component analysis, compares favorably against three relevant algorithms on simulated data. A well-established benchmark EEG data set is used to further validate the new model and infer the variability of spatial representations across multiple subjects.
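For orientation, here is a minimal sketch of the classical (non-Bayesian) correlated component analysis baseline that the paper generalizes. It assumes centered views sharing a channel dimension and uses a generalized eigenvalue formulation, which is one standard way to state the problem, not the paper's Bayesian algorithm.

# Classical correlated component analysis sketch: find projections
# maximizing between-view cross-covariance relative to within-view
# scatter, via a generalized symmetric eigenproblem.
import numpy as np
from scipy.linalg import eigh

def corrca(views, n_components=3):
    # views: list of (channels x samples) arrays, one per view/subject,
    # assumed centered and sharing the channel dimension.
    D = views[0].shape[0]
    Rw = np.zeros((D, D))   # pooled within-view scatter
    Rb = np.zeros((D, D))   # pooled between-view cross-scatter
    for i, Xi in enumerate(views):
        Rw += Xi @ Xi.T
        for j, Xj in enumerate(views):
            if i != j:
                Rb += Xi @ Xj.T
    # Solve Rb w = lambda Rw w; leading eigenvectors give the
    # correlated components (a small ridge keeps Rw well conditioned).
    Rb = 0.5 * (Rb + Rb.T)
    vals, vecs = eigh(Rb, Rw + 1e-9 * np.eye(D))
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:n_components]]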
Journal Articles
Neural Computation (2008) 20 (8): 2112–2131.
Published: 01 August 2008
Abstract
There is increasing interest in the analysis of large-scale multiway data. The concept of multiway data refers to arrays of data with more than two dimensions, that is, taking the form of tensors. To analyze such data, decomposition techniques are widely used. The two most common decompositions for tensors are the Tucker model and the more restricted PARAFAC model. Both models can be viewed as generalizations of regular factor analysis to data of more than two modalities. Nonnegative matrix factorization (NMF), in conjunction with sparse coding, has recently been given much attention due to its part-based and easily interpretable representation. While NMF has been extended to the PARAFAC model, no such attempt has been made to extend NMF to the Tucker model. However, if the tensor data analyzed are nonnegative, it may well be relevant to consider purely additive (i.e., nonnegative) Tucker decompositions. To reduce the ambiguities of this type of decomposition, we develop updates that can impose sparseness in any combination of modalities, and hence propose algorithms for sparse nonnegative Tucker decompositions (SN-TUCKER). We demonstrate how the proposed algorithms are superior to existing algorithms for Tucker decompositions when the data and interactions can be considered nonnegative. We further illustrate how sparse coding can help identify which model (PARAFAC or Tucker) is more appropriate for the data, as well as select the number of components by turning off excess components. The algorithms for SN-TUCKER can be downloaded from Mørup (2007).
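The full SN-TUCKER updates act on a core tensor and factor matrices; as a minimal illustrative stand-in, the two-way special case below shows multiplicative updates for nonnegative factorization with an L1 sparsity penalty (the lam parameter is a hypothetical knob, not a value from the paper).

# Two-way sketch: multiplicative updates for sparse NMF, the matrix
# special case of nonnegative Tucker/PARAFAC decompositions.
import numpy as np

def sparse_nmf(V, rank, lam=0.1, n_iter=200, eps=1e-9):
    # Factor nonnegative V (m x n) as W @ H with a sparse H.
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        # Multiplicative updates keep the factors nonnegative; the lam
        # term in H's denominator shrinks small entries toward zero,
        # mirroring how sparseness can switch off excess components.
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H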
Journal Articles
Neural Computation (2008) 20 (3): 738–755.
Published: 01 March 2008
Abstract
Nonlinear hemodynamic models express the BOLD (blood oxygenation level dependent) signal as a nonlinear, parametric functional of the temporal sequence of local neural activity. Several models have been proposed for both the neural activity and the hemodynamics. We compare two such combined models: the original balloon model with a square-pulse neural model (Friston, Mechelli, Turner, & Price, 2000) and an extended balloon model with a more sophisticated neural model (Buxton, Uludag, Dubowitz, & Liu, 2004). We learn the parameters of both models using a Bayesian approach, where the distribution of the parameters conditioned on the data is estimated using Markov chain Monte Carlo techniques. Using a split-half resampling procedure (Strother, Anderson, & Hansen, 2002), we compare the generalization abilities of the models as well as their reproducibility, for both synthetic and real data, recorded from two different visual stimulation paradigms. The results show that the simple model is the better one for these data.
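As a toy illustration of the MCMC ingredient (not the balloon model itself), here is a random-walk Metropolis sampler for the parameters of a gamma-shaped hemodynamic response driven by a square-pulse neural signal; the model, priors, and all constants are illustrative assumptions.

# Toy random-walk Metropolis sketch: square-pulse neural activity
# convolved with a gamma-shaped HRF (a stand-in for the balloon model),
# Gaussian observation noise, flat prior on a plausible box.
import numpy as np
from scipy.stats import gamma

dt, T = 0.5, 120.0
t = np.arange(0.0, T, dt)
neural = ((t % 30.0) < 10.0).astype(float)   # square-pulse activity

def predict_bold(amp, shape):
    hrf = gamma.pdf(np.arange(0.0, 30.0, dt), a=shape, scale=1.0)
    return amp * np.convolve(neural, hrf)[: len(t)] * dt

def log_post(theta, y, sigma=0.05):
    amp, shape = theta
    if amp <= 0 or shape <= 1:
        return -np.inf
    resid = y - predict_bold(amp, shape)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

rng = np.random.default_rng(1)
y_obs = predict_bold(1.0, 6.0) + 0.05 * rng.normal(size=len(t))

theta = np.array([0.5, 4.0])
samples = []
for _ in range(5000):
    prop = theta + 0.05 * rng.normal(size=2)   # random-walk proposal
    if np.log(rng.random()) < log_post(prop, y_obs) - log_post(theta, y_obs):
        theta = prop
    samples.append(theta.copy())
print("posterior mean:", np.mean(samples[1000:], axis=0))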
Journal Articles
Neural Computation (2007) 19 (4): 934–955.
Published: 01 April 2007
Abstract
We present a new algorithm for maximum likelihood convolutive independent component analysis (ICA) in which components are unmixed using stable autoregressive filters determined implicitly by estimating a convolutive model of the mixing process. By introducing a convolutive mixing model for the components, we show how the order of the filters in the model can be correctly detected using Bayesian model selection. We demonstrate a framework for deconvolving a subspace of independent components in electroencephalography (EEG). Initial results suggest that in some cases, convolutive mixing may be a more realistic model for EEG signals than the instantaneous ICA model.
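For readers unfamiliar with the convolutive mixing model, the following sketch simply constructs such a mixture in numpy; the dimensions and filter order are arbitrary illustrative choices.

# The convolutive mixing model: x(t) = sum_k A_k s(t - k), i.e., each
# sensor sees filtered, delayed mixtures of the sources rather than
# instantaneous combinations.
import numpy as np

rng = np.random.default_rng(0)
n_src, n_sens, L, T = 2, 3, 5, 1000

S = rng.laplace(size=(n_src, T))           # super-Gaussian sources
A = rng.normal(size=(L, n_sens, n_src))    # mixing filters of order L

X = np.zeros((n_sens, T))
for k in range(L):
    # Each lag k contributes A_k applied to the sources delayed by k.
    X[:, k:] += A[k] @ S[:, : T - k]

# Instantaneous ICA corresponds to L == 1; the abstract's point is that
# for EEG a filter order L > 1, detected by Bayesian model selection,
# may be the more realistic model.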
Journal Articles
Neural Computation (2005) 17 (9): 1921–1926.
Published: 01 September 2005
Abstract
We analyze convergence of the expectation maximization (EM) and variational Bayes EM (VBEM) schemes for parameter estimation in noisy linear models. The analysis shows that both schemes are inefficient in the low-noise limit. The linear model with additive noise includes as special cases independent component analysis, probabilistic principal component analysis, factor analysis, and Kalman filtering. Hence, the results are relevant for many practical applications.
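Here is a minimal sketch of one such special case, EM for probabilistic PCA in the style of Tipping and Bishop; running it on data generated with small noise illustrates the kind of slow low-noise convergence the paper analyzes (the specific constants are illustrative).

# EM for probabilistic PCA: X = Z @ W.T + noise, Z ~ N(0, I).
import numpy as np

rng = np.random.default_rng(0)
d, q, n = 10, 2, 2000
W_true = rng.normal(size=(d, q))
X = rng.normal(size=(n, q)) @ W_true.T + 0.01 * rng.normal(size=(n, d))
S = X.T @ X / n                     # sample covariance (zero mean here)

W = rng.normal(size=(d, q))
sigma2 = 1.0
for it in range(200):
    # E-step quantities.
    M = W.T @ W + sigma2 * np.eye(q)
    Minv = np.linalg.inv(M)
    # M-step updates (Tipping & Bishop form).
    SW = S @ W
    W_new = SW @ np.linalg.inv(sigma2 * np.eye(q) + Minv @ W.T @ SW)
    sigma2 = np.trace(S - SW @ Minv @ W_new.T) / d
    W = W_new
    if it % 50 == 0:
        print(f"iter {it}: sigma^2 = {sigma2:.6f}")

Watching sigma^2 creep toward its small true value (1e-4 here) over many iterations is exactly the low-noise inefficiency the abstract refers to.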
Journal Articles
Neural Computation (2002) 14 (4): 889–918.
Published: 01 April 2002
Abstract
We develop mean-field approaches for probabilistic independent component analysis (ICA). The sources are estimated from the mean of their posterior distribution, and the mixing matrix (and noise level) is estimated by maximum a posteriori (MAP). The latter requires the computation of (a good approximation to) the correlations between sources. For this purpose, we investigate three increasingly advanced mean-field methods: the variational (also known as naive mean-field) approach, linear response corrections, and an adaptive version of the Thouless, Anderson, and Palmer (1977) (TAP) mean-field approach due to Opper and Winther (2001). The resulting algorithms are tested on a number of problems. On synthetic data, the advanced mean-field approaches are able to recover the correct mixing matrix in cases where the variational mean-field theory fails. For handwritten digits, sparse encoding is achieved using nonnegative source and mixing priors. For speech, the mean-field method is able to separate in the underdetermined (overcomplete) case of two sensors and three sources. One major advantage of the proposed method is its generality and algorithmic simplicity. Finally, we point out several possible extensions of the approaches developed here.
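The paper's methods target non-Gaussian source priors, for which the posterior is intractable; purely as a reference point, the sketch below shows the Gaussian-prior case of the same noisy linear model, where the posterior source mean has a closed form (and naive mean field recovers it exactly).

# Noisy ICA model X = A S + noise; with a Gaussian prior S ~ N(0, I),
# the posterior mean of the sources is a ridge-type solve.
import numpy as np

def posterior_mean_sources(X, A, sigma2):
    # E[S | X] for X = A S + N(0, sigma2 I), S ~ N(0, I).
    q = A.shape[1]
    precision = A.T @ A / sigma2 + np.eye(q)   # posterior precision
    return np.linalg.solve(precision, A.T @ X / sigma2)

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 3))
S = rng.normal(size=(3, 500))
X = A @ S + 0.1 * rng.normal(size=(8, 500))
S_hat = posterior_mean_sources(X, A, 0.01)
print("correlation:", np.corrcoef(S[0], S_hat[0])[0, 1])

The non-Gaussian priors the paper actually uses (sparse, nonnegative) break this closed form, which is what motivates the variational, linear response, and adaptive TAP approximations.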
Journal Articles
Neural Computation (1994) 6 (6): 1223–1232.
Published: 01 November 1994
Abstract
Inspired by the recent upsurge of interest in Bayesian methods, we consider adaptive regularization. A generalization-based scheme for adaptation of regularization parameters is introduced and compared to Bayesian regularization. We show that pruning arises naturally within both adaptive regularization schemes. As a model example we have chosen the simplest possible: estimating the mean of a random variable with known variance. Marked similarities are found between the two methods in that they both involve a “noise limit,” below which they regularize with infinite weight decay, i.e., they prune. However, pruning is not always beneficial. We show explicitly that both methods may in some cases increase the generalization error. This corresponds to situations where the underlying assumptions of the regularizer are poorly matched to the environment.
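A small simulation in the spirit of this toy problem: shrink the sample mean with a plug-in decay factor that hits zero (prunes) when the squared sample mean falls below the noise limit sigma^2/N. The plug-in form is one standard choice, assumed here for illustration rather than taken from the paper.

# Estimate a mean mu from N samples with known variance using a
# shrinkage (weight-decay) estimator mu_hat = c * xbar; c = 0 below
# the noise limit corresponds to infinite weight decay, i.e., pruning.
import numpy as np

def shrunken_mean(x, sigma2):
    xbar, N = x.mean(), len(x)
    c = max(0.0, 1.0 - sigma2 / (N * xbar ** 2))   # c = 0 means pruned
    return c * xbar

rng = np.random.default_rng(0)
sigma2, N, trials = 1.0, 10, 20000
for mu in (0.0, 0.2, 1.0):
    err_ml, err_shrunk = 0.0, 0.0
    for _ in range(trials):
        x = mu + rng.normal(size=N)                # known unit variance
        err_ml += (x.mean() - mu) ** 2
        err_shrunk += (shrunken_mean(x, sigma2) - mu) ** 2
    print(f"mu={mu}: ML {err_ml/trials:.4f}  shrunk {err_shrunk/trials:.4f}")

Comparing the two errors across mu values shows both sides of the abstract's claim: shrinkage helps when mu is near zero, but pruning a truly nonzero mean near the noise limit can increase the generalization error.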