Suzanna Becker: 1-9 of 9 results
Journal Articles
Publisher: Journals Gateway
Neural Computation (2019) 31 (11): 2177–2211.
Published: 01 November 2019
Abstract
The brain is known to be active even when not performing any overt cognitive task, and it often engages in involuntary mind wandering. This resting state has been extensively characterized in terms of fMRI-derived brain networks. However, an alternative method has recently gained popularity: EEG microstate analysis. Proponents of microstates postulate that the brain discontinuously switches between four quasi-stable states defined by specific EEG scalp topographies at peaks in the global field power (GFP). These microstates are thought to be “atoms of thought,” involved in visual, auditory, salience, and attention processing. However, this method makes some major assumptions: it excludes EEG data outside the GFP peaks, clusters the EEG scalp topographies at the GFP peaks, and assumes that only one microstate is active at any given time. This study examines the evidence for these assumptions by analyzing the temporal dynamics of microstates and their clustering space with tools from dynamical systems analysis and from fractal and chaos theory, highlighting the shortcomings of microstate analysis. The results show evidence of complex and chaotic EEG dynamics outside the GFP peaks, which microstate analysis misses. Furthermore, the winner-takes-all assumption that only one microstate is active at a time is found to be inadequate: the dynamic EEG scalp topography does not always resemble that of the assigned microstate, and there is competition among the different microstate classes. Finally, clustering-space analysis shows that the four microstates do not form four distinct, separable clusters. Taken together, these results show that the discontinuous description of EEG microstates is inadequate for nonstationary, short-scale EEG dynamics.
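The pipeline this abstract critiques can be made concrete. The following is a minimal NumPy sketch (function names are illustrative, not from the paper) of the standard microstate recipe: compute global field power, keep only its peaks, and run polarity-invariant ("modified") k-means on the peak topographies.

```python
import numpy as np

def gfp(eeg):
    """Global field power: spatial std across electrodes at each sample.
    eeg: (n_electrodes, n_samples)."""
    return eeg.std(axis=0)

def gfp_peaks(g):
    """Indices of local maxima of the GFP time course."""
    return np.where((g[1:-1] > g[:-2]) & (g[1:-1] > g[2:]))[0] + 1

def cluster_topographies(eeg, peaks, n_states=4, n_iter=50, seed=0):
    """Polarity-invariant k-means on normalized topographies at GFP
    peaks; data between peaks is discarded, which is exactly the
    assumption the paper questions."""
    rng = np.random.default_rng(seed)
    X = eeg[:, peaks].T                        # (n_peaks, n_electrodes)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    maps = X[rng.choice(len(X), n_states, replace=False)]
    for _ in range(n_iter):
        corr = X @ maps.T                      # correlation with each map
        labels = np.abs(corr).argmax(axis=1)   # sign-ignoring, winner-takes-all
        for k in range(n_states):
            xk = X[labels == k]
            if len(xk):
                # first principal component = polarity-invariant mean map
                _, _, vt = np.linalg.svd(xk, full_matrices=False)
                maps[k] = vt[0]
    return maps, labels
```

Note the two choices the paper targets: only peak samples enter the clustering, and each peak is assigned to exactly one map.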
Neural Computation (2017) 29 (10): 2742–2768.
Published: 01 October 2017
Abstract
Brain-computer interfaces (BCIs) allow users to control a device by interpreting their brain activity. For simplicity, these devices are designed to be operated by purposefully modulating specific predetermined neurophysiological signals, such as the sensorimotor rhythm. However, the ability to modulate a given neurophysiological signal varies widely across individuals, contributing to the inconsistent performance of BCIs for different users. These differences suggest that individuals who experience poor BCI performance with one class of brain signals might have good results with another. To take advantage of individual abilities as they relate to BCI control, we need to move beyond the current approaches. In this letter, we explore a new BCI design aimed at a more individualized and user-focused experience, which we call an open-ended BCI. Individual users were given the freedom to discover their own mental strategies rather than being trained to modulate a given brain signal. They then underwent multiple coadaptive training sessions with the BCI. Our first open-ended BCI performed similarly to comparable BCIs while accommodating a wider variety of mental strategies without a priori knowledge of the specific brain signals any individual might use. Post hoc analysis revealed individual differences in which sensory modality yielded optimal performance. We found a large and significant effect of individual differences in background training and expertise, such as musical training, on BCI performance. Future research should focus on more generalized solutions to user training and brain-state decoding methods to fully utilize the abilities of different individuals in an open-ended BCI. Accounting for each individual's areas of expertise could have important implications for BCI training and BCI application design.
Neural Computation (2008) 20 (3): 709–737.
Published: 01 March 2008
Abstract
Numerous single-unit recording studies have found mammalian hippocampal neurons that fire selectively for the animal's location in space, independent of its orientation. The population of such neurons, commonly known as place cells, is thought to maintain an allocentric, or orientation-independent, internal representation of the animal's location in space, as well as to mediate long-term storage of spatial memories. The fact that spatial information from the environment must reach the brain via sensory receptors in an inherently egocentric, or viewpoint-dependent, fashion raises the question of how the brain learns to transform egocentric sensory representations into allocentric ones for long-term memory storage. Additionally, if these long-term memory representations of space are to be useful in guiding motor behavior, then the reverse transformation, from allocentric to egocentric coordinates, must also be learned. We propose that orientation-invariant representations can be learned by neural circuits that follow two learning principles: minimization of reconstruction error and maximization of representational temporal inertia. Two different neural network models are presented that adhere to these learning principles, the first by direct optimization through gradient descent and the second using a more biologically realistic circuit based on the restricted Boltzmann machine (Hinton, 2002; Smolensky, 1986). Both models lead to orientation-invariant representations, with the latter demonstrating place-cell-like responses when trained on a linear track environment.
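The learning rule underlying the second model can be sketched generically. Below is a minimal contrastive-divergence (CD-1) update for a binary restricted Boltzmann machine in the spirit of Hinton (2002); this is the textbook rule, not the paper's specific circuit, and all names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, rng, lr=0.05):
    """One contrastive-divergence (CD-1) update for a binary RBM.
    v0: (batch, n_visible) data; W: (n_visible, n_hidden) weights;
    b, c: visible and hidden biases. Returns reconstruction error."""
    ph0 = sigmoid(v0 @ W + c)                     # positive-phase hidden probs
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)                   # one-step reconstruction
    ph1 = sigmoid(pv1 @ W + c)                    # negative-phase hidden probs
    n = len(v0)
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n      # difference of correlations
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return float(((v0 - pv1) ** 2).mean())
```

Trained on sequences from a linear track, hidden units of such a network can develop the location-selective responses the abstract describes.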
Neural Computation (2006) 18 (12): 2942–2958.
Published: 01 December 2006
Abstract
Hearing loss due to peripheral damage is associated with cochlear hair cell damage or loss and some retrograde degeneration of auditory nerve fibers. Surviving auditory nerve fibers in the impaired region exhibit elevated and broadened frequency tuning, and the cochleotopic representation of broadband stimuli such as speech is distorted. In impaired cortical regions, increased tuning to frequencies near the edge of the hearing loss coupled with increased spontaneous and synchronous firing is observed. Tinnitus, an auditory percept in the absence of sensory input, may arise under these circumstances as a result of plastic reorganization in the auditory cortex. We present a spiking neuron model of auditory cortex that captures several key features of cortical organization. A key assumption in the model is that in response to reduced afferent excitatory input in the damaged region, a compensatory change in the connection strengths of lateral excitatory and inhibitory connections occurs. These changes allow the model to capture some of the cortical correlates of sensorineural hearing loss, including changes in spontaneous firing and synchrony; these phenomena may explain central tinnitus. This model may also be useful for evaluating procedures designed to segregate synchronous activity underlying tinnitus and for evaluating adaptive hearing devices that compensate for selective hearing loss.
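The model's central assumption can be illustrated with a toy homeostatic rule (names and functional form are hypothetical; the paper's spiking model is far more elaborate): units whose afferent drive is reduced by peripheral damage scale up their incoming lateral excitatory weights toward a target input level.

```python
import numpy as np

def compensate_lateral(W_lat, afferent, target=1.0):
    """Toy compensation sketch: units whose afferent drive falls below
    `target` scale up incoming lateral excitatory weights in proportion
    to the deficit. W_lat[i, j] is the weight from unit j to unit i."""
    deficit = np.maximum(target - afferent, 0.0)   # per-unit loss of drive
    gain = 1.0 + deficit / target                  # compensatory scaling
    return W_lat * gain[:, None]
```

With afferent drive of, say, [1, 1, 0.5, 0], the damaged units' incoming weights are scaled by 1.5 and 2, qualitatively the kind of change that can raise spontaneous and synchronous firing in the deafferented region.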
Neural Computation (2005) 17 (12): 2648–2671.
Published: 01 December 2005
Abstract
We propose a novel model-based hearing compensation strategy and gradient-free optimization procedure for a learning-based hearing aid design. Motivated by physiological data and by normal and impaired auditory nerve models, the hearing compensation strategy is cast as a neural coding problem, and a Neurocompensator is designed to compensate for the hearing loss and enhance speech. To learn the Neurocompensator's unknown parameters, we use a gradient-free optimization procedure, an improved version of the ALOPEX algorithm that we developed previously (Haykin, Chen, & Becker, 2004). We present our methodology, learning procedure, and experimental results in detail, and we also discuss the unsupervised learning and optimization methods.
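For readers unfamiliar with ALOPEX, here is a minimal sketch of the classic correlation-based update (not the improved variant the letter develops): each parameter's next change is anticorrelated with the product of its last change and the last change in cost, plus exploratory noise, so no gradient of the cost is ever needed.

```python
import numpy as np

def alopex_minimize(f, w0, n_iter=2000, gamma=0.01, sigma=0.01, seed=0):
    """Simplified ALOPEX-style gradient-free minimization of f(w).
    gamma: correlation step size; sigma: exploratory noise scale."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    e_prev = f(w)
    dw = rng.normal(0.0, sigma, size=w.shape)   # initial random perturbation
    for _ in range(n_iter):
        w = w + dw
        e = f(w)
        de = e - e_prev
        # undo changes that raised the cost, keep changes that lowered it
        dw = -gamma * np.sign(dw * de) + rng.normal(0.0, sigma, size=w.shape)
        e_prev = e
    return w, e_prev
```

Because the rule only needs cost evaluations, it suits objectives like intelligibility scores for which gradients are unavailable, which is the setting the letter describes.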
Neural Computation (2005) 17 (2): 361–395.
Published: 01 February 2005
Abstract
The functional role of dopamine has attracted a great deal of interest ever since it was empirically discovered that dopamine-blocking drugs could be used to treat psychosis. Specifically, the D2 receptor and its expression in the ventral striatum have emerged as pivotal in our understanding of the complex role of the neuromodulator in schizophrenia, reward, and motivation. Our departure from the ubiquitous temporal difference (TD) model of dopamine neuron firing allows us to account for a range of experimental evidence suggesting that ventral striatal dopamine D2 receptor manipulation selectively modulates motivated behavior for distal versus proximal outcomes. Whether an internal model or the TD approach (or a mixture) is better suited to a comprehensive exposition of tonic and phasic dopamine will have important implications for our understanding of reward, motivation, schizophrenia, and impulsivity. We also use the model to help unite some of the leading cognitive hypotheses of dopamine function under a computational umbrella. We have used the model ourselves to stimulate and focus new rounds of experimental research.
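The TD account that this model departs from can be stated in two lines. In the standard formulation, phasic dopamine is identified with the reward prediction error delta; a generic sketch (not the authors' internal model):

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One temporal-difference update of a state-value table V.
    delta, the reward prediction error, is the quantity classically
    identified with phasic dopamine firing."""
    delta = r + gamma * V[s_next] - V[s]   # prediction error
    V[s] += alpha * delta                  # move value toward target
    return delta
```

An unexpected reward yields a positive delta; once the reward is fully predicted, delta at reward delivery shrinks toward zero, and the debate the abstract describes concerns what this framework leaves out about tonic dopamine and motivation for distal outcomes.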
Neural Computation (2004) 16 (9): 1851–1872.
Published: 01 September 2004
Abstract
Various lines of evidence indicate that animals process spatial information regarding object locations differently from spatial information regarding environmental boundaries or landmarks. Following Wang and Spelke's (2002) observation that spatial updating of egocentric representations appears to lie at the heart of many navigational tasks in many species, including humans, we postulate a neural circuit that can support this computation in parietal cortex, assuming that egocentric representations of multiple objects can be maintained in prefrontal cortex in spatial working memory (not simulated here). Our method is a generalization of an earlier model by Droulez and Berthoz (1991), with extensions to support observer rotation. We can thereby simulate perspective transformation of working memory representations of object coordinates based on an egomotion signal presumed to be generated via mental navigation. This biologically plausible transformation would allow a subject to recall the locations of previously viewed objects from novel viewpoints reached via imagined, discontinuous, or disoriented displacement. Finally, we discuss how this model can account for a wide range of experimental findings regarding memory for object locations, and we present several predictions made by the model.
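The core computation here, perspective transformation of remembered egocentric coordinates driven by an egomotion signal, reduces geometrically to a counter-translation and counter-rotation; the paper implements it in a neural circuit, but a coordinate-level sketch (conventions are illustrative) is:

```python
import numpy as np

def update_egocentric(points, rotation, translation):
    """Spatial updating of remembered egocentric (x, y) object locations
    after egomotion: counter-translate, then counter-rotate, so each
    location stays correct in the new body-centered frame.
    points: (n, 2); rotation: observer's counterclockwise turn in
    radians; translation: observer's displacement in the old frame."""
    c, s = np.cos(-rotation), np.sin(-rotation)
    R = np.array([[c, -s], [s, c]])
    return (points - translation) @ R.T
```

Feeding this update an imagined rather than actual egomotion signal is what allows recall of object locations from novel, never-experienced viewpoints, as in the mental-navigation account above.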
Neural Computation (1999) 11 (2): 347–374.
Published: 15 February 1999
Abstract
A novel architecture and set of learning rules for cortical self-organization is proposed. The model is based on the idea that multiple information channels can modulate one another's plasticity. Features learned from bottom-up information sources can thus be influenced by those learned from contextual pathways, and vice versa. A maximum likelihood cost function allows this scheme to be implemented in a biologically feasible, hierarchical neural circuit. In simulations of the model, we first demonstrate the utility of temporal context in modulating plasticity. The model learns a representation that categorizes people's faces according to identity, independent of viewpoint, by taking advantage of the temporal continuity in image sequences. In a second set of simulations, we add plasticity to the contextual stream and explore variations in the architecture. In this case, the model learns a two-tiered representation, starting with a coarse view-based clustering and proceeding to a finer clustering of more specific stimulus features. This model provides a tenable account of how people may perform 3D object recognition in a hierarchical, bottom-up fashion.
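The idea of one channel modulating another's plasticity can be caricatured in a few lines. This is a hypothetical sketch, not the paper's maximum likelihood rule: a Hebbian update of the bottom-up weights is gated by the agreement between the bottom-up response and the contextual stream's prediction.

```python
import numpy as np

def context_gated_hebb(W, x, y_context, lr=0.01):
    """Hypothetical cross-channel plasticity sketch: the update of the
    bottom-up weights W for input x is scaled by how well the contextual
    prediction y_context matches the bottom-up response W @ x."""
    y = W @ x
    gate = np.exp(-np.sum((y - y_context) ** 2))  # 1 when streams agree
    W = W + lr * gate * np.outer(y_context, x)    # context steers learning
    return W, gate
```

Under such gating, features that are consistent across time (temporal context) get reinforced, which is the intuition behind the viewpoint-invariant face clustering described above.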
Neural Computation (1993) 5 (2): 267–277.
Published: 01 March 1993
Abstract
We have previously described an unsupervised learning procedure that discovers spatially coherent properties of the world by maximizing the information that parameters extracted from different parts of the sensory input convey about some common underlying cause. When given random dot stereograms of curved surfaces, this procedure learns to extract surface depth because that is the property that is coherent across space. It also learns how to interpolate the depth at one location from the depths at nearby locations (Becker and Hinton 1992b). In this paper, we propose two new models that handle surfaces with discontinuities. The first model attempts to detect cases of discontinuities and reject them. The second model develops a mixture of expert interpolators. It learns to detect the locations of discontinuities and to invoke specialized, asymmetric interpolators that do not cross the discontinuities.
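The "common underlying cause" objective from Becker and Hinton (1992) has a compact form: under a Gaussian noise model, the mutual information between two modules' scalar outputs a and b is approximated by 0.5 log(Var(a+b)/Var(a-b)). A sketch of the objective itself (the learning procedures above ascend its gradient; the discontinuity models modify how and where it is applied):

```python
import numpy as np

def imax_objective(a, b):
    """Spatial-coherence objective: high when a and b share a common
    signal corrupted by independent noise, near zero when they are
    unrelated. a, b: 1-D arrays of the two modules' outputs over cases."""
    return 0.5 * np.log(np.var(a + b) / np.var(a - b))
```

Two outputs that both track surface depth across neighboring patches make this quantity large; at a depth discontinuity the coherence breaks down, which is exactly the case the two new models are built to detect and handle.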