1-20 of 43
Terrence J. Sejnowski
Journal Articles
Publisher: Journals Gateway
Neural Computation (2024) 36 (5): 781–802.
Published: 23 April 2024
Abstract
Variation in the strength of synapses can be quantified by measuring the anatomical properties of synapses. Quantifying the precision of synaptic plasticity is fundamental to understanding information storage and retrieval in neural circuits. Synapses from the same axon onto the same dendrite have a common history of coactivation, making them ideal candidates for determining the precision of synaptic plasticity based on the similarity of their physical dimensions. Here, the precision and amount of information stored in synapse dimensions were quantified with Shannon information theory, expanding prior analysis that used signal detection theory (Bartol et al., 2015). The two methods were compared using dendritic spine head volumes in the middle of the stratum radiatum of hippocampal area CA1 as well-defined measures of synaptic strength. Information theory delineated the number of distinguishable synaptic strengths based on nonoverlapping bins of dendritic spine head volumes. Shannon entropy was applied to measure synaptic information storage capacity (SISC) and resulted in a lower bound of 4.1 bits and an upper bound of 4.59 bits of information based on 24 distinguishable sizes. We further compared the distribution of distinguishable sizes with a uniform distribution using Kullback-Leibler divergence and discovered a nearly uniform distribution of spine head volumes across the sizes, suggesting optimal use of the distinguishable values. Thus, SISC provides a new analytical measure that can be generalized to probe synaptic strengths and capacity for plasticity in different brain regions of different species and among animals raised in different conditions or during learning. How brain diseases and disorders affect the precision of synaptic plasticity can also be probed.
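The entropy and divergence calculations described above can be sketched in a few lines. This is a minimal illustration with synthetic volumes, not the study's data or pipeline; the lognormal sample and bin placement are assumptions, while the 24-bin count and the log2(24) ≈ 4.59-bit ceiling come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for measured spine head volumes; the real study used
# 3D reconstructions from hippocampal area CA1.
volumes = rng.lognormal(mean=-2.0, sigma=0.8, size=2000)

# Bin volumes into 24 distinguishable sizes, the count reported in the paper.
n_bins = 24
counts, _ = np.histogram(volumes, bins=n_bins)
p = counts / counts.sum()

# Shannon entropy in bits: the information stored per synapse (SISC).
nonzero = p[p > 0]
entropy_bits = -np.sum(nonzero * np.log2(nonzero))

# Upper bound: a uniform distribution over the 24 bins (~4.59 bits).
upper_bound = np.log2(n_bins)

# Kullback-Leibler divergence from uniform; 0 means perfectly uniform use
# of the distinguishable sizes.
kl_bits = np.sum(nonzero * np.log2(nonzero * n_bins))

print(f"entropy = {entropy_bits:.2f} bits (max {upper_bound:.2f}), KL = {kl_bits:.3f}")
```

Note that against a uniform reference, KL divergence is exactly the gap between the entropy ceiling and the measured entropy, which is why a near-zero KL implies near-optimal use of the available sizes.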
Neural Computation (2023) 35 (3): 309–342.
Published: 17 February 2023
Abstract
Large language models (LLMs) have been transformative. They are pretrained foundational models that are self-supervised and can be adapted with fine-tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and, more recently, LaMDA, both of them LLMs, can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range of reactions and debate on whether these LLMs understand what they are saying or exhibit signs of intelligence. This high variance is exhibited in three interviews with LLMs reaching wildly different conclusions. A new possibility was uncovered that could explain this divergence. What appears to be intelligence in LLMs may in fact be a mirror that reflects the intelligence of the interviewer, a remarkable twist that could be considered a reverse Turing test. If so, then by studying interviews, we may be learning more about the intelligence and beliefs of the interviewer than the intelligence of the LLMs. As LLMs become more capable, they may transform the way we interact with machines and how they interact with each other. Increasingly, LLMs are being coupled with sensorimotor devices. LLMs can talk the talk, but can they walk the walk? A road map for achieving artificial general autonomy is outlined with seven major improvements inspired by brain systems and how LLMs could in turn be used to uncover new insights into brain function.
Neural Computation (2021) 33 (12): 3264–3287.
Published: 12 November 2021
Abstract
Recurrent neural network (RNN) models trained to perform cognitive tasks are a useful computational tool for understanding how cortical circuits execute complex computations. However, these models are often composed of units that interact with one another using continuous signals and overlook parameters intrinsic to spiking neurons. Here, we developed a method to directly train not only synaptic-related variables but also membrane-related parameters of a spiking RNN model. Training our model on a wide range of cognitive tasks resulted in diverse yet task-specific synaptic and membrane parameters. We also show that fast membrane time constants and slow synaptic decay dynamics naturally emerge from our model when it is trained on tasks associated with working memory (WM). Further dissecting the optimized parameters revealed that fast membrane properties are important for encoding stimuli, and slow synaptic dynamics are needed for WM maintenance. This approach offers a unique window into how connectivity patterns and intrinsic neuronal properties contribute to complex dynamics in neural populations.
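The role of the membrane time constant can be illustrated with a toy leaky integrate-and-fire unit: a fast membrane tracks its input more quickly and fires sooner. This is a hypothetical, dimensionless sketch (threshold, input, and time constants are invented), not the paper's trained spiking RNN.

```python
import numpy as np

def lif_spike_count(tau_m, i_ext=1.5, dt=1e-4, t_max=0.5,
                    v_th=1.0, v_reset=0.0):
    """Count spikes of a leaky integrate-and-fire unit driven by a
    constant input current (illustrative units only)."""
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        # Euler step of tau_m * dv/dt = -v + i_ext
        v += dt / tau_m * (-v + i_ext)
        if v >= v_th:
            v = v_reset
            spikes += 1
    return spikes

fast = lif_spike_count(tau_m=0.01)   # 10 ms membrane
slow = lif_spike_count(tau_m=0.10)   # 100 ms membrane
print(fast, slow)
```

With identical drive, the unit with the shorter membrane time constant reaches threshold far more often, which is the intuition behind fast membranes being better stimulus encoders.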
Neural Computation (2021) 33 (11): 2908–2950.
Published: 12 October 2021
Abstract
Replay is the reactivation of one or more neural patterns that are similar to the activation patterns experienced during past waking experiences. Replay was first observed in biological neural networks during sleep, and it is now thought to play a critical role in memory formation, retrieval, and consolidation. Replay-like mechanisms have been incorporated in deep artificial neural networks that learn over time to avoid catastrophic forgetting of previous knowledge. Replay algorithms have been successfully used in a wide range of deep learning methods within supervised, unsupervised, and reinforcement learning paradigms. In this letter, we provide the first comprehensive comparison between replay in the mammalian brain and replay in artificial neural networks. We identify multiple aspects of biological replay that are missing in deep learning systems and hypothesize how they could be used to improve artificial neural networks.
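The replay-like mechanism used in deep learning can be sketched as a minimal experience-replay buffer: store transitions from the data stream and rehearse random mini-batches alongside new experience. This is a generic sketch of the idea, not code from the letter; the class name and capacities are arbitrary.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity buffer: store transitions, then rehearse random
    mini-batches interleaved with new experience (the 'replay' idea)."""
    def __init__(self, capacity, seed=0):
        self.buffer = deque(maxlen=capacity)
        self.rng = random.Random(seed)

    def add(self, transition):
        self.buffer.append(transition)   # oldest item evicted when full

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlations of the stream,
        # which is what protects the learner from catastrophic forgetting.
        return self.rng.sample(list(self.buffer), batch_size)

buf = ReplayBuffer(capacity=100)
for step in range(250):                  # a stream longer than capacity
    buf.add((step, f"obs-{step}"))
batch = buf.sample(8)
```

Biological replay is far richer than this uniform rehearsal (it is selective, temporally structured, and state-dependent), which is exactly the gap the letter analyzes.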
Neural Computation (2020) 32 (12): 2389–2421.
Published: 01 December 2020
Abstract
Measuring functional connectivity from fMRI recordings is important for understanding processing in cortical networks. However, because the brain's connection pattern is complex, currently used methods are prone to producing false functional connections. We introduce differential covariance analysis, a new method that uses derivatives of the signal to estimate functional connectivity. We generated neural activities from dynamical causal modeling and a neural network of Hodgkin-Huxley neurons and then converted them to hemodynamic signals using the forward balloon model. The simulated fMRI signals, together with the ground-truth connectivity pattern, were used to benchmark our method against other commonly used methods. Differential covariance achieved better results in complex network simulations. This new method opens an alternative way to estimate functional connectivity.
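The core idea, correlating the temporal derivative of each signal with the signals themselves rather than correlating the signals alone, can be sketched on a toy linear network. The three-node system, the coupling strength, and the plain derivative estimator below are illustrative assumptions, not the paper's fMRI pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 3-node linear stochastic system: node 0 drives node 1; node 2 is
# independent of both.
T, dt = 20000, 0.01
x = np.zeros((3, T))
noise = rng.standard_normal((3, T))
for t in range(T - 1):
    drive = np.array([0.0, 2.0 * x[0, t], 0.0])    # 0 -> 1 coupling
    x[:, t + 1] = x[:, t] + dt * (-x[:, t] + drive) + np.sqrt(dt) * noise[:, t]

# Differential covariance: covariance between dx/dt and x, instead of
# the ordinary covariance between x and x.
dx = np.gradient(x, dt, axis=1)
x_c = x - x.mean(axis=1, keepdims=True)
dx_c = dx - dx.mean(axis=1, keepdims=True)
diff_cov = dx_c @ x_c.T / T

print(diff_cov.round(3))
```

In this toy system the entry pairing node 1's derivative with node 0's activity reflects the true 0→1 drive, while the entry pairing it with the unconnected node 2 stays near zero.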
Neural Computation (2019) 31 (7): 1271–1326.
Published: 01 July 2019
Abstract
Epilepsy is a neurological disorder characterized by the sudden occurrence of unprovoked seizures. There is extensive evidence of significantly altered brain connectivity during seizure periods in the human brain. Research on analyzing human brain functional connectivity during epileptic seizures has been limited predominantly to the use of the correlation method. However, spurious connectivity can be measured between two brain regions without any direct connection or interaction between them. Correlations can be due to the apparent interactions of the two brain regions resulting from common input from a third region, which may or may not be observed. Hence, researchers have recently proposed a sparse-plus-latent-regularized precision matrix (SLRPM) method for when there are unobserved or latent regions interacting with the observed regions. The SLRPM method yields partial correlations of the conditional statistics of the observed regions given the latent regions, thus identifying observed regions that are conditionally independent of both the observed and latent regions. We evaluate the performance of the methods using a spring-mass artificial network, assuming that some nodes cannot be observed and thus constitute the latent variables in the example. Several cases were considered, including both sparse and dense connections, short-range and long-range connections, and a varying number of latent variables. The SLRPM method was then applied to estimate brain connectivity during epileptic seizures from human ECoG recordings. Seventy-four clinical seizures from five patients, all having complex partial epilepsy, were analyzed using SLRPM, and brain connectivity was quantified using modularity index, clustering coefficient, and eigenvector centrality. Furthermore, using a measure of latent inputs estimated by the SLRPM method, it was possible to automatically detect 72 of the 74 seizures with four false positives and to find six seizures that were not marked manually.
Neural Computation (2017) 29 (12): 3181–3218.
Published: 01 December 2017
Abstract
High-density electrocorticogram (ECoG) electrodes are capable of recording neurophysiological data with high temporal resolution with wide spatial coverage. These recordings are a window to understanding how the human brain processes information and subsequently behaves in healthy and pathologic states. Here, we describe and implement delay differential analysis (DDA) for the characterization of ECoG data obtained from human patients with intractable epilepsy. DDA is a time-domain analysis framework based on embedding theory in nonlinear dynamics that reveals the nonlinear invariant properties of an unknown dynamical system. The DDA embedding serves as a low-dimensional nonlinear dynamical basis onto which the data are mapped. This greatly reduces the risk of overfitting and improves the method's ability to fit classes of data. Since the basis is built on the dynamical structure of the data, preprocessing of the data (e.g., filtering) is not necessary. We performed a large-scale search for a DDA model that best fit ECoG recordings using a genetic algorithm to qualitatively discriminate between different cortical states and epileptic events for a set of 13 patients. A single DDA model with only three polynomial terms was identified. Singular value decomposition across the feature space of the model revealed both global and local dynamics that could differentiate electrographic and electroclinical seizures and provided insights into highly localized seizure onsets and diffuse seizure terminations. Other common ECoG features such as interictal periods, artifacts, and exogenous stimuli were also analyzed with DDA. This novel framework for signal processing of seizure information demonstrates an ability to reveal unique characteristics of the underlying dynamics of the seizure and may be useful in better understanding, detecting, and maybe even predicting seizures.
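A DDA-style model, a small set of delayed and nonlinear terms regressed against the signal's derivative, can be sketched as follows. The delays, the particular three-term form, and the test signal are hypothetical stand-ins; the paper's model was selected by a genetic algorithm over ECoG data and is not reproduced here.

```python
import numpy as np

# Toy signal: a noisy 2 Hz sine sampled at 200 Hz.
fs = 200.0
t = np.arange(0, 20, 1 / fs)
x = (np.sin(2 * np.pi * 2.0 * t)
     + 0.01 * np.random.default_rng(2).standard_normal(t.size))

tau1, tau2 = 10, 25          # delays in samples (hypothetical)
d = max(tau1, tau2)
dxdt = np.gradient(x, 1 / fs)

# Design matrix: two delayed terms and one nonlinear (cubic) term,
# mirroring the "few polynomial terms" flavor of DDA models.
X = np.column_stack([x[d - tau1:-tau1], x[d - tau2:-tau2], x[d:]**3])
y = dxdt[d:]

coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coeffs
rho = np.corrcoef(pred, y)[0, 1]
print(coeffs.round(3), f"fit corr = {rho:.3f}")
```

The handful of fitted coefficients (and the residual fit quality) then serve as low-dimensional features for classifying cortical states, rather than as a generative model of the data.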
Neural Computation (2017) 29 (10): 2581–2632.
Published: 01 October 2017
Abstract
With our ability to record more neurons simultaneously, making sense of these data is a challenge. Functional connectivity is one popular way to study the relationship of multiple neural signals. Correlation-based methods are a set of currently well-used techniques for functional connectivity estimation. However, due to explaining away and unobserved common inputs (Stevenson, Rebesco, Miller, & Körding, 2008), they produce spurious connections. The general linear model (GLM), which models spike trains as Poisson processes (Okatan, Wilson, & Brown, 2005; Truccolo, Eden, Fellows, Donoghue, & Brown, 2005; Pillow et al., 2008), avoids these confounds. We develop here a new class of methods that uses differential signals based on simulated intracellular voltage recordings; the approach is equivalent to a regularized AR(2) model. We also extend the method to simulated local field potential recordings and calcium imaging. In all of our simulated data, the differential covariance-based methods achieved performance better than or similar to the GLM method and required fewer data samples. This new class of methods provides alternative ways to analyze neural signals.
Neural Computation (2017) 29 (7): 2004–2020.
Published: 01 July 2017
Abstract
In estimating the frequency spectrum of real-world time series data, we must violate the assumption of infinite-length, orthogonal components in the Fourier basis. While it is widely known that care must be taken with discretely sampled data to avoid aliasing of high frequencies, less attention is given to the influence of low-frequency components whose periods exceed the sampling time window. Here, we derive an analytic expression for the side-lobe attenuation of signal components in the frequency domain representation. This expression allows us to detail the influence of individual frequency components throughout the spectrum. The first consequence is that the presence of low-frequency components introduces a 1/f^2 component across the power spectrum, that is, a scaling exponent of 2. This scaling artifact can be composed of diffuse low-frequency components, which can make it difficult to detect a priori. Further, treatment of the signal with standard digital signal processing techniques cannot easily remove this scaling component. While several theoretical models have been introduced to explain the ubiquitous 1/f scaling component in neuroscientific data, we conjecture here that some experimental observations could be the result of such data analysis procedures.
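The leakage effect can be demonstrated directly: a component whose period is longer than the analysis window smears power across the whole spectrum, while a component that fits an integer number of cycles does not. The window length, frequencies, and comparison band below are arbitrary choices for the demonstration.

```python
import numpy as np

fs, T = 100.0, 10.0                       # 10 s window, 100 Hz sampling
t = np.arange(0, T, 1 / fs)

# A 'low frequency' with a 40 s period: only a quarter cycle fits the window.
slow = np.sin(2 * np.pi * (1 / 40.0) * t)
# A 2 Hz component fitting exactly 20 cycles, for contrast.
aligned = np.sin(2 * np.pi * 2.0 * t)

def power(x):
    p = np.abs(np.fft.rfft(x))**2
    return p / p.max()                    # normalize to the spectral peak

p_slow, p_aligned = power(slow), power(aligned)
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Compare leaked power well away from each component's own frequency.
band = (freqs > 5.0) & (freqs < 20.0)
leak_slow = p_slow[band].max()
leak_aligned = p_aligned[band].max()
print(f"leakage: slow={leak_slow:.2e}, aligned={leak_aligned:.2e}")
```

The sub-window-frequency component leaves measurable power at frequencies orders of magnitude above its own, spread with the side-lobe falloff that produces the apparent 1/f-like tail discussed in the abstract.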
Neural Computation (2017) 29 (3): 603–642.
Published: 01 March 2017
Abstract
The correlation method from brain imaging has been used to estimate functional connectivity in the human brain. However, two brain regions can show very high correlation even when they are not directly connected, because both interact strongly with common input from a third region. One previously proposed solution to this problem is to use a sparse regularized inverse covariance matrix, or precision matrix (SRPM), assuming that the connectivity structure is sparse. This method yields partial correlations to measure strong direct interactions between pairs of regions while simultaneously removing the influence of the rest of the regions, thus identifying regions that are conditionally independent. To test our methods, we first demonstrated conditions under which the SRPM method could indeed find the true physical connection between a pair of nodes for a spring-mass example and an RC circuit example. The recovery of the connectivity structure using the SRPM method can be explained by energy models using the Boltzmann distribution. We then demonstrated the application of the SRPM method for estimating brain connectivity during stage 2 sleep spindles from human electrocorticography (ECoG) recordings using an electrode array. The ECoG recordings that we analyzed were from a 32-year-old male patient with long-standing pharmaco-resistant left temporal lobe complex partial epilepsy. Sleep spindles were automatically detected using delay differential analysis and then analyzed with SRPM and the Louvain method for community detection. We found spatially localized brain networks within and between neighboring cortical areas during spindles, in contrast to the case when sleep spindles were not present.
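The common-input confound and the precision-matrix remedy can be shown on a minimal synthetic example (no sparsity regularization here, just the plain inverse covariance; the three-variable setup is an illustrative assumption).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50000

# Common-input confound: z drives both x and y; x and y are NOT connected.
z = rng.standard_normal(n)
x = z + 0.5 * rng.standard_normal(n)
y = z + 0.5 * rng.standard_normal(n)

cov = np.cov(np.vstack([x, y, z]))

# Marginal correlation between x and y is high despite no direct link.
corr_xy = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])

# Partial correlation, read off the precision (inverse covariance) matrix,
# conditions on z and correctly reports (near) independence.
prec = np.linalg.inv(cov)
partial_xy = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

print(f"corr = {corr_xy:.2f}, partial corr = {partial_xy:.2f}")
```

The sparse regularization in SRPM extends this idea to many regions, where the raw inverse covariance would be too noisy to threshold reliably.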
Neural Computation (2017) 29 (1): 50–93.
Published: 01 January 2017
Abstract
Our understanding of neural population coding has been limited by a lack of analysis methods to characterize spiking data from large populations. The biggest challenge comes from the fact that the number of possible network activity patterns scales exponentially with the number of neurons recorded (2^N for N neurons). Here we introduce a new statistical method for characterizing neural population activity that requires semi-independent fitting of only as many parameters as the square of the number of neurons, requiring drastically smaller data sets and minimal computation time. The model works by matching the population rate (the number of neurons synchronously active) and the probability that each individual neuron fires given the population rate. We found that this model can accurately fit synthetic data from up to 1000 neurons. We also found that the model could rapidly decode visual stimuli from neural population data from macaque primary visual cortex about 65 ms after stimulus onset. Finally, we used the model to estimate the entropy of neural population activity in developing mouse somatosensory cortex and, surprisingly, found that it first increases, then decreases during development. This statistical model opens new options for interrogating neural population data and can bolster the use of modern large-scale in vivo Ca2+ and voltage imaging tools.
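The two statistics the model matches, the population-rate distribution p(K) and each neuron's firing probability given K, are cheap to estimate from a raster. The synthetic raster below (common drive plus per-neuron baselines) is an assumption for illustration; the fitting and sampling machinery of the actual model is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_bins = 20, 5000

# Synthetic raster with a shared drive to induce population correlations.
drive = rng.random(n_bins)                        # common input per time bin
rates = rng.uniform(0.05, 0.3, size=n_neurons)    # per-neuron baselines
spikes = (rng.random((n_neurons, n_bins))
          < rates[:, None] * (0.5 + drive)).astype(int)

# Ingredient 1: distribution of the population rate K (neurons active per bin).
K = spikes.sum(axis=0)
p_K = np.bincount(K, minlength=n_neurons + 1) / n_bins

# Ingredient 2: probability that each neuron fires given the population rate.
p_fire_given_K = np.zeros((n_neurons, n_neurons + 1))
for k in np.unique(K):
    mask = K == k
    p_fire_given_K[:, k] = spikes[:, mask].mean(axis=1)

# Parameter count grows ~N^2, not with the 2^N possible binary patterns.
n_params = p_K.size + p_fire_given_K.size
print(n_params, "parameters vs", 2**n_neurons, "possible patterns")
```

This is the scaling argument in miniature: a few hundred conditional probabilities summarize a pattern space of over a million states.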
Neural Computation (2015) 27 (12): 2477–2509.
Published: 01 December 2015
Abstract
Inhibition-stabilized networks (ISNs) are neural architectures with strong positive feedback among pyramidal neurons balanced by strong negative feedback from inhibitory interneurons, a circuit element found in the hippocampus and the primary visual cortex. In their working regime, ISNs produce damped oscillations in the γ-range in response to inputs to the inhibitory population. In order to understand the properties of interconnected ISNs, we investigated periodic forcing of ISNs. We show that ISNs can be excited over a range of frequencies and derive properties of the resonance peaks. In particular, we studied the phase-locked solutions, the torus solutions, and the resonance peaks. Periodically forced ISNs respond with (possibly multistable) phase-locked activity, whereas networks with sustained intrinsic oscillations respond more dynamically to periodic inputs with tori. Hence, the dynamics are surprisingly rich, and phase effects alone do not adequately describe the network response. This strengthens the importance of phase-amplitude coupling as opposed to phase-phase coupling in providing multiple frequencies for multiplexing and routing information.
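The ISN signature, a damped ringing response to a kick delivered to the inhibitory population, appears already in a two-variable linear rate model. The weights and time constants below are invented to put the system in the damped-oscillation regime; they are not from the paper.

```python
import numpy as np

# Linear 2D rate model of an ISN: recurrent excitation w_ee > 1 would be
# unstable alone and is stabilized by inhibition. Parameters illustrative.
w_ee, w_ei, w_ie, w_ii = 2.0, 2.5, 2.5, 2.0
dt, T = 1e-3, 20.0
steps = int(T / dt)

E = np.zeros(steps)
I = np.zeros(steps)
I[0] = 1.0    # brief kick to the inhibitory population

for t in range(steps - 1):
    dE = -E[t] + w_ee * E[t] - w_ei * I[t]
    dI = -I[t] + w_ie * E[t] - w_ii * I[t]
    E[t + 1] = E[t] + dt * dE
    I[t + 1] = I[t] + dt * dI

# The excitatory rate rings and decays: a damped oscillation.
early = np.abs(E[:steps // 2]).max()
late = np.abs(E[steps // 2:]).max()
print(f"early peak {early:.3f}, late peak {late:.3f}")
```

With these weights the Jacobian has complex eigenvalues with a negative real part, so the kick produces a decaying oscillation rather than sustained activity; this is the regime whose resonance under periodic forcing the paper characterizes.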
Neural Computation (2015) 27 (3): 615–627.
Published: 01 March 2015
Abstract
We propose a time-domain approach to detect frequencies, frequency couplings, and phases using nonlinear correlation functions. For frequency analysis, this approach is a multivariate extension of discrete Fourier transform, and for higher-order spectra, it is a linear and multivariate alternative to multidimensional fast Fourier transform of multidimensional correlations. This method can be applied to short and sparse time series and can be extended to cross-trial and cross-channel spectra (CTS) for electroencephalography data where multiple short data segments from multiple trials of the same experiment are available. There are two versions of CTS. The first one assumes some phase coherency across the trials, while the second one is independent of phase coherency. We demonstrate that the phase-dependent version is more consistent with event-related spectral perturbation analysis and traditional Morlet wavelet analysis. We show that CTS can be applied to short data windows and yields higher temporal resolution than traditional Morlet wavelet analysis. Furthermore, the CTS can be used to reconstruct the event-related potential using all linear components of the CTS.
Neural Computation (2015) 27 (3): 594–614.
Published: 01 March 2015
Abstract
Nonlinear dynamical system analysis based on embedding theory has been used for modeling and prediction, but it also has applications to signal detection and classification of time series. An embedding creates a multidimensional geometrical object from a single time series. Traditionally, either delay or derivative embeddings have been used. The delay embedding is composed of delayed versions of the signal, and the derivative embedding is composed of successive derivatives of the signal. The delay embedding has been extended to nonuniform embeddings to take multiple timescales into account. Both embeddings provide information on the underlying dynamical system without having direct access to all the system variables. Delay differential analysis is based on functional embeddings, a combination of the derivative embedding with nonuniform delay embeddings. Small delay differential equation (DDE) models that best represent relevant dynamic features of time series data are selected from a pool of candidate models for detection or classification. We show that the properties of DDEs support spectral analysis in the time domain where nonlinear correlation functions are used to detect frequencies, frequency and phase couplings, and bispectra. These can be efficiently computed with short time windows and are robust to noise. For frequency analysis, this framework is a multivariate extension of discrete Fourier transform (DFT), and for higher-order spectra, it is a linear and multivariate alternative to multidimensional fast Fourier transform of multidimensional correlations. This method can be applied to short or sparse time series and can be extended to cross-trial and cross-channel spectra if multiple short data segments of the same experiment are available. Together, this time-domain toolbox provides higher temporal resolution and richer frequency and phase coupling information than frequency-based methods such as the DFT and cross-spectral analysis, and it allows a straightforward implementation of higher-order spectra across time.
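The simplest time-domain analogue of this approach is frequency detection from a correlation function computed over candidate delays in a short window. This sketch uses a plain (linear) autocorrelation and an invented test signal; the papers' nonlinear correlation functions and DDE machinery go well beyond it.

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 500.0
t = np.arange(0, 2.0, 1 / fs)             # a short 2 s data window
f_true = 12.0                              # Hz, the rhythm to recover
x = np.sin(2 * np.pi * f_true * t) + 0.1 * rng.standard_normal(t.size)
x -= x.mean()

# Time-domain correlation function C(tau) = <x(t) x(t+tau)> over delays
# covering periods from 8 to 30 Hz.
lags = np.arange(int(fs / 30), int(fs / 8) + 1)
corr = np.array([np.mean(x[:-lag] * x[lag:]) for lag in lags])

# The delay that maximizes the correlation gives the dominant period.
period = lags[np.argmax(corr)]
f_est = fs / period
print(f"estimated frequency: {f_est:.1f} Hz")
```

Because everything is computed from short-lag products in the time domain, the window can be much shorter than what a frequency-domain estimate of comparable resolution would need.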
Neural Computation (2014) 26 (7): 1329–1339.
Published: 01 July 2014
Abstract
Data sets with high dimensionality such as natural images, speech, and text have been analyzed with methods from condensed matter physics. Here we compare recent approaches taken to relate the scale invariance of natural images to critical phenomena. We also examine the method of studying high-dimensional data through specific heat curves by applying the analysis to noncritical systems: 1D samples taken from natural images and 2D binary pink noise. Through these examples, we concluded that due to small sample sizes, specific heat is not a reliable measure for gauging whether high-dimensional data are critical. We argue that identifying order parameters and universality classes is a more reliable way to identify criticality in high-dimensional data.
Neural Computation (2013) 25 (4): 922–939.
Published: 01 April 2013
Abstract
The analysis of natural images with independent component analysis (ICA) yields localized bandpass Gabor-type filters similar to receptive fields of simple cells in visual cortex. We applied ICA to a subset of patches called position-centered patches, selected for forming a translation-invariant representation of small patches. The resulting filters were qualitatively different in two respects. One novel feature was the emergence of filters we call double-Gabor filters. In contrast to Gabor functions that are modulated in one direction, double-Gabor filters are sinusoidally modulated in two orthogonal directions. In addition, the filters were more extended in space and frequency compared to standard ICA filters and better matched the distribution in experimental recordings from neurons in primary visual cortex. We further found a dual role for double-Gabor filters as edge and texture detectors, which could have engineering applications.
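The geometric distinction described above can be made concrete by constructing both filter types: a Gaussian envelope times a sinusoid along one direction (standard Gabor) versus sinusoids along two orthogonal directions (double-Gabor). The specific frequency, orientation, and envelope width below are arbitrary illustration values.

```python
import numpy as np

def gabor_like(size, k, theta, sigma, double=False):
    """Gaussian-windowed grating. With double=True, modulate sinusoidally
    along two orthogonal directions (the 'double-Gabor' form)."""
    ax = np.linspace(-1, 1, size)
    xx, yy = np.meshgrid(ax, ax)
    u = xx * np.cos(theta) + yy * np.sin(theta)     # preferred direction
    v = -xx * np.sin(theta) + yy * np.cos(theta)    # orthogonal direction
    envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    if double:
        carrier = np.sin(k * u) * np.sin(k * v)     # two orthogonal modulations
    else:
        carrier = np.sin(k * u)                     # standard Gabor: one direction
    return envelope * carrier

g = gabor_like(64, k=8.0, theta=0.3, sigma=0.4)
dg = gabor_like(64, k=8.0, theta=0.3, sigma=0.4, double=True)
```

The product of two odd carriers makes the double-Gabor even under point reflection, giving it the checkerboard-like structure that supports its dual edge/texture role.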
Neural Computation (2012) 24 (4): 939–966.
Published: 01 April 2012
Abstract
When subjects adapt their reaching movements in the setting of a systematic force or visual perturbation, generalization of adaptation can be assessed psychophysically in two ways: by testing untrained locations in the work space at the end of adaptation (slow postadaptation generalization) or by determining the influence of an error on the next trial during adaptation (fast trial-by-trial generalization). These two measures of generalization have been widely used in psychophysical studies, but the reason that they might differ has not been addressed explicitly. Our goal was to develop a computational framework for determining when a two-state model is justified by the data and to explore the implications of these two types of generalization for neural representations of movements. We first investigated, for single-target learning, how well standard statistical model selection procedures can discriminate two-process models from single-process models when learning and retention coefficients were systematically varied. We then built a two-state model for multitarget learning and showed that if an adaptation process is indeed two-rate, then the postadaptation generalization approach primarily probes the slow process, whereas the trial-by-trial generalization approach is most informative about the fast process. The fast process, due to its strong sensitivity to trial error, contributes predominantly to trial-by-trial generalization, whereas the strong retention of the slow system contributes predominantly to postadaptation generalization. Thus, when adaptation can be shown to be two-rate, the two measures of generalization may probe different brain representations of movement direction.
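A minimal two-state (fast/slow) trial-by-trial model of the kind described above can be simulated directly. The retention and learning coefficients here are illustrative values chosen to separate the two processes, not fitted parameters from the paper.

```python
import numpy as np

# Two-state adaptation: a fast process learns and forgets quickly; a slow
# process learns slowly but retains strongly. Coefficients are illustrative.
a_fast, b_fast = 0.60, 0.30     # retention, learning rate (fast)
a_slow, b_slow = 0.995, 0.03    # retention, learning rate (slow)

n_trials = 200
perturbation = 1.0
x_fast = np.zeros(n_trials + 1)
x_slow = np.zeros(n_trials + 1)
net = np.zeros(n_trials)

for n in range(n_trials):
    net[n] = x_fast[n] + x_slow[n]
    error = perturbation - net[n]             # residual error on trial n
    x_fast[n + 1] = a_fast * x_fast[n] + b_fast * error
    x_slow[n + 1] = a_slow * x_slow[n] + b_slow * error

print(f"final fast={x_fast[-1]:.2f}, slow={x_slow[-1]:.2f}, net={net[-1]:.2f}")
```

By the end of adaptation, the slow process holds most of the learned compensation, which is why postadaptation generalization probes the slow state while trial-by-trial generalization, driven by the error-sensitive fast state, probes the fast one.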
Neural Computation (2011) 23 (9): 2169–2208.
Published: 01 September 2011
Abstract
Neurons in sensory systems convey information about physical stimuli in their spike trains. In vitro, single neurons respond precisely and reliably to the repeated injection of the same fluctuating current, producing regions of elevated firing rate, termed events. Analysis of these spike trains reveals that multiple distinct spike patterns can be identified as trial-to-trial correlations between spike times (Fellous, Tiesinga, Thomas, & Sejnowski, 2004). Finding events in data with realistic spiking statistics is challenging because events belonging to different spike patterns may overlap. We propose a method for finding spiking events that uses contextual information to disambiguate which pattern a trial belongs to. The procedure can be applied to spike trains of the same neuron across multiple trials to detect and separate responses obtained during different brain states. The procedure can also be applied to spike trains from multiple simultaneously recorded neurons in order to identify volleys of near-synchronous activity or to distinguish between excitatory and inhibitory neurons. The procedure was tested using artificial data as well as recordings in vitro in response to fluctuating current waveforms.
Neural Computation (2011) 23 (3): 651–655.
Published: 01 March 2011
Abstract
The pattern of spikes recorded from place cells in the rodent hippocampus is strongly modulated by both the spatial location in the environment and the theta rhythm. The phases of the spikes in the theta cycle advance during movement through the place field. Recently intracellular recordings from hippocampal neurons (Harvey, Collman, Dombeck, & Tank, 2009) showed an increase in the amplitude of membrane potential oscillations inside the place field, which was interpreted as evidence that an intracellular mechanism caused phase precession. Here we show that an existing network model of the hippocampus (Tsodyks, Skaggs, Sejnowski, & McNaughton, 1996) can equally reproduce this and other aspects of the intracellular recordings, which suggests that new experiments are needed to distinguish the contributions of intracellular and network mechanisms to phase precession.
Neural Computation (2010) 22 (6): 1646–1673.
Published: 01 June 2010
Abstract
Convolutive mixtures of signals, which are common in acoustic environments, can be difficult to separate into their component sources. Here we present a uniform probabilistic framework to separate convolutive mixtures of acoustic signals using independent vector analysis (IVA), which is based on a joint distribution for the frequency components originating from the same source and is capable of preventing permutation disorder. Different gaussian mixture models (GMMs) served as source priors, in contrast to the original IVA model, where all sources were modeled by identical multivariate Laplacian distributions. This flexible source prior enabled the IVA model to separate different types of signals. Three classes of models were derived and tested: noiseless IVA, online IVA, and noisy IVA. In the IVA model without sensor noise, the unmixing matrices were efficiently estimated by the expectation maximization (EM) algorithm. An online EM algorithm was derived for the online IVA algorithm to track the movement of the sources and separate them under nonstationary conditions. The noisy IVA model included the sensor noise and combined denoising with separation. An EM algorithm was developed that found the model parameters and separated the sources simultaneously. These algorithms were applied to separate mixtures of speech and music. Performance as measured by the signal-to-interference ratio (SIR) was substantial for all three models.