Maxim Bazhenov
1–4 of 4
Journal Articles
Neural Computation (2021) 33 (11): 2908–2950.
Published: 12 October 2021
Abstract
Replay is the reactivation of one or more neural patterns that are similar to the activation patterns experienced during past waking experiences. Replay was first observed in biological neural networks during sleep, and it is now thought to play a critical role in memory formation, retrieval, and consolidation. Replay-like mechanisms have been incorporated in deep artificial neural networks that learn over time to avoid catastrophic forgetting of previous knowledge. Replay algorithms have been successfully used in a wide range of deep learning methods within supervised, unsupervised, and reinforcement learning paradigms. In this letter, we provide the first comprehensive comparison between replay in the mammalian brain and replay in artificial neural networks. We identify multiple aspects of biological replay that are missing in deep learning systems and hypothesize how they could be used to improve artificial neural networks.
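To make the rehearsal idea concrete, below is a minimal sketch of experience replay for continual learning: a fixed-size buffer stores a subset of past examples (here via reservoir sampling, one common choice) and mixes them into each new training batch. The buffer size, sampling scheme, and the toy two-task demo are illustrative assumptions, not the specific systems reviewed in the letter.

```python
import random
import numpy as np

class ReplayBuffer:
    """Fixed-size memory of past (input, label) pairs, filled by reservoir sampling."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.memory = []          # stored (x, y) pairs
        self.n_seen = 0           # total examples offered so far
        self.rng = random.Random(seed)

    def add(self, x, y):
        self.n_seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append((x, y))
        else:
            # Reservoir sampling keeps each example seen so far with equal probability.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.memory[j] = (x, y)

    def sample(self, k):
        k = min(k, len(self.memory))
        return self.rng.sample(self.memory, k)


def make_batch(new_batch, buffer, replay_fraction=0.5):
    """Interleave new-task data with replayed old examples in every training batch."""
    n_replay = int(len(new_batch) * replay_fraction)
    return new_batch + buffer.sample(n_replay)


if __name__ == "__main__":
    buffer = ReplayBuffer(capacity=100)
    # Task A: store a subset of its data while training on it.
    for _ in range(500):
        buffer.add(np.random.randn(8), 0)
    # Task B: each batch mixes new data with replayed Task A examples.
    new_batch = [(np.random.randn(8), 1) for _ in range(32)]
    mixed = make_batch(new_batch, buffer)
    print(len(mixed), "examples per batch,",
          sum(1 for _, y in mixed if y == 0), "replayed from Task A")
```

Interleaving old and new examples in this way is the basic mechanism by which replay-style training counters catastrophic forgetting in the supervised setting.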
Journal Articles
Neural Computation (2020) 32 (12): 2389–2421.
Published: 01 December 2020
Abstract
Measuring functional connectivity from fMRI recordings is important in understanding processing in cortical networks. However, because the brain's connection pattern is complex, currently used methods are prone to producing false functional connections. We introduce differential covariance analysis, a new method that uses derivatives of the signal for estimating functional connectivity. We generated neural activities from dynamical causal modeling and a neural network of Hodgkin-Huxley neurons and then converted them to hemodynamic signals using the forward balloon model. The simulated fMRI signals, together with the ground-truth connectivity pattern, were used to benchmark our method with other commonly used methods. Differential covariance achieved better results in complex network simulations. This new method opens an alternative way to estimate functional connectivity.
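As a rough illustration of the core idea, the sketch below estimates connectivity from the covariance between each signal's temporal derivative and the signals themselves. The central-difference derivative, the toy three-node data, and the absence of any partial-covariance correction are simplifying assumptions; this is not the full pipeline benchmarked in the paper.

```python
import numpy as np

def differential_covariance(signals, dt=1.0):
    """Derivative-based connectivity estimate.

    signals : array of shape (n_nodes, n_timepoints), e.g. simulated BOLD traces.
    Returns an (n_nodes, n_nodes) matrix C where C[i, j] pairs the derivative
    of node i with the activity of node j.
    """
    x = signals - signals.mean(axis=1, keepdims=True)
    # Central-difference estimate of the temporal derivative (illustrative choice).
    dx = np.gradient(x, dt, axis=1)
    T = x.shape[1]
    # Cross-covariance between derivatives and signals.
    return dx @ x.T / (T - 1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy 3-node system: node 1 is driven by node 0 with a one-step lag.
    T = 2000
    x0 = rng.standard_normal(T)
    x1 = np.roll(x0, 1) + 0.1 * rng.standard_normal(T)
    x2 = rng.standard_normal(T)
    C = differential_covariance(np.vstack([x0, x1, x2]))
    print(np.round(C, 2))
```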
Journal Articles
Neural Computation (2017) 29 (10): 2581–2632.
Published: 01 October 2017
Abstract
With our growing ability to record from many neurons simultaneously, making sense of these data is a challenge. Functional connectivity is one popular way to study the relationships among multiple neural signals. Correlation-based methods are among the most widely used techniques for functional connectivity estimation. However, due to explaining away and unobserved common inputs (Stevenson, Rebesco, Miller, & Körding, 2008), they produce spurious connections. The general linear model (GLM), which models spike trains as Poisson processes (Okatan, Wilson, & Brown, 2005; Truccolo, Eden, Fellows, Donoghue, & Brown, 2005; Pillow et al., 2008), avoids these confounds. We develop here a new class of methods that uses differential signals based on simulated intracellular voltage recordings and is equivalent to a regularized AR(2) model. We also extend the method to simulated local field potential recordings and calcium imaging. In all of our simulated data, the differential covariance-based methods achieved performance better than or similar to the GLM method and required fewer data samples. This new class of methods provides alternative ways to analyze neural signals.
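One hedged way to picture a regularized, differential-signal estimator is a ridge-penalized least-squares fit of each neuron's differential signal onto all recorded signals; the first-difference derivative, the ridge penalty, and the toy linear dynamics below are illustrative choices rather than the estimator defined in the paper.

```python
import numpy as np

def regularized_differential_fit(V, lam=1.0):
    """Ridge-regularized fit of each signal's first difference onto all signals.

    V : (n_neurons, n_timepoints) array of simulated voltages / LFP / calcium traces.
    Returns W (n_neurons, n_neurons): W[i, j] is the weight with which signal j
    predicts the differential signal of neuron i.
    """
    V = V - V.mean(axis=1, keepdims=True)
    dV = np.diff(V, axis=1)          # differential signal (first difference)
    X = V[:, :-1]                    # predictors aligned with dV
    n = V.shape[0]
    # Closed-form ridge solution: W = dV X^T (X X^T + lam I)^{-1}
    G = X @ X.T + lam * np.eye(n)
    return dV @ X.T @ np.linalg.inv(G)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    T, n = 5000, 4
    V = np.zeros((n, T))
    noise = rng.standard_normal((n, T))
    for t in range(1, T):
        # Toy linear dynamics: neuron 2 is driven by neuron 0.
        V[:, t] = 0.9 * V[:, t - 1] + noise[:, t]
        V[2, t] += 0.5 * V[0, t - 1]
    W = regularized_differential_fit(V, lam=10.0)
    print(np.round(W, 2))
```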
Journal Articles
Neural Computation (2010) 22 (5): 1312–1332.
Published: 01 May 2010
Abstract
Perceiving and identifying an object is improved by prior exposure to the object. This perceptual priming phenomenon is accompanied by reduced neural activity. But whether suppression of neuronal activity with priming is responsible for the improvement in perception is unclear. To address this problem, we developed a rate-based network model of visual processing. In the model, decreased neural activity following priming was due to stimulus-specific sharpening of representations taking place in the early visual areas. Representation sharpening led to decreased interference of representations in higher visual areas, which facilitated selection of one of the competing representations, thereby improving recognition. The model explained a wide range of psychophysical and physiological data observed in priming experiments, including antipriming phenomena, and predicted two functionally distinct stages of visual processing.
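As a cartoon of the sharpening account, the sketch below compares a broad ("unprimed") and a narrowed ("primed") Gaussian population code: sharpening lowers total early-area activity and raises the match to the target object relative to a competing object, making downstream selection easier. The unit counts, tuning widths, and the sharpening operation are illustrative assumptions, not the authors' rate-based network.

```python
import numpy as np

def population_response(center, n_units=50, width=6.0):
    """Gaussian population code in an 'early' visual area; width sets tuning sharpness."""
    prefs = np.arange(n_units)
    return np.exp(-0.5 * ((prefs - center) / width) ** 2)

def matches(rep, templates):
    """Normalized match (cosine similarity) between an input pattern and stored objects."""
    rep = rep / np.linalg.norm(rep)
    return templates @ rep

if __name__ == "__main__":
    # Two stored objects with overlapping early-area representations.
    templates = np.vstack([population_response(20), population_response(26)])
    templates /= np.linalg.norm(templates, axis=1, keepdims=True)

    unprimed = population_response(20, width=6.0)   # broad tuning before priming
    primed   = population_response(20, width=3.0)   # sharpened representation after priming

    for name, rep in [("unprimed", unprimed), ("primed", primed)]:
        m_target, m_competitor = matches(rep, templates)
        print(f"{name:9s} total early activity = {rep.sum():5.1f}, "
              f"target/competitor match ratio = {m_target / m_competitor:.2f}")
```

The primed (sharpened) pattern produces both a smaller summed response and a higher target-to-competitor match ratio, which is the qualitative relationship between reduced activity and improved selection described in the abstract.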