Stefan Mihalas
Neural Computation (2023) 35 (4): 555–592.
Published: 18 March 2023
Abstract
Individual neurons in the brain have complex intrinsic dynamics that are highly diverse. We hypothesize that the complex dynamics produced by networks of complex and heterogeneous neurons may contribute to the brain's ability to process and respond to temporally complex data. To study the role of complex and heterogeneous neuronal dynamics in network computation, we develop a rate-based neuronal model, the generalized-leaky-integrate-and-fire-rate (GLIFR) model, which is a rate equivalent of the generalized-leaky-integrate-and-fire model. The GLIFR model has multiple dynamical mechanisms, which add to the complexity of its activity while maintaining differentiability. We focus on the role of after-spike currents, currents induced or modulated by neuronal spikes, in producing rich temporal dynamics. We use machine learning techniques to learn both synaptic weights and parameters underlying intrinsic dynamics to solve temporal tasks. The GLIFR model allows the use of standard gradient descent techniques rather than surrogate gradient descent, which has been used in spiking neural networks. After establishing the ability to optimize parameters using gradient descent in single neurons, we ask how networks of GLIFR neurons learn and perform on temporally challenging tasks, such as sequential MNIST. We find that these networks learn diverse parameters, which gives rise to diversity in neuronal dynamics, as demonstrated by clustering of neuronal parameters. GLIFR networks have mixed performance when compared to vanilla recurrent neural networks, with higher performance in pixel-by-pixel MNIST but lower in line-by-line MNIST. However, they appear to be more robust to random silencing. We find that the ability to learn heterogeneity and the presence of after-spike currents contribute to these gains in performance. Our work demonstrates both the computational robustness of neuronal complexity and diversity in networks and a feasible method of training such models using exact gradients.
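The abstract's central technical move is replacing the discrete spike with a smooth rate nonlinearity, so that exact gradients flow through the intrinsic dynamics. The fragment below is a minimal sketch of that idea; the function glifr_step, the sigmoid rate readout, and every constant are illustrative assumptions, not the published GLIFR equations.

```python
import numpy as np

def glifr_step(v, a, i_ext, dt=1e-3, tau_v=0.02, tau_a=0.1,
               k_a=-5.0, v_th=1.0, gain=10.0):
    """One Euler step of membrane voltage v and after-spike current a.

    A sigmoid of voltage stands in for discrete spiking, so every
    operation is smooth and exact gradients exist with respect to both
    the inputs and the intrinsic parameters.
    """
    rate = 1.0 / (1.0 + np.exp(-gain * (v - v_th)))  # smooth spike proxy
    v_new = v + dt * (-v + i_ext + a) / tau_v        # leaky integration
    a_new = a + dt * (-a / tau_a + k_a * rate)       # rate-driven after-spike current
    return v_new, a_new, rate

v, a, rates = 0.0, 0.0, []
for _ in range(2000):
    v, a, r = glifr_step(v, a, i_ext=1.5)
    rates.append(r)
print(rates[-1])  # steady rate sits below the no-adaptation value since k_a < 0
```

Because each operation is smooth, the same update could be written in an autodiff framework and the intrinsic parameters (here tau_v, tau_a, k_a) trained by ordinary gradient descent alongside the synaptic weights.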
Neural Computation (2022) 34 (3): 541–594.
Published: 17 February 2022
Abstract
As animals adapt to their environments, their brains are tasked with processing stimuli in different sensory contexts. Whether these computations are context dependent or independent, they are all implemented in the same neural tissue. A crucial question is what neural architectures can respond flexibly to a range of stimulus conditions and switch between them. This is a particular case of flexible architecture that permits multiple related computations within a single circuit. Here, we address this question in the specific case of the visual system circuitry, focusing on context integration, defined as the integration of feedforward and surround information across visual space. We show that a biologically inspired microcircuit with multiple inhibitory cell types can switch between visual processing of the static context and the moving context. In our model, the VIP population acts as the switch and modulates the visual circuit through a disinhibitory motif. Moreover, the VIP population is efficient, requiring only a relatively small number of neurons to switch contexts. This circuit eliminates noise in videos by using appropriate lateral connections for contextual spatiotemporal surround modulation, having superior denoising performance compared to circuits where only one context is learned. Our findings shed light on a minimally complex architecture that is capable of switching between two naturalistic contexts using few switching units.
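To make the disinhibitory motif concrete, here is a toy three-population rate circuit: a VIP unit inhibits an SST unit, which in turn inhibits the excitatory population, so driving VIP releases the excitatory cells from surround inhibition. The weights, time constants, baseline drives, and ReLU nonlinearity are all illustrative assumptions, not the paper's fitted circuit.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def step(e, s, vip_drive, ff_input, dt=1e-3, tau=0.02, w_es=1.5, w_sv=2.0):
    """One Euler step of a toy E / SST / VIP rate circuit."""
    vip = relu(vip_drive)                        # context signal drives VIP
    ds = (-s + relu(1.0 - w_sv * vip)) / tau     # VIP inhibits SST (baseline drive 1.0)
    de = (-e + relu(ff_input - w_es * s)) / tau  # SST inhibits the E population
    return e + dt * de, s + dt * ds

for drive in (0.0, 1.0):                         # the two "contexts"
    e, s = 0.0, 0.0
    for _ in range(2000):
        e, s = step(e, s, vip_drive=drive, ff_input=1.0)
    print(f"vip_drive={drive}: E={e:.2f}, SST={s:.2f}")
```

With VIP silent, SST suppresses the excitatory unit; with VIP driven, SST shuts off and the feedforward input passes through, i.e., a single switching unit toggles the circuit between two processing regimes.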
Neural Computation (2016) 28 (1): 89–117.
Published: 01 January 2016
Abstract
It has been previously shown (Namboodiri, Mihalas, Marton, & Hussain Shuler, 2014) that an evolutionary theory of decision making and time perception is capable of explaining numerous behavioral observations regarding how humans and animals decide between differently delayed rewards of differing magnitudes and how they perceive time. An implementation of this theory using a stochastic drift-diffusion accumulator model (Namboodiri, Mihalas, & Hussain Shuler, 2014a) showed that errors in time perception and decision making approximately obey Weber’s law for a range of parameters. However, prior calculations did not have a clear mechanistic underpinning. Further, these calculations were only approximate, with the range of parameters being limited. In this letter, we provide a full analytical treatment of such an accumulator model, along with a mechanistic implementation, to calculate the expression of these errors for the entirety of the parameter space. In our mechanistic model, Weber’s law results from synaptic facilitation and depression within the feedback synapses of the accumulator. Our theory also makes the prediction that the steepness of temporal discounting can be affected by requiring the precise timing of temporal intervals. Thus, by presenting exact quantitative calculations, this work provides falsifiable predictions for future experimental testing.
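For orientation, the sketch below Monte Carlo simulates the simplest possible drift-diffusion accumulator and measures the coefficient of variation (CV) of its first passage times; all values are illustrative assumptions. Notably, for this bare accumulator the CV falls as the threshold grows (the inverse Gaussian distribution gives CV = σ/√(θμ)), so a constant Weber fraction is not automatic, which is precisely the gap the letter's facilitation and depression mechanism is meant to fill.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_times(drift, sigma, theta, dt=1e-3, n=2000, max_steps=100_000):
    """Simulate n noisy accumulators; return the time each first crosses theta."""
    x = np.zeros(n)
    t = np.full(n, np.nan)
    alive = np.ones(n, dtype=bool)
    for step in range(1, max_steps + 1):
        x[alive] += drift * dt + rng.normal(0.0, sigma * np.sqrt(dt), alive.sum())
        crossed = alive & (x >= theta)
        t[crossed] = step * dt
        alive &= ~crossed
        if not alive.any():
            break
    return t[~np.isnan(t)]

for theta in (0.5, 1.0, 2.0):  # longer timed intervals, same mechanism
    fpt = first_passage_times(drift=1.0, sigma=0.3, theta=theta)
    print(f"theta={theta}: mean={fpt.mean():.3f}, CV={fpt.std()/fpt.mean():.3f}")
```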
Neural Computation (2011) 23 (11): 2833–2867.
Published: 01 November 2011
Abstract
When a neuronal spike train is observed, what can we deduce from it about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate-and-fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that, at least in principle, its unique global minimum can thus be found by gradient descent techniques. Many biological neurons are, however, known to generate a richer repertoire of spiking behaviors than can be explained in a simple integrate-and-fire model. For instance, such a model retains only an implicit (through spike-induced currents), not an explicit, memory of its input; an example of a physiological situation that cannot be explained is the absence of firing if the input current is increased very slowly. Therefore, we use an expanded model (Mihalas & Niebur, 2009), which is capable of generating a large number of complex firing patterns while still being linear. Linearity is important because it maintains the distribution of the random variables and still allows maximum likelihood methods to be used. In this study, we show that although convexity of the negative log-likelihood function is not guaranteed for this model, the minimum of this function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) usually reaches the global minimum.
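The recipe in the abstract, choosing parameters that minimize the negative log-likelihood of the observed spike train, can be illustrated with a deliberately simple stand-in. The sketch below fits an inhomogeneous-Poisson rate model with a generic simplex minimizer; the paper itself fits the linear Mihalas & Niebur model and minimizes with Shor's r-algorithm, so the model, names, and constants here are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
dt, T = 1e-3, 50_000                 # 50 s of data at 1 ms resolution
t = np.arange(T) * dt
stim = np.sin(2 * np.pi * t)         # toy input signal

def rate(params):
    w, b = params
    return np.exp(w * stim + b)      # log-linear stand-in for the real model

true_params = np.array([1.5, 1.0])
spikes = rng.poisson(rate(true_params) * dt)   # simulated spike counts per bin

def neg_log_likelihood(params):
    lam = rate(params) * dt
    return np.sum(lam) - np.sum(spikes * np.log(lam + 1e-12))

# The paper minimizes a possibly nonconvex NLL with the r-algorithm;
# a generic simplex method stands in for it here.
fit = minimize(neg_log_likelihood, x0=np.zeros(2), method="Nelder-Mead")
print(fit.x)  # should land reasonably close to true_params
```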
Neural Computation (2011) 23 (2): 421–434.
Published: 01 February 2011
Abstract
An accurate calculation of the first passage time probability density (FPTPD) is essential for computing the likelihood of solutions of the stochastic leaky integrate-and-fire model. The previously proposed numerical calculation of the FPTPD based on the integral equation method discretizes the probability current of the voltage crossing the threshold. While the method is accurate for high noise levels, we show that it results in large numerical errors for small noise. The problem is solved by analytically computing, in each time bin, the mean probability current. Efficiency is further improved by identifying and ignoring time bins with negligible mean probability current.
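The fix described above, replacing the pointwise probability current with its analytically computed bin average, can be sketched in the simplest setting, a drifted Brownian motion, where the exact first passage density (an inverse Gaussian) is available for comparison. The discretization and all constants below are illustrative assumptions, not the paper's scheme for the leaky integrate-and-fire model.

```python
import numpy as np

mu, sigma, theta, x0 = 1.0, 0.5, 1.0, 0.0  # drift, noise, threshold, start
dt, n = 1e-3, 3000
t = dt * np.arange(1, n + 1)               # right edges of the time bins
mids = (np.arange(n) + 0.5) * dt           # bin midpoints

def p_free(tau):
    """Free transition density, threshold to threshold, a time tau apart."""
    return np.exp(-mu**2 * tau / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2 * tau)

def p_start(tau):
    """Density of being at the threshold at time tau, starting from x0."""
    return (np.exp(-(theta - x0 - mu * tau) ** 2 / (2 * sigma**2 * tau))
            / np.sqrt(2 * np.pi * sigma**2 * tau))

# Analytic bin average of the singular diagonal kernel (small-dt limit):
# integral_0^dt p_free(u) du, echoing the paper's mean-current idea.
diag = 2.0 * np.sqrt(dt / (2 * np.pi * sigma**2))

f = np.zeros(n)
for k in range(n):
    conv = (f[:k] @ p_free(t[k] - mids[:k])) * dt if k else 0.0
    f[k] = max((p_start(t[k]) - conv) / diag, 0.0)

# Sanity check against the exact inverse Gaussian FPT density.
exact = (theta - x0) / np.sqrt(2 * np.pi * sigma**2 * mids**3) * np.exp(
    -(theta - x0 - mu * mids) ** 2 / (2 * sigma**2 * mids))
print(np.max(np.abs(f - exact)))
```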
Neural Computation (2009) 21 (3): 704–718.
Published: 01 March 2009
Abstract
For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model's rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation.
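A minimal Euler-integration sketch of this model class follows: linear subthreshold dynamics for the voltage, a moving threshold, and one spike-induced current, with simplified update rules applied at each threshold crossing. All parameter values are illustrative assumptions, and explicit Euler stepping stands in for the exact between-spike solution that the model's linearity permits.

```python
import numpy as np

# Illustrative constants (assumed, not the paper's): leak conductance,
# capacitance, rest, threshold coupling, current decay, reset values.
G, C, E_L = 0.05, 1.0, -70.0
a, b, k1 = 0.005, 0.01, 0.02
V_r, Theta_inf = -70.0, -50.0
dI1, dTheta = -0.3, 2.0          # simplified update-rule increments

def simulate(i_ext, dt=0.1, steps=5000):
    """Euler stepping of the linear dynamics plus reset rules at spikes."""
    V, Theta, I1 = E_L, Theta_inf, 0.0
    spikes = []
    for n in range(steps):
        V += dt * (i_ext + I1 - G * (V - E_L)) / C               # voltage
        Theta += dt * (a * (V - E_L) - b * (Theta - Theta_inf))  # moving threshold
        I1 += dt * (-k1 * I1)                     # spike-induced current decays
        if V >= Theta:                            # threshold crossing
            spikes.append(n * dt)
            V, I1, Theta = V_r, I1 + dI1, Theta + dTheta  # update rules
    return spikes

isis = np.diff(simulate(i_ext=1.5))
# Intervals lengthen as I1 and Theta accumulate; with these constants the
# response is adapting and may even turn phasic (firing ceases).
print(isis)
```

The rich behavior lives in the update rules rather than the differential equations, which is why, with different increments for I1 and Theta, the same loop yields tonic, adapting, or phasic responses.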