1–3 of 3 articles by Martin Stemmler
Journal Articles
Publisher: Journals Gateway
Neural Computation (2017) 29 (6): 1528–1560.
Published: 01 June 2017
Abstract
Synapses are the communication channels for information transfer between neurons; these are the points at which pulse-like signals are converted into the stochastic release of quantized amounts of chemical neurotransmitter. At many synapses, prior neuronal activity depletes synaptic resources, depressing subsequent spontaneous and spike-evoked release. We analytically compute the information transmission rate of a synaptic release site, which we model as a binary asymmetric channel. Short-term depression is incorporated by assigning the channel a memory of depth one. A successful release, whether spike evoked or spontaneous, decreases the probability of a subsequent release; if no release occurs on the following time step, the release probabilities recover to their default values. We prove that synaptic depression can increase the release site’s information rate if spontaneous release is more strongly depressed than spike-evoked release. When depression affects spontaneous and evoked release equally, the information rate must invariably decrease, even when the rate is normalized by the resources used for synaptic transmission. For identical depression levels, we analytically disprove the hypothesis, at least in this simplified model, that synaptic depression serves energy- and information-efficient encoding.
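The memoryless version of the binary asymmetric channel described above can be illustrated with a short mutual-information computation. This is a minimal sketch, not the paper's depth-one-memory model: the parameter values and function names are invented for illustration.

```python
import math

def mutual_information(p_spike, p_evoked, p_spont):
    """Mutual information I(X;Y) in bits between spike input X and
    release output Y for a memoryless binary asymmetric channel:
      P(release | spike)    = p_evoked
      P(release | no spike) = p_spont
    """
    def h2(p):
        # binary entropy in bits
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    # marginal probability of a release
    p_rel = p_spike * p_evoked + (1 - p_spike) * p_spont
    # I(X;Y) = H(Y) - H(Y|X)
    return h2(p_rel) - (p_spike * h2(p_evoked) + (1 - p_spike) * h2(p_spont))

# Illustrative numbers: sparse spiking, moderately reliable evoked
# release, rare spontaneous release.
mi = mutual_information(0.1, 0.5, 0.01)
```

Note that when spontaneous and evoked release probabilities coincide, the output carries no information about the input, which is the degenerate baseline against which the paper's depression-dependent rates are compared.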
Neural Computation (2012) 24 (9): 2280–2317.
Published: 01 September 2012
Abstract
Rodents use two distinct neuronal coordinate systems to estimate their position: place fields in the hippocampus and grid fields in the entorhinal cortex. Whereas place cells spike at only one particular spatial location, grid cells fire at multiple sites that correspond to the points of an imaginary hexagonal lattice. We study how to best construct place and grid codes, taking the probabilistic nature of neural spiking into account. Which spatial encoding properties of individual neurons confer the highest resolution when decoding the animal's position from the neuronal population response? A priori, estimating a spatial position from a grid code could be ambiguous, as regular periodic lattices possess translational symmetry. The solution to this problem requires lattices for grid cells with different spacings; the spatial resolution crucially depends on choosing the right ratios of these spacings across the population. We compute the expected error in estimating the position in both the asymptotic limit, using Fisher information, and for low spike counts, using maximum likelihood estimation. Achieving high spatial resolution and covering a large range of space in a grid code leads to a trade-off: the best grid code for spatial resolution is built of nested modules with different spatial periods, one inside the other, whereas maximizing the spatial range requires distinct spatial periods that are pairwise incommensurate. Optimizing the spatial resolution predicts two grid cell properties that have been experimentally observed. First, short lattice spacings should outnumber long lattice spacings. Second, the grid code should be self-similar across different lattice spacings, so that the grid field always covers a fixed fraction of the lattice period. If these conditions are satisfied and the spatial “tuning curves” for each neuron span the same range of firing rates, then the resolution of the grid code easily exceeds that of the best possible place code with the same number of neurons.
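The ambiguity-resolution idea above can be illustrated in one dimension: a single periodic module reports position only modulo its period, but two modules with distinct periods jointly pin down a position over a range larger than either period alone. This is a toy brute-force decoder, not the paper's maximum likelihood estimator; the periods and position are invented for illustration.

```python
import numpy as np

def decode_from_phases(phases, periods, x_max, dx=0.001):
    """Brute-force decoder: scan candidate positions in [0, x_max) and
    return the one whose module phases best match the observed phases,
    measured by circular distance on the unit interval."""
    xs = np.arange(0.0, x_max, dx)
    # predicted phase of each module at every candidate position
    pred = np.mod(xs[:, None] / np.array(periods), 1.0)
    # circular distance between predicted and observed phases
    d = np.abs(pred - np.array(phases))
    d = np.minimum(d, 1.0 - d)
    return xs[np.argmin(d.sum(axis=1))]

periods = [0.3, 0.4]   # two modules; combined range is their lcm, 1.2
x_true = 0.85          # lies beyond either single period
phases = [np.mod(x_true / p, 1.0) for p in periods]
x_hat = decode_from_phases(phases, periods, x_max=1.2)
```

Either module alone would confuse `x_true` with translates a full period away; the pair disambiguates it anywhere up to the least common multiple of the periods, which is the range-versus-resolution trade-off the abstract describes.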
Neural Computation (1994) 6 (5): 795–836.
Published: 01 September 1994
Abstract
We investigate a model for neural activity in a two-dimensional sheet of leaky integrate-and-fire neurons with feedback connectivity consisting of local excitation and surround inhibition. Each neuron receives stochastic input from an external source, independent in space and time. As recently suggested by Softky and Koch (1992, 1993), independent stochastic input alone cannot explain the high interspike interval variability exhibited by cortical neurons in behaving monkeys. We show that high variability can be obtained due to the amplification of correlated fluctuations in a recurrent network. Furthermore, the cross-correlation functions have a dual structure, with a sharp peak on top of a much broader hill. This is due to the inhibitory and excitatory feedback connections, which cause “hotspots” of neural activity to form within the network. These localized patterns of excitation appear as clusters or stripes that coalesce, disintegrate, or fluctuate in size while simultaneously moving in a random walk constrained by the interaction with other clusters. The synaptic current impinging upon a single neuron shows large fluctuations at many time scales, leading to a large coefficient of variation (CV) for the interspike interval statistics. The power spectrum associated with single units shows a 1/f decay for small frequencies and is flat at higher frequencies, while the power spectrum of the spiking activity averaged over many cells—equivalent to the local field potential—shows no 1/f decay but a prominent peak around 40 Hz, in agreement with data recorded from cat and monkey cortex (Gray et al. 1990; Eckhorn et al. 1993). Firing rates exhibit self-similarity between 20 and 800 msec, resulting in 1/f-like noise, consistent with the fractal nature of neural spike trains (Teich 1992).
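The CV statistic at the center of this abstract can be made concrete with a short computation. This is a minimal sketch, not the paper's network simulation; the rates and train lengths are illustrative. A Poisson spike train has CV near 1, a clock-like train has CV near 0, and the paper's claim is that recurrent network fluctuations push the CV high even though the external input alone would not.

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of the interspike intervals:
    CV = std(ISI) / mean(ISI). CV ~ 1 for a Poisson process,
    CV ~ 0 for a perfectly regular spike train."""
    isis = np.diff(np.sort(np.asarray(spike_times)))
    return isis.std() / isis.mean()

rng = np.random.default_rng(0)
# Poisson train at ~40 Hz: exponential ISIs with mean 25 ms
poisson_train = np.cumsum(rng.exponential(scale=0.025, size=5000))
# Perfectly regular train at exactly 40 Hz
regular_train = np.arange(0.0, 125.0, 0.025)

cv_poisson = isi_cv(poisson_train)  # close to 1
cv_regular = isi_cv(regular_train)  # close to 0
```

Cortical neurons in the data the paper addresses show CV values near or above 1 at low rates, which is why purely independent input to an integrate-and-fire neuron (which regularizes spiking) is insufficient.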