Search results for Andreas V. M. Herz (1–7 of 7)
Journal Articles
Publisher: Journals Gateway
Neural Computation (2012) 24 (9): 2280–2317.
Published: 01 September 2012
FIGURES (49)
Abstract
Rodents use two distinct neuronal coordinate systems to estimate their position: place fields in the hippocampus and grid fields in the entorhinal cortex. Whereas place cells spike at only one particular spatial location, grid cells fire at multiple sites that correspond to the points of an imaginary hexagonal lattice. We study how to best construct place and grid codes, taking the probabilistic nature of neural spiking into account. Which spatial encoding properties of individual neurons confer the highest resolution when decoding the animal's position from the neuronal population response? A priori, estimating a spatial position from a grid code could be ambiguous, as regular periodic lattices possess translational symmetry. The solution to this problem requires lattices for grid cells with different spacings; the spatial resolution crucially depends on choosing the right ratios of these spacings across the population. We compute the expected error in estimating the position in both the asymptotic limit, using Fisher information, and for low spike counts, using maximum likelihood estimation. Achieving high spatial resolution and covering a large range of space in a grid code leads to a trade-off: the best grid code for spatial resolution is built of nested modules with different spatial periods, one inside the other, whereas maximizing the spatial range requires distinct spatial periods that are pairwise incommensurate. Optimizing the spatial resolution predicts two grid cell properties that have been experimentally observed. First, short lattice spacings should outnumber long lattice spacings. Second, the grid code should be self-similar across different lattice spacings, so that the grid field always covers a fixed fraction of the lattice period. If these conditions are satisfied and the spatial “tuning curves” for each neuron span the same range of firing rates, then the resolution of the grid code easily exceeds that of the best possible place code with the same number of neurons.
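The maximum-likelihood decoding step described in this abstract can be illustrated with a toy sketch. The code below is our own construction, not the paper's model: a hypothetical 1-D grid code with four nested modules (spacing ratio 1.5), eight phase-shifted cells per module, self-similar Gaussian-shaped tuning, and independent Poisson spiking; all parameter values are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D grid code: nested modules (spacing ratio 1.5), each with
# phase-shifted cells whose tuning width scales with the spacing (self-similar).
spacings = 1.5 ** np.arange(4)                 # 1, 1.5, 2.25, 3.375 (arbitrary units)
phases = np.linspace(0.0, 1.0, 8, endpoint=False)
peak_rate, window = 20.0, 1.0                  # Hz, s

def rates(x):
    """Firing rate of every cell at position x."""
    r = []
    for lam in spacings:
        d = (x / lam - phases) % 1.0           # phase distance in [0, 1)
        d = np.minimum(d, 1.0 - d)
        r.append(peak_rate * np.exp(-(d / 0.12) ** 2))
    return np.concatenate(r)

def ml_decode(counts, candidates):
    """Maximum-likelihood position under independent Poisson spiking."""
    loglik = [np.sum(counts * np.log(rates(x) * window + 1e-12)
                     - rates(x) * window) for x in candidates]
    return candidates[int(np.argmax(loglik))]

x_true = 0.8
counts = rng.poisson(rates(x_true) * window)   # one noisy population response
candidates = np.linspace(0.0, 1.0, 1001)       # decode within the smallest period
x_hat = ml_decode(counts, candidates)
print(f"true = {x_true:.3f}, decoded = {x_hat:.3f}")
```

With the nested-module layout, the coarse modules place the animal within the range and the finest module sharpens the estimate, which is the resolution/range trade-off the abstract describes.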
Neural Computation (2010) 22 (6): 1493–1510.
Published: 01 June 2010
FIGURES (7)
Abstract
The timescale-invariant recognition of temporal stimulus sequences is vital for many species and poses a challenge for their sensory systems. Here we present a simple mechanistic model to address this computational task, based on recent observations in insects that use rhythmic acoustic communication signals for mate finding. In the model framework, feedforward inhibition leads to burst-like response patterns in one neuron of the circuit. Integrating these responses over a fixed time window by a readout neuron creates a timescale-invariant stimulus representation. Only two additional processing channels, each with a feature detector and a readout neuron, plus one final coincidence detector for all three parallel signal streams, are needed to account for the behavioral data. In contrast to previous solutions to the general time-warp problem, no time delay lines or sophisticated neural architectures are required. Our results suggest a new computational role for feedforward inhibition and underscore the power of parallel signal processing.
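The core trick — integrating a pause-locked response over a fixed time window to obtain a timescale-invariant quantity — can be sketched numerically. This is a caricature of the mechanism, not the paper's circuit: we simply assume a response neuron that is active only during stimulus pauses (the effect attributed to feedforward inhibition) and a readout that integrates over a fixed window.

```python
import numpy as np

def pulse_train(period, duty, t_end=1.0, dt=1e-4):
    """Rectangular acoustic pulse train with given period and duty cycle."""
    t = np.arange(0.0, t_end, dt)
    return (t % period) < duty * period, dt

def readout(period, duty, window=1.0):
    """Integrate pause-locked activity over a fixed window.

    Caricature of feedforward inhibition: the response neuron is active
    only during stimulus pauses; the readout's integral then equals the
    pause fraction of the signal, independent of its absolute timescale.
    """
    on, dt = pulse_train(period, duty, t_end=window)
    pause_activity = ~on
    return pause_activity.sum() * dt / window   # fraction of window

slow = readout(period=0.10, duty=0.4)
fast = readout(period=0.02, duty=0.4)   # 5x time-compressed, same duty cycle
print(slow, fast)                        # both ≈ 0.6
```

Stretching or compressing the stimulus leaves the readout unchanged as long as the duty cycle is preserved, which is the time-warp invariance the abstract refers to.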
Neural Computation (2006) 18 (1): 10–25.
Published: 01 January 2006
Abstract
Sensory memory is a short-lived persistence of a sensory stimulus in the nervous system, such as iconic memory in the visual system. However, little is known about the mechanisms underlying olfactory sensory memory. We have therefore analyzed the effect of odor stimuli on the first odor-processing network in the honeybee brain, the antennal lobe, which corresponds to the vertebrate olfactory bulb. We stained output neurons with a calcium-sensitive dye and measured across-glomerular patterns of spontaneous activity before and after a stimulus. Such a single-odor presentation changed the relative timing of spontaneous activity across glomeruli in accordance with Hebb's theory of learning. Moreover, during the first few minutes after odor presentation, correlations between the spontaneous activity fluctuations suffice to reconstruct the stimulus. As spontaneous activity is ubiquitous in the brain, modifiable fluctuations could provide an ideal substrate for Hebbian reverberations and sensory memory in other neural systems.
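The idea that a Hebbian trace left by one odor presentation can be read back out of spontaneous-activity correlations can be demonstrated in a toy network. Everything below is a hypothetical construction for illustration (glomerulus count, coupling strength, and the outer-product learning rule are our assumptions, not measurements from the paper).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10                                     # glomeruli (toy number)
odor = np.zeros(n)
odor[[1, 4, 7]] = 1.0                      # hypothetical across-glomerular pattern

# Hebbian trace of a single odor presentation: co-active glomeruli become
# coupled (outer-product rule, self-coupling removed).
W = 0.4 * np.outer(odor, odor)
np.fill_diagonal(W, 0.0)

# Spontaneous fluctuations afterwards: shared drive through W correlates
# exactly those glomeruli that the odor activated together.
noise = rng.standard_normal((5000, n))
activity = noise + noise @ W.T

# "Reconstruct the stimulus" from correlations alone: glomeruli with the
# strongest off-diagonal correlations are the ones the odor recruited.
C = np.corrcoef(activity.T)
score = (np.abs(C) - np.eye(n)).sum(axis=1)
reconstructed = sorted(int(i) for i in np.argsort(score)[-3:])
print(reconstructed)   # [1, 4, 7]
```

The decoder never sees the odor itself, only post-stimulus spontaneous activity — mirroring the abstract's claim that correlated fluctuations suffice to recover the stimulus.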
Neural Computation (2004) 16 (5): 999–1012.
Published: 01 May 2004
Abstract
The antennal lobe plays a central role in odor processing in insects, as demonstrated by electrophysiological and imaging experiments. Here we analyze the detailed temporal evolution of glomerular activity patterns in the antennal lobe of honeybees. We represent these spatiotemporal patterns as trajectories in a multidimensional space, where each dimension accounts for the activity of one glomerulus. Our data show that the trajectories reach odor-specific steady states (attractors) that correspond to stable activity patterns at about 1 second after stimulus onset. As revealed by a detailed mathematical investigation, the trajectories are characterized by different phases: response onset, steady-state plateau, response offset, and periods of spontaneous activity. An analysis based on support-vector machines quantifies the odor specificity of the attractors and the optimal time needed for odor discrimination. The results support the hypothesis of a spatial olfactory code in the antennal lobe and suggest a perceptron-like readout mechanism that is biologically implemented in a downstream network, such as the mushroom body.
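The "perceptron-like readout" suggested here can be sketched directly. The code below is a minimal stand-in, not the paper's SVM analysis: two hypothetical steady-state ("attractor") activity patterns over eight toy glomeruli, noisy trials around each, and a single linear unit trained with the classic perceptron rule to discriminate them.

```python
import numpy as np

rng = np.random.default_rng(2)
n_glom = 8                                   # toy number of glomeruli

# Hypothetical attractor patterns for two odors, plus trial-to-trial noise.
odor_A = rng.uniform(0.0, 1.0, n_glom)
odor_B = rng.uniform(0.0, 1.0, n_glom)
X = np.vstack([odor_A + 0.05 * rng.standard_normal((50, n_glom)),
               odor_B + 0.05 * rng.standard_normal((50, n_glom))])
y = np.array([1] * 50 + [-1] * 50)

# Perceptron-like readout: update weights only on misclassified trials.
w, b = np.zeros(n_glom), 0.0
for _ in range(20):                          # training epochs
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:
            w += yi * xi
            b += yi

accuracy = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {accuracy:.2f}")
```

Because the attractor patterns are well separated relative to the trial noise, a single linear readout suffices — consistent with the spatial-code hypothesis that downstream discrimination needs no temporal decoding.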
Neural Computation (2003) 15 (11): 2523–2564.
Published: 01 November 2003
Abstract
Spike-frequency adaptation is a prominent feature of neural dynamics. Among other mechanisms, various ionic currents modulating spike generation cause this type of neural adaptation. Prominent examples are voltage-gated potassium currents (M-type currents), the interplay of calcium currents and intracellular calcium dynamics with calcium-gated potassium channels (AHP-type currents), and the slow recovery from inactivation of the fast sodium current. While recent modeling studies have focused on the effects of specific adaptation currents, we derive a universal model for the firing-frequency dynamics of an adapting neuron that is independent of the specific adaptation process and spike generator. The model is completely defined by the neuron's onset f-I curve, the steady-state f-I curve, and the time constant of adaptation. For a specific neuron, these parameters can be easily determined from electrophysiological measurements without any pharmacological manipulations. At the same time, the simplicity of the model allows one to analyze mathematically how adaptation influences signal processing on the single-neuron level. In particular, we elucidate the specific nature of high-pass filter properties caused by spike-frequency adaptation. The model is limited to firing frequencies higher than the reciprocal adaptation time constant and to moderate fluctuations of the adaptation and the input current. As an extension of the model, we introduce a framework for combining an arbitrary spike generator with a generalized adaptation current.
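A linearized sketch shows how the three quantities named in the abstract — onset f-I curve, steady-state f-I curve, and adaptation time constant — fully determine the firing-frequency dynamics. We assume linear f-I curves and a simple relaxation of the adaptation variable; all parameter values are invented for illustration and the relaxation dynamics here are a simplification, not the paper's exact derivation.

```python
import numpy as np

# The three defining quantities: onset gain, steady-state gain (both in
# Hz/nA), rheobase (nA), and the adaptation time constant (s).
gamma_on, gamma_ss, I0, tau = 100.0, 40.0, 0.5, 0.1

f_onset  = lambda I: gamma_on * np.maximum(I - I0, 0.0)   # onset f-I curve
f_steady = lambda I: gamma_ss * np.maximum(I - I0, 0.0)   # steady-state f-I curve

def simulate(I, t_end=1.0, dt=1e-4):
    """Firing frequency after a current step: f(t) = f_onset(I - A).

    The adaptation variable A relaxes (time constant tau) toward the level
    that makes the steady-state response reproduce f_steady(I).
    """
    A_inf = (1.0 - gamma_ss / gamma_on) * (I - I0)
    A, f = 0.0, []
    for _ in np.arange(0.0, t_end, dt):
        f.append(f_onset(I - A))
        A += dt * (A_inf - A) / tau
    return np.array(f)

f = simulate(I=1.5)
print(f[0], f[-1])   # onset response ~100 Hz, adapts toward ~40 Hz
```

The step response decays from the onset curve's prediction to the steady-state curve's prediction, which is exactly the measurement protocol the abstract says suffices to fit the model.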
Neural Computation (2002) 14 (7): 1651–1667.
Published: 01 July 2002
Abstract
Field models provide an elegant mathematical framework to analyze large-scale patterns of neural activity. On the microscopic level, these models are usually based on either a firing-rate picture or integrate-and-fire dynamics. This article shows that in spite of the large conceptual differences between the two types of dynamics, both generate closely related plane-wave solutions. Furthermore, for a large group of models, estimates about the network connectivity derived from the speed of these plane waves only marginally depend on the assumed class of microscopic dynamics. We derive quantitative results about this phenomenon and discuss consequences for the interpretation of experimental data.
Neural Computation (2002) 14 (6): 1323–1346.
Published: 01 June 2002
Abstract
We investigate the energy efficiency of signaling mechanisms that transfer information by means of discrete stochastic events, such as the opening or closing of an ion channel. Using a simple model for the generation of graded electrical signals by sodium and potassium channels, we find optimum numbers of channels that maximize energy efficiency. The optima depend on several factors: the relative magnitudes of the signaling cost (current flow through channels), the fixed cost of maintaining the system, the reliability of the input, additional sources of noise, and the relative costs of upstream and downstream mechanisms. We also analyze how the statistics of input signals influence energy efficiency. We find that energy-efficient signal ensembles favor a bimodal distribution of channel activations and contain only a very small fraction of large inputs when energy is scarce. We conclude that when energy use is a significant constraint, trade-offs between information transfer and energy can strongly influence the number of signaling molecules and synapses used by neurons and the manner in which these mechanisms represent information.
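The existence of an optimal channel number can be illustrated with a deliberately simplified efficiency calculation. This is our own caricature of the trade-off, not the paper's model: we assume SNR grows linearly with channel number N (binomial channel noise), so information grows only logarithmically, while energy has a fixed maintenance cost plus a signaling cost linear in N; all constants are arbitrary.

```python
import numpy as np

# Assumed cost and signal constants (arbitrary units, for illustration only).
fixed_cost, unit_cost, snr_per_channel = 5.0, 0.1, 0.5

def efficiency(N):
    """Bits per unit energy for N channels under the toy assumptions."""
    info = 0.5 * np.log2(1.0 + snr_per_channel * N)   # log growth of information
    energy = fixed_cost + unit_cost * N               # fixed + signaling cost
    return info / energy

N = np.arange(1, 2001)
E = efficiency(N)
N_opt = int(N[np.argmax(E)])
print(f"optimal channel number: {N_opt}")
```

Because information saturates logarithmically while signaling cost keeps growing, efficiency peaks at an intermediate N — the qualitative shape behind the abstract's optimum, with the peak's location set by the relative fixed and signaling costs.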