Thomas Wennekers
1-5 of 5
Neural Computation (2011) 23 (2): 435–476.
Published: 01 February 2011
Abstract
Many neurons that initially respond to a stimulus stop responding if the stimulus is presented repeatedly but recover their response if a different stimulus is presented. This phenomenon is referred to as stimulus-specific adaptation (SSA). SSA has been investigated extensively using oddball experiments, which measure the responses of a neuron to sequences of stimuli. Neurons that exhibit SSA respond less vigorously to common stimuli, and the metric typically used to quantify this difference is the SSA index (SI). This article presents the first detailed analysis of the SI metric by examining the question: How should a system (e.g., a neuron) respond to stochastic input if it is to maximize the SI of its output? Questions like this one are particularly relevant to those wishing to construct computational models of SSA. If an artificial neural network receives stimulus information at a particular rate and must respond within a fixed time, what is the highest SI one can reasonably expect? We demonstrate that the optimal average SI is constrained by the information in the input source, the length and encoding of the memory, and the assumptions concerning how the task is decomposed.
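To make the SI metric concrete, here is a minimal Python sketch of the common two-stimulus SSA index from the oddball literature, computed from mean responses to each stimulus in its deviant (rare) and standard (common) roles. The simulated spike counts are illustrative placeholders, not data from the article.

```python
import numpy as np

def ssa_index(dev_f1, std_f1, dev_f2, std_f2):
    """Common two-frequency SSA index: d(f) is the mean response to
    frequency f when it is the rare deviant, s(f) when it is the common
    standard.  SI = (d(f1)+d(f2)-s(f1)-s(f2)) / (d(f1)+d(f2)+s(f1)+s(f2)).
    """
    num = dev_f1 + dev_f2 - std_f1 - std_f2
    den = dev_f1 + dev_f2 + std_f1 + std_f2
    return num / den

# Illustrative spike counts: an adapting neuron responds more to a tone
# when it is the deviant than when the same tone is the standard.
rng = np.random.default_rng(0)
dev_f1 = rng.poisson(8.0, 100).mean()   # f1 as deviant
std_f1 = rng.poisson(3.0, 100).mean()   # f1 as standard
dev_f2 = rng.poisson(7.0, 100).mean()
std_f2 = rng.poisson(2.5, 100).mean()

# Roughly 0.4-0.5 with these simulated rates; SI = 0 means no adaptation.
print(f"SI = {ssa_index(dev_f1, std_f1, dev_f2, std_f2):.3f}")
```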
Neural Computation (2005) 17 (10): 2258–2290.
Published: 01 October 2005
Abstract
We extend Linsker's Infomax principle for feedforward neural networks to a measure for stochastic interdependence that captures spatial and temporal signal properties in recurrent systems. This measure, stochastic interaction, quantifies the Kullback-Leibler divergence of a Markov chain from a product of split chains for the single-unit processes. For unconstrained Markov chains, the maximization of stochastic interaction, also called Temporal Infomax, has previously been shown to result in almost deterministic dynamics. This letter considers Temporal Infomax on constrained Markov chains, where some of the units are clamped to prescribed stochastic processes providing input to the system. Temporal Infomax in that case leads to finite state automata, either completely deterministic or weakly nondeterministic. Transitions between internal states of these systems are almost perfectly predictable given the complete current state and the input, but the activity of each single unit alone is virtually random. The results are demonstrated by means of computer simulations and confirmed analytically. It is furthermore shown numerically that Temporal Infomax leads to a high information flow from the input to internal units and that a simple temporal learning rule can approximately achieve the optimization of temporal interaction. We relate these results to experimental data concerning the correlation dynamics and functional connectivities observed in multiple-electrode recordings.
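As a numerical illustration of the measure itself (not of the optimization), the sketch below computes stochastic interaction for two binary units forming a joint Markov chain, as the difference between the summed entropy rates of the split single-unit chains and the entropy rate of the joint chain. The random transition matrix is purely illustrative.

```python
import numpy as np
from itertools import product

# Stochastic interaction I = sum_i h_i - h, where h is the entropy rate of
# the joint chain and h_i that of the split (single-unit) chain; I >= 0,
# and I = 0 iff the units evolve independently.

states = list(product([0, 1], repeat=2))   # joint states (x1, x2)
n = len(states)
rng = np.random.default_rng(1)
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)          # row-stochastic joint kernel

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

def entropy_rate(K, p):
    """Entropy rate (bits/step) of kernel K under stationary vector p."""
    logK = np.log2(np.where(K > 0, K, 1.0))   # convention: 0 * log 0 = 0
    return -np.sum(p[:, None] * K * logK)

h_joint = entropy_rate(P, pi)

h_split = 0.0
for i in range(2):
    Ki = np.zeros((2, 2))                  # marginalized kernel for unit i
    pi_i = np.zeros(2)
    for s_idx, s in enumerate(states):
        pi_i[s[i]] += pi[s_idx]
        for t_idx, t in enumerate(states):
            Ki[s[i], t[i]] += pi[s_idx] * P[s_idx, t_idx]
    Ki /= pi_i[:, None]
    h_split += entropy_rate(Ki, pi_i)

print(f"stochastic interaction I = {h_split - h_joint:.4f} bits/step")
```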
Neural Computation (2002) 14 (8): 1801–1825.
Published: 01 August 2002
Abstract
This article presents an approximation method to reduce the spatiotemporal behavior of localized activation peaks (also called “bumps”) in nonlinear neural field equations to a set of coupled ordinary differential equations (ODEs) for only the amplitudes and tuning widths of these peaks. This enables a simplified analysis of steady-state receptive fields and their stability, as well as spatiotemporal point spread functions and dynamic tuning properties. A lowest-order approximation for peak amplitudes alone shows that much of the well-studied behavior of small neural systems (e.g., the Wilson-Cowan oscillator) should carry over to localized solutions in neural fields. Full spatiotemporal response profiles can further be reconstructed from this low-dimensional approximation. The method is applied to two standard neural field models: a one-layer model with difference-of-gaussians connectivity kernel and a two-layer excitatory-inhibitory network. Similar models have been previously employed in numerical studies addressing orientation tuning of cortical simple cells. Explicit formulas for tuning properties, instabilities, and oscillation frequencies are given, and exemplary spatiotemporal response functions, reconstructed from the low-dimensional approximation, are compared with full network simulations.
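For a flavor of what such a reduction yields: below is a minimal sketch of a two-variable amplitude system of the Wilson-Cowan type, the kind of low-dimensional ODE the method produces for the peak amplitudes of an excitatory-inhibitory pair. The coupling constants and sigmoid parameters are illustrative choices, not values derived from the field equations in the article.

```python
import numpy as np

# Wilson-Cowan-type amplitude equations (illustrative parameters):
#   tau_e dE/dt = -E + S_e(c1*E - c2*I + P)
#   tau_i dI/dt = -I + S_i(c3*E - c4*I + Q)

def S(x, a, th):
    """Logistic rate function, shifted so that S(0) = 0."""
    return 1.0 / (1.0 + np.exp(-a * (x - th))) - 1.0 / (1.0 + np.exp(a * th))

c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0
P, Q = 1.25, 0.0
tau_e = tau_i = 8.0
dt = 0.05

E, I = 0.1, 0.05
E_trace = []
for _ in range(20000):                    # forward-Euler integration
    dE = (-E + S(c1 * E - c2 * I + P, a=1.3, th=4.0)) / tau_e
    dI = (-I + S(c3 * E - c4 * I + Q, a=2.0, th=3.7)) / tau_i
    E, I = E + dt * dE, I + dt * dI
    E_trace.append(E)

tail = E_trace[len(E_trace) // 2:]        # discard the initial transient
print(f"E range after transient: [{min(tail):.3f}, {max(tail):.3f}]")
# A clearly nonzero range indicates a limit cycle (the Wilson-Cowan
# oscillator regime); a collapsed range indicates a fixed point.
```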
Neural Computation (2001) 13 (8): 1721–1747.
Published: 01 August 2001
Abstract
We present a general approximation method for the mathematical analysis of spatially localized steady-state solutions in nonlinear neural field models. These models comprise several layers of excitatory and inhibitory cells. Coupling kernels between and inside layers are assumed to be gaussian shaped. In response to spatially localized (i.e., tuned) inputs, such networks typically reveal stationary localized activity profiles in the different layers. Qualitative properties of these solutions, like response amplitudes and tuning widths, are approximated for a whole class of nonlinear rate functions that obey a power law above some threshold and that are zero below. A special case of these functions is the semilinear function, which is commonly used in neural field models. The method is then applied to models for orientation tuning in cortical simple cells: first, to the one-layer model with “difference of gaussians” connectivity kernel developed by Carandini and Ringach (1997) as an abstraction of the biologically detailed simulations of Somers, Nelson, and Sur (1995); second, to a two-field model comprising excitatory and inhibitory cells in two separate layers. Under certain conditions, both models have the same steady states. Comparing simulations of the field models and results derived from the approximation method, we find that the approximation well predicts the tuning behavior of the full model. Moreover, explicit formulas for approximate amplitudes and tuning widths in response to changing input strength are given and checked numerically. Comparing the network behavior for different nonlinearities, we find that the only rate function (from the class of functions under study) that leads to constant tuning widths and a linear increase of firing rates in response to increasing input is the semilinear function. For other nonlinearities, the qualitative network response depends on whether the model neurons operate in a convex (e.g., x^2) or concave (e.g., sqrt(x)) regime of their rate function. In the first case, tuning gradually changes from input driven at low input strength (broad tuning strongly depending on the input and roughly linear amplitudes in response to input strength) to recurrently driven at moderate input strength (sharp tuning, supra-linear increase of amplitudes in response to input strength). For concave rate functions, the network reveals stable hysteresis between a state at low firing rates and a tuned state at high rates. This means that the network can “memorize” tuning properties of a previously shown stimulus. Sigmoid rate functions can combine both effects. In contrast to the Carandini-Ringach model, the two-field model further reveals oscillations with typical frequencies in the beta and gamma range, when the excitatory and inhibitory connections are relatively strong. This suggests a rhythmic modulation of tuning properties during cortical oscillations.
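A minimal sketch of the model class under study: a one-layer orientation ring with a difference-of-gaussians kernel and a threshold power-law rate function f(x) = max(x, 0)^p, relaxed to its steady state. All widths, weights, and the exponent are illustrative stand-ins for the fitted values in the article.

```python
import numpy as np

N = 128
theta = np.linspace(-np.pi / 2, np.pi / 2, N, endpoint=False)  # orientations

def gauss(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2))

d = theta[:, None] - theta[None, :]
d = (d + np.pi / 2) % np.pi - np.pi / 2        # periodic distance on the ring
W = (1.5 * gauss(d, 0.2) - 1.0 * gauss(d, 0.6)) * (np.pi / N)  # DoG kernel

def f(x, p=1.0):                               # p = 1: the semilinear case
    return np.maximum(x, 0.0) ** p

h = gauss(theta, 0.3)                          # tuned feedforward input
u = np.zeros(N)
for _ in range(4000):                          # relax to the steady state
    u += 0.05 * (-u + W @ f(u) + h)

r = f(u)
above = r >= r.max() / 2
half_width = np.degrees(above.sum() * np.pi / N) / 2
print(f"peak rate {r.max():.3f}, tuning half-width ~ {half_width:.1f} deg")
```

Sweeping the input gain and the exponent p in this sketch is one way to probe the input-driven versus recurrently driven regimes (and the hysteresis for concave p < 1) described in the abstract.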
Neural Computation (2001) 13 (1): 139–159.
Published: 01 January 2001
Abstract
Receptive fields (RFs) in the visual cortex can change their size depending on the state of the individual. This reflects a changing visual resolution according to different demands on information processing during drowsiness. So far, however, the possible mechanisms that underlie these size changes have not been tested rigorously. Only qualitatively has it been suggested that state-dependent lateral geniculate nucleus (LGN) firing patterns (burst versus tonic firing) are mainly responsible for the observed cortical receptive field restructuring. Here, we employ a neural field approach to describe the changes of cortical RF properties analytically. Expressions to describe the spatiotemporal receptive fields are given for pure feedforward networks. The model predicts that visual latencies increase nonlinearly with the distance of the stimulus location from the RF center. RF restructuring effects are faithfully reproduced. Despite the changing RF sizes, the model demonstrates that the width of the spatial membrane potential profile (as measured by the standard deviation σ of a gaussian) remains constant in cortex. In contrast, it is shown for recurrent networks that both the RF width and the width of the membrane potential profile generically depend on time and can even increase if lateral cortical excitatory connections extend further than fibers from LGN to cortex. In order to differentiate between a feedforward and a recurrent mechanism causing the experimental RF changes, we fitted the data to the analytically derived point-spread functions. Results of the fits provide estimates for model parameters consistent with the literature data and support the hypothesis that the observed RF sharpening is indeed mainly driven by input from LGN, not by recurrent intracortical connections.
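One of the predictions above can be reproduced in a few lines: in a purely feedforward model with a gaussian kernel and a step stimulus, the time for the membrane potential to cross a fixed threshold grows nonlinearly with stimulus distance from the RF center and diverges at the RF border. The kernel width, threshold, and time constant below are illustrative values, not the fitted parameters.

```python
import math

tau = 10.0     # membrane time constant (ms), illustrative
sigma = 1.0    # feedforward kernel width (deg), illustrative
theta = 0.2    # response threshold, relative to the peak drive of 1.0

def latency(x):
    """Threshold-crossing time for u(t) = w(x) * (1 - exp(-t/tau))."""
    w = math.exp(-x**2 / (2 * sigma**2))    # gaussian drive vs. distance
    if w <= theta:
        return math.inf                     # never reaches threshold
    return -tau * math.log(1.0 - theta / w)

for x in (0.0, 0.5, 1.0, 1.5):
    print(f"distance {x:.1f} deg -> latency {latency(x):6.1f} ms")
```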