Christof Koch (results 1-20 of 22)
Journal Articles
Publisher: Journals Gateway
Neural Computation (2008) 20 (8): 2070–2084.
Published: 01 August 2008
Abstract
Based on the membrane currents generated by an action potential in a biologically realistic model of a hippocampal pyramidal cell in rat area CA1, we perform a moment expansion of the extracellular field potential. We decompose the potential into both inverse and classical moments and show that this method is a rapid and efficient way to calculate the extracellular field both near and far from the cell body. The action potential gives rise to a large quadrupole moment that contributes to the extracellular field up to distances of almost 1 cm. This method serves as a starting point for connecting the microscopic generation of electric fields at the level of individual neurons to macroscopic observables such as the local field potential.
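The idea behind the expansion can be sketched in a few lines: represent the membrane currents as point sources, note that the monopole term vanishes because the currents sum to zero, and compare the leading (dipole) term against the exact superposition of point-source potentials. All parameter values, source geometry, and function names below are illustrative assumptions, not the paper's code.

```python
import numpy as np

SIGMA = 0.3  # extracellular conductivity in S/m (an assumed, typical value)

def exact_potential(r_obs, src_pos, src_I):
    """Superposition of point-source potentials:
    phi(r) = sum_i I_i / (4*pi*sigma*|r - r_i|)."""
    d = np.linalg.norm(r_obs - src_pos, axis=1)
    return np.sum(src_I / (4 * np.pi * SIGMA * d))

def dipole_potential(r_obs, src_pos, src_I):
    """Leading (dipole) term of the moment expansion about the origin;
    the monopole term vanishes because the membrane currents sum to zero."""
    p = np.sum(src_I[:, None] * src_pos, axis=0)  # current dipole moment (A*m)
    r = np.linalg.norm(r_obs)
    return np.dot(p, r_obs) / (4 * np.pi * SIGMA * r**3)

# Toy "cell": one sink/source pair 100 um apart, net current zero
src_pos = np.array([[0.0, 0.0, -50e-6], [0.0, 0.0, 50e-6]])  # positions (m)
src_I = np.array([-1e-9, 1e-9])                              # currents (A)

far = np.array([0.0, 0.0, 5e-3])  # observation point 5 mm away, on the axis
phi_exact = exact_potential(far, src_pos, src_I)
phi_dip = dipole_potential(far, src_pos, src_I)
print(phi_exact, phi_dip)  # the two agree closely at this distance
```

Far from the cell the single dipole evaluation replaces a sum over every membrane compartment, which is the efficiency gain the abstract refers to; the paper additionally carries the expansion to the quadrupole term.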
Neural Computation (2008) 20 (5): 1165–1178.
Published: 01 May 2008
Abstract
Motivated by the existence of highly selective, sparsely firing cells observed in the human medial temporal lobe (MTL), we present an unsupervised method for learning and recognizing object categories from unlabeled images. In our model, a network of nonlinear neurons learns a sparse representation of its inputs through an unsupervised expectation-maximization process. We show that the application of this strategy to an invariant feature-based description of natural images leads to the development of units displaying sparse, invariant selectivity for particular individuals or image categories much like those observed in the MTL data.
Neural Computation (2004) 16 (12): 2507–2532.
Published: 01 December 2004
Abstract
The basic requirement for direction selectivity is a nonlinear interaction between two different inputs in space-time. In some models, the interaction is hypothesized to occur between excitation and inhibition of the shunting type in the neuron's dendritic tree. How can the required spatial specificity be acquired in an unsupervised manner? We here propose an activity-based, local learning model that can account for direction selectivity in visual cortex based on such a local veto operation and that depends on synaptically induced changes in intracellular calcium concentration. Our biophysical simulations suggest that a model cell with our learning algorithm can develop direction selectivity organically after unsupervised training. The learning rule is also applicable to a neuron with multiple-direction-selective subunits and to a pair of cells with opposite-direction selectivities and is stable under different starting conditions, delays, and velocities.
Neural Computation (2003) 15 (4): 735–759.
Published: 01 April 2003
Abstract
Reverse-phi motion is the illusory reversal of the perceived direction of movement when the stimulus contrast is reversed in successive frames. Livingstone, Tsao, and Conway (2000) showed that direction-selective cells in the striate cortex of the alert macaque monkey exhibit reversed excitatory and inhibitory regions when bars of opposite contrast are flashed sequentially in a two-bar interaction analysis. While correlation or motion-energy models predict the reverse-phi response, it is unclear how neurons accomplish this. We carried out detailed biophysical simulations of a direction-selective cell model implementing a synaptic shunting scheme. Our results suggest that a simple synaptic-veto mechanism with strong direction selectivity for normal motion cannot account for the observed reverse-phi effect. Given the nature of reverse-phi motion, a direct interaction between the ON and OFF pathways, missing in the original shunting-inhibition model, is essential to account for the reversal of the response. We therefore propose a double synaptic-veto mechanism in which ON excitatory synapses are gated by both delayed ON inhibition on their null side and delayed OFF inhibition on their preferred side; the converse applies to OFF excitatory synapses. Mapping this scheme onto the dendrites of a direction-selective neuron allows the model to respond best to normal motion in its preferred direction and to reverse-phi motion in its null direction. Two-bar interaction maps show reversed excitatory and inhibitory regions when bars of opposite contrast are presented.
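Why correlation-type models predict the reversal can be seen with a minimal Reichardt-style sketch: each subunit multiplies a delayed pixel with its neighbor, so flipping the contrast on alternate frames flips the sign of every product. Everything here (stimulus, grid size, function names) is an illustrative toy, not the paper's biophysical model.

```python
import numpy as np

def correlator_response(frames):
    """Summed Reichardt-correlator output over a sequence of 1-D frames.
    Each detector correlates pixel i at frame t-1 with pixel i+1 at frame t,
    minus the mirror term, so rightward motion yields a positive response."""
    total = 0.0
    for prev, cur in zip(frames[:-1], frames[1:]):
        total += np.sum(prev[:-1] * cur[1:] - prev[1:] * cur[:-1])
    return total

n = 32
def bar(pos, contrast):
    frame = np.zeros(n)
    frame[pos] = contrast
    return frame

normal = [bar(p, 1.0) for p in range(5, 15)]               # rightward motion
reverse_phi = [bar(p, (-1.0) ** p) for p in range(5, 15)]  # contrast flips each frame

print(correlator_response(normal), correlator_response(reverse_phi))
```

The rightward bar drives the detector positively, while the contrast-alternating bar produces an equal and opposite (null-direction) response; the paper's question is how a shunting-inhibition circuit, rather than an explicit multiplier, can reproduce this sign flip.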
Neural Computation (2002) 14 (2): 347–367.
Published: 01 February 2002
Abstract
It remains unclear whether the variability of neuronal spike trains in vivo arises due to biological noise sources or represents highly precise encoding of temporally varying synaptic input signals. Determining the variability of spike timing can provide fundamental insights into the nature of strategies used in the brain to represent and transmit information in the form of discrete spike trains. In this study, we employ a signal estimation paradigm to determine how variability in spike timing affects encoding of random time-varying signals. We assess this for two types of spiking models: an integrate-and-fire model with random threshold and a more biophysically realistic stochastic ion channel model. Using the coding fraction and mutual information as information-theoretic measures, we quantify the efficacy of optimal linear decoding of random inputs from the model outputs and study the relationship between efficacy and variability in the output spike train. Our findings suggest that variability does not necessarily hinder signal decoding for the biophysically plausible encoders examined and that the functional role of spiking variability depends intimately on the nature of the encoder and the signal processing task; variability can either enhance or impede decoding performance.
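The first encoder type can be sketched in a few lines: a leaky integrate-and-fire neuron whose threshold is redrawn after each spike, so that threshold noise injects spike-timing variability. All parameter values and names are made up for illustration; this is not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_random_threshold(I, dt=1e-4, tau=20e-3, v_reset=0.0,
                         theta_mean=1.0, theta_sd=0.0):
    """Leaky integrate-and-fire neuron; the threshold is redrawn from a
    Gaussian after every spike, a simple source of timing variability."""
    v, theta = 0.0, theta_mean
    spikes = []
    for step, i_t in enumerate(I):
        v += dt / tau * (-v + i_t)       # forward-Euler membrane update
        if v >= theta:
            spikes.append(step * dt)
            v = v_reset
            theta = theta_mean + theta_sd * rng.standard_normal()
    return np.array(spikes)

def cv(spike_times):
    """Coefficient of variation of the interspike intervals."""
    isi = np.diff(spike_times)
    return isi.std() / isi.mean()

I = np.full(200_000, 1.5)                       # 20 s of constant drive
det = lif_random_threshold(I)                   # deterministic threshold
noisy = lif_random_threshold(I, theta_sd=0.2)   # random threshold

print(cv(det), cv(noisy))
```

With constant input the deterministic cell fires like clockwork (CV near zero), while the random threshold produces irregular intervals; the paper then asks how such output variability interacts with decoding a time-varying input rather than a constant one.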
Neural Computation (2001) 13 (1): 1–33.
Published: 01 January 2001
Abstract
The temporal precision with which neurons respond to synaptic inputs has a direct bearing on the nature of the neural code. A characterization of the neuronal noise sources associated with different subcellular components (synapse, dendrite, soma, axon, and so on) is needed to understand the relationship between noise and information transfer. Here we study the effect of the unreliable, probabilistic nature of synaptic transmission on information transfer in the absence of interaction among presynaptic inputs. We derive theoretical lower bounds on the capacity of a simple model of a cortical synapse under two different paradigms. In signal estimation, the signal is assumed to be encoded in the mean firing rate of the presynaptic neuron, and the objective is to estimate the continuous input signal from the postsynaptic voltage. In signal detection, the input is binary, and the presence or absence of a presynaptic action potential is to be detected from the postsynaptic voltage. The efficacy of information transfer in synaptic transmission is characterized by deriving optimal strategies under these two paradigms. On the basis of parameter values derived from neocortex, we find that single cortical synapses cannot transmit information reliably, but redundancy obtained using a small number of multiple synapses leads to a significant improvement in the information capacity of synaptic transmission.
Neural Computation (2000) 12 (10): 2291–2304.
Published: 01 October 2000
Abstract
Flies are capable of stabilizing their body during free flight by using visual motion information to estimate self-rotation. We have built a hardware model of this optomotor control system in a standard CMOS VLSI process. The result is a small, low-power chip that receives input directly from the real world through on-board photoreceptors and generates motor commands in real time. The chip was tested under closed-loop conditions typically used for insect studies. The silicon system exhibited stable control sufficiently analogous to the biological system to allow for quantitative comparisons.
Neural Computation (1999) 11 (8): 1893–1913.
Published: 15 November 1999
Abstract
It is generally assumed that nerve cells optimize their performance to reflect the statistics of their input. Electronic circuit analogs of neurons require similar methods of self-optimization for stable and autonomous operation. We here describe and demonstrate a biologically plausible adaptive algorithm that enables a neuron to adapt the current threshold and the slope (or gain) of its current-frequency relationship to match the mean (or dc offset) and variance (or dynamic range or contrast) of the time-varying somatic input current. The adaptation algorithm estimates the somatic current signal from the spike train by way of the intracellular somatic calcium concentration, thereby continuously adjusting the neuron's firing dynamics. This principle is shown to work in an analog VLSI-designed silicon neuron.
Neural Computation (1999) 11 (8): 1797–1829.
Published: 15 November 1999
Abstract
In recent theoretical approaches addressing the problem of neural coding, tools from statistical estimation and information theory have been applied to quantify the ability of neurons to transmit information through their spike outputs. These techniques, though fairly general, ignore the specific nature of neuronal processing in terms of its known biophysical properties. However, a systematic study of processing at various stages in a biophysically faithful model of a single neuron can identify the role of each stage in information transfer. Toward this end, we carry out a theoretical analysis of the information loss of a synaptic signal propagating along a linear, one-dimensional, weakly active cable due to neuronal noise sources along the way, using both a signal reconstruction and a signal detection paradigm. Here we begin such an analysis by quantitatively characterizing three sources of membrane noise: (1) thermal noise due to the passive membrane resistance, (2) noise due to stochastic openings and closings of voltage-gated membrane channels (Na+ and K+), and (3) noise due to random, background synaptic activity. Using analytical expressions for the power spectral densities of these noise sources, we compare their magnitudes in the case of a patch of membrane from a cortical pyramidal cell and explore their dependence on different biophysical parameters.
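The first of these sources has a textbook closed form: the Johnson-Nyquist current-noise spectral density of a resistor, S_I = 4 k_B T / R, applied to the passive resistance of a membrane patch. The patch size and specific resistance below are assumed round numbers, not the paper's values.

```python
k_B, T = 1.38e-23, 310.0   # Boltzmann constant (J/K), body temperature (K)

def thermal_current_psd(R):
    """One-sided Johnson-Nyquist current-noise PSD, S_I = 4*k_B*T/R, in A^2/Hz."""
    return 4.0 * k_B * T / R

# Toy patch: 1000 um^2 of membrane at 50 kOhm*cm^2 specific resistance (assumed)
area_cm2 = 1000e-8                # 1000 um^2 expressed in cm^2
R_patch = 50e3 / area_cm2         # patch resistance in Ohm (~5 GOhm)
print(thermal_current_psd(R_patch))
```

Because the PSD is flat and inversely proportional to R, smaller (higher-resistance) patches are thermally noisier per unit bandwidth; the channel and synaptic noise sources in the abstract require the frequency-dependent spectra the paper derives.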
Neural Computation (1999) 11 (8): 1831–1873.
Published: 15 November 1999
Abstract
This is the second in a series of articles that seek to recast classical single-neuron biophysics in information-theoretical terms. Classical cable theory focuses on analyzing the voltage or current attenuation of a synaptic signal as it propagates from its dendritic input location to the spike initiation zone. On the other hand, we are interested in analyzing the amount of information lost about the signal in this process due to the presence of various noise sources distributed throughout the neuronal membrane. We use a stochastic version of the linear one-dimensional cable equation to derive closed-form expressions for the second-order moments of the fluctuations of the membrane potential associated with different membrane current noise sources: thermal noise, noise due to the random opening and closing of sodium and potassium channels, and noise due to the presence of “spontaneous” synaptic input. We consider two different scenarios. In the signal estimation paradigm, the time course of the membrane potential at a location on the cable is used to reconstruct the detailed time course of a random, band-limited current injected some distance away. Estimation performance is characterized in terms of the coding fraction and the mutual information. In the signal detection paradigm, the membrane potential is used to determine whether a distant synaptic event occurred within a given observation interval. In the light of our analytical results, we speculate that the length of weakly active apical dendrites might be limited by the information loss due to the accumulated noise between distal synaptic input sites and the soma and that the presence of dendritic nonlinearities probably serves to increase dendritic information transfer.
Neural Computation (1999) 11 (1): 243–265.
Published: 01 January 1999
Abstract
One way to understand a neurobiological system is by building a simulacrum that replicates its behavior in real time using similar constraints. Analog very large-scale integrated (VLSI) electronic circuit technology provides such an enabling technology. We here describe a neuromorphic system that is part of a long-term effort to understand the primate oculomotor system. It requires both fast sensory processing and fast motor control to interact with the world. A one-dimensional hardware model of the primate eye has been built that simulates the physical dynamics of the biological system. It is driven by two different analog VLSI chips, one mimicking cortical visual processing for target selection and tracking and another modeling brain stem circuits that drive the eye muscles. Our oculomotor plant demonstrates both smooth pursuit movements, driven by a retinal velocity error signal, and saccadic eye movements, controlled by retinal position error, and can reproduce several behavioral, stimulation, lesion, and adaptation experiments performed on primates.
Neural Computation (1997) 9 (5): 1001–1013.
Published: 01 July 1997
Abstract
Shunting inhibition, a conductance increase with a reversal potential close to the resting potential of the cell, has been shown to have a divisive effect on subthreshold excitatory postsynaptic potential amplitudes. It has therefore been assumed to have the same divisive effect on firing rates. We show that shunting inhibition actually has a subtractive effect on the firing rate in most circumstances. Averaged over several interspike intervals, the spiking mechanism effectively clamps the somatic membrane potential to a value significantly above the resting potential, so that the current through the shunting conductance is approximately independent of the firing rate. This leads to a subtractive rather than a divisive effect. In addition, at distal synapses, shunting inhibition will also have an approximately subtractive effect if the excitatory conductance is not small compared to the inhibitory conductance. Therefore, regulating a cell's passive membrane conductance, for instance via massive feedback, is not an adequate mechanism for normalizing or scaling its output.
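The effect can be seen in the closed-form f-I curve of a conductance-based integrate-and-fire cell whose shunting reversal potential equals rest: adding a shunt shifts the curve by a roughly constant rate, rather than scaling it. The parameter values and units below are arbitrary toy assumptions, not the paper's model.

```python
import numpy as np

C, g_L, theta = 1.0, 0.05, 1.0   # capacitance, leak conductance, threshold (toy units)

def rate(I, g_shunt=0.0):
    """Firing rate of a conductance-based integrate-and-fire neuron with
    rest = reset = shunt reversal = 0 and no refractory period:
    f = g / (C * ln(I / (I - g*theta))) above rheobase, 0 below."""
    g = g_L + g_shunt
    I = np.asarray(I, dtype=float)
    f = np.zeros_like(I)
    supra = I > g * theta                  # below rheobase the cell is silent
    f[supra] = g / (C * np.log(I[supra] / (I[supra] - g * theta)))
    return f

I = np.linspace(0.5, 2.0, 7)
f0 = rate(I)                   # no inhibition
fs = rate(I, g_shunt=0.1)      # with a shunting conductance

print(np.round(f0 - fs, 3))    # nearly constant -> subtractive shift
print(np.round(f0 / fs, 3))    # clearly not constant -> not divisive
```

In the suprathreshold regime the rate difference hovers near g_shunt/(2C), consistent with the abstract's argument that the mean somatic potential (here about theta/2) fixes the shunt current independently of the firing rate.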
Neural Computation (1996) 8 (6): 1185–1202.
Published: 01 August 1996
Abstract
How reliably do action potentials in cortical neurons encode information about a visual stimulus? Most physiological studies do not weigh the occurrences of particular action potentials as significant but treat them only as reflections of average neuronal excitation. We report that single neurons recorded in a previous study by Newsome et al. (1989; see also Britten et al. 1992) from cortical area MT in the behaving monkey respond to dynamic and unpredictable motion stimuli with a markedly reproducible temporal modulation that is precise to a few milliseconds. This temporal modulation is stimulus dependent, being present for highly dynamic random motion but absent when the stimulus translates rigidly.
Neural Computation (1996) 8 (1): 44–66.
Published: 01 January 1996
Abstract
Recently, methods of statistical estimation theory have been applied by Bialek and collaborators (1991) to reconstruct time-varying velocity signals and to investigate the processing of visual information by a directionally selective motion detector in the fly's visual system, the H1 cell. We summarize here our theoretical results obtained by studying these reconstructions starting from a simple model of H1 based on experimental data. Under additional technical assumptions, we derive a closed expression for the Fourier transform of the optimal reconstruction filter in terms of the statistics of the stimulus and the characteristics of the model neuron, such as its firing rate. It is shown that linear reconstruction filters will change in a nontrivial way if the statistics of the signal or the mean firing rate of the cell changes. Analytical expressions are then derived for the mean square error in the reconstructions and the lower bound on the rate of information transmission that was estimated experimentally by Bialek et al. (1991). For plausible values of the parameters, the model is in qualitative agreement with experimental data. We show that the rate of information transmission and mean square error represent different measures of the reconstructions: in particular, satisfactory reconstructions in terms of the mean square error can be achieved only using stimuli that are matched to the properties of the recorded cell. Finally, it is shown that at least for the class of models presented here, reconstruction methods can be understood as a generalization of the more familiar reverse-correlation technique.
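The reconstruction pipeline can be sketched end to end: encode a band-limited stimulus as an inhomogeneous Poisson spike count, estimate the optimal linear filter in the frequency domain from segment-averaged cross- and auto-spectra, and score the reconstruction by the coding fraction. The rectified-linear Poisson encoder and every parameter below are toy assumptions standing in for the H1 model, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, seg = 4096, 1e-3, 8

# Band-limited Gaussian stimulus (a stand-in for the time-varying velocity)
W = np.fft.rfft(rng.standard_normal(n))
W[np.fft.rfftfreq(n, dt) > 50.0] = 0.0          # 50 Hz cutoff
s = np.fft.irfft(W, n)
s = (s - s.mean()) / s.std()

# Poisson encoder: firing rate is a rectified linear function of the stimulus
firing_rate = np.clip(100.0 + 80.0 * s, 0.0, None)   # spikes/s
x = rng.poisson(firing_rate * dt).astype(float)      # binned spike counts
x -= x.mean()

# Optimal linear (Wiener) filter, H(f) = <S X*> / <|X|^2>,
# estimated by averaging spectra over non-overlapping segments
Ss = np.fft.rfft(s.reshape(seg, -1), axis=1)
Xs = np.fft.rfft(x.reshape(seg, -1), axis=1)
H = (Ss * np.conj(Xs)).mean(axis=0) / ((np.abs(Xs) ** 2).mean(axis=0) + 1e-12)

s_hat = np.fft.irfft(H * Xs, n // seg, axis=1).ravel()  # linear reconstruction
mse = np.mean((s - s_hat) ** 2)
coding_fraction = 1.0 - mse / s.var()
print(float(coding_fraction))
```

Rerunning with a different stimulus bandwidth or mean rate changes H itself, which is the abstract's point that the optimal filter depends nontrivially on the stimulus statistics and the cell's firing rate.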
Neural Computation (1994) 6 (5): 795–836.
Published: 01 September 1994
Abstract
We investigate a model for neural activity in a two-dimensional sheet of leaky integrate-and-fire neurons with feedback connectivity consisting of local excitation and surround inhibition. Each neuron receives stochastic input from an external source, independent in space and time. As recently suggested by Softky and Koch (1992, 1993), independent stochastic input alone cannot explain the high interspike interval variability exhibited by cortical neurons in behaving monkeys. We show that high variability can be obtained due to the amplification of correlated fluctuations in a recurrent network. Furthermore, the cross-correlation functions have a dual structure, with a sharp peak on top of a much broader hill. This is due to the inhibitory and excitatory feedback connections, which cause "hotspots" of neural activity to form within the network. These localized patterns of excitation appear as clusters or stripes that coalesce, disintegrate, or fluctuate in size while simultaneously moving in a random walk constrained by the interaction with other clusters. The synaptic current impinging upon a single neuron shows large fluctuations at many time scales, leading to a large coefficient of variation (CV) for the interspike interval statistics. The power spectrum associated with single units shows a 1/f decay at low frequencies and is flat at higher frequencies, while the power spectrum of the spiking activity averaged over many cells—equivalent to the local field potential—shows no 1/f decay but a prominent peak around 40 Hz, in agreement with data recorded from cat and monkey cortex (Gray et al. 1990; Eckhorn et al. 1993). Firing rates exhibit self-similarity between 20 and 800 msec, resulting in 1/f-like noise, consistent with the fractal nature of neural spike trains (Teich 1992).
Neural Computation (1994) 6 (4): 622–641.
Published: 01 July 1994
Abstract
It is commonly assumed that temporal synchronization of excitatory synaptic inputs onto a single neuron increases its firing rate. We investigate here the role of synaptic synchronization for the leaky integrate-and-fire neuron as well as for a biophysically and anatomically detailed compartmental model of a cortical pyramidal cell. We find that if the number of excitatory inputs, N, is on the same order as the number of fully synchronized inputs necessary to trigger a single action potential, N_t, synchronization always increases the firing rate (for both constant and Poisson-distributed input). However, for large values of N compared to N_t, "overcrowding" occurs and temporal synchronization is detrimental to firing frequency. This behavior is caused by the conflicting influences of the low-pass nature of the passive dendritic membrane on the one hand and the refractory period on the other. If both the temporal synchronization and the fraction of synchronized inputs (Murthy and Fetz 1993) are varied, synchronization is advantageous only if either N or the average input frequency, f_in, is small enough.
Neural Computation (1992) 4 (5): 643–646.
Published: 01 September 1992
Neural Computation (1992) 4 (3): 332–340.
Published: 01 May 1992
Abstract
To what extent do the mechanisms generating different receptive field properties of neurons depend on each other? We investigated this question theoretically within the context of orientation and direction tuning of simple cells in the mammalian visual cortex. In our model, a cortical cell of the "simple" type receives its orientation tuning by afferent convergence of aligned receptive fields of the lateral geniculate nucleus (Hubel and Wiesel 1962). We sharpen this orientation bias by postulating a special type of radially symmetric long-range lateral inhibition called circular inhibition. Surprisingly, this isotropic mechanism leads to the emergence of a strong bias for the direction of motion of a bar. We show that this directional anisotropy is neither caused by the probabilistic nature of the connections nor a consequence of the specific columnar structure chosen but is an inherent feature of the architecture of visual cortex.
Neural Computation (1992) 4 (2): 211–223.
Published: 01 March 1992
Abstract
The dynamic behavior of a network model consisting of all-to-all excitatory coupled binary neurons with global inhibition is studied analytically and numerically. We prove that for random input signals, the output of the network consists of synchronized bursts with apparently random intermissions of noisy activity. We introduce the fraction of simultaneously firing neurons as a measure for synchrony and prove that its temporal correlation function displays, besides a delta peak at zero indicating random processes, strongly dampened oscillations. Our results suggest that synchronous bursts can be generated by a simple neuronal architecture that amplifies incoming coincident signals. This synchronization process is accompanied by dampened oscillations that, by themselves, however, do not play any constructive role in this and can therefore be considered to be an epiphenomenon.
Neural Computation (1989) 1 (3): 318–320.
Published: 01 September 1989