Jack D. Cowan
1–7 of 7
Journal Articles
Publisher: Journals Gateway
Neural Computation (2010) 22 (2): 377–426.
Published: 01 February 2010
Abstract
Population rate or activity equations are the foundation of a common approach to modeling neural networks. These equations provide mean field dynamics for the firing rate or activity of neurons within a network given some connectivity. The shortcoming of these equations is that they take into account only the average firing rate, while leaving out higher-order statistics like correlations between firing. A stochastic theory of neural networks that includes statistics at all orders was recently formulated. We describe how this theory yields a systematic extension to population rate equations by introducing equations for correlations and appropriate coupling terms. Each level of the approximation yields closed equations; they depend only on the mean and specific correlations of interest, without an ad hoc criterion for doing so. We show in an example of an all-to-all connected network how our system of generalized activity equations captures phenomena missed by the mean field rate equations alone.
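The mean field rate equations the abstract builds on can be sketched for a single all-to-all coupled population with a sigmoidal gain; all parameter values here are illustrative, not taken from the paper:

```python
import numpy as np

def simulate_rate(w=2.0, I=0.5, a0=0.1, dt=0.01, T=20.0):
    """Euler integration of a mean-field population rate equation
    da/dt = -a + f(w*a + I) for an all-to-all coupled population,
    with sigmoidal gain f. Parameters are illustrative."""
    f = lambda x: 1.0 / (1.0 + np.exp(-x))
    a = a0
    for _ in range(int(T / dt)):
        a += dt * (-a + f(w * a + I))
    return a
```

By construction this tracks only the mean activity; the generalized activity equations described in the abstract augment such an equation with coupled equations for correlations.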
Neural Computation (2002) 14 (3): 493–525.
Published: 01 March 2002
Abstract
A mathematical theory of interacting hypercolumns in primary visual cortex (V1) is presented that incorporates details concerning the anisotropic nature of long-range lateral connections. Each hypercolumn is modeled as a ring of interacting excitatory and inhibitory neural populations with orientation preferences over the range 0 to 180 degrees. Analytical methods from bifurcation theory are used to derive nonlinear equations for the amplitude and phase of the population tuning curves in which the effective lateral interactions are linear in the amplitudes. These amplitude equations describe how mutual interactions between hypercolumns via lateral connections modify the response of each hypercolumn to modulated inputs from the lateral geniculate nucleus; such interactions form the basis of contextual effects. The coupled ring model is shown to reproduce a number of orientation-dependent and contrast-dependent features observed in center-surround experiments. A major prediction of the model is that the anisotropy in lateral connections results in a nonuniform modulatory effect of the surround that is correlated with the orientation of the center.
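A single hypercolumn of the kind described — a ring of orientation-tuned populations with cosine recurrent coupling responding to a modulated input — can be sketched numerically. This is a generic ring-model caricature, not the paper's amplitude equations, and every constant is illustrative:

```python
import numpy as np

def ring_tuning(theta_stim=0.0, N=64, J0=-1.0, J1=1.1, c=1.0,
                dt=0.02, steps=2000):
    """Steady-state activity of a ring of orientation-tuned populations.
    Recurrent weights J(th - th') = J0 + J1*cos(2*(th - th')), input
    h(th) = c*(1 + cos(2*(th - theta_stim))), threshold-linear rates.
    Orientations cover [0, pi); all constants are illustrative."""
    theta = np.linspace(0.0, np.pi, N, endpoint=False)
    J = (J0 + J1 * np.cos(2.0 * (theta[:, None] - theta[None, :]))) / N
    h = c * (1.0 + np.cos(2.0 * (theta - theta_stim)))
    r = np.zeros(N)
    for _ in range(steps):  # Euler integration of dr/dt = -r + [J r + h]+
        r += dt * (-r + np.maximum(0.0, J @ r + h))
    return theta, r
```

The resulting tuning curve peaks at the stimulus orientation; in the coupled ring model, lateral input from neighboring hypercolumns enters as an additional modulation of h.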
Neural Computation (2002) 14 (3): 473–491.
Published: 01 March 2002
Abstract
Many observers see geometric visual hallucinations after taking hallucinogens such as LSD, cannabis, mescaline, or psilocybin; on viewing bright flickering lights; on waking up or falling asleep; in “near-death” experiences; and in many other syndromes. Klüver organized the images into four groups called form constants: (I) tunnels and funnels, (II) spirals, (III) lattices, including honeycombs and triangles, and (IV) cobwebs. In most cases, the images are seen in both eyes and move with them. We interpret this to mean that they are generated in the brain. Here, we summarize a theory of their origin in visual cortex (area V1), based on the assumption that the form of the retino-cortical map and the architecture of V1 determine their geometry. (A much longer and more detailed mathematical version has been published in Philosophical Transactions of the Royal Society B, 356 [2001].) We model V1 as the continuum limit of a lattice of interconnected hypercolumns, each comprising a number of interconnected iso-orientation columns. Based on anatomical evidence, we assume that the lateral connectivity between hypercolumns exhibits symmetries, rendering it invariant under the action of the Euclidean group E(2), composed of reflections and translations in the plane, and a (novel) shift-twist action. Using this symmetry, we show that the various patterns of activity that spontaneously emerge when V1's spatially uniform resting state becomes unstable correspond to the form constants when transformed to the visual field using the retino-cortical map. The results are sensitive to the detailed specification of the lateral connectivity and suggest that the cortical mechanisms that generate geometric visual hallucinations are closely related to those used to process edges, contours, surfaces, and textures.
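The role of the retino-cortical map can be illustrated with its standard complex-logarithm approximation: planar activity patterns in cortex become form constants when pulled back to the visual field. The sketch below is illustrative only (the constant k and the stripe pattern are assumptions, not the paper's specification):

```python
import numpy as np

def cortical_stripes_in_visual_field(n=400, k=0.5, x_max=4.0):
    """Under the approximate complex-log retino-cortical map, a cortical
    point (x, y) corresponds to visual-field polar coordinates
    r = exp(k*x), phi = y. Vertical stripes of cortical activity
    (depending on x only) therefore map to concentric rings in the
    visual field -- the tunnel form constant. Constants illustrative."""
    x = np.linspace(0.0, x_max, n)
    y = np.linspace(-np.pi, np.pi, n)
    X, Y = np.meshgrid(x, y)
    stripes = np.cos(2.0 * np.pi * X)   # vertical stripes in cortex
    r, phi = np.exp(k * X), Y           # inverse retino-cortical map
    return r, phi, stripes
```

Oblique cortical stripes map, by the same change of variables, to logarithmic spirals, which is why the unstable cortical planforms line up with Klüver's groups.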
Neural Computation (1998) 10 (7): 1779–1795.
Published: 01 October 1998
Abstract
We propose a model for the lateral connectivity of orientation-selective cells in the visual cortex. We study the properties of the input signal to the visual cortex and find new statistical structures that have not been processed in the retino-geniculate pathway. Using the idea that the system performs redundancy reduction of the incoming signals, we derive the lateral connectivity that will achieve this for a set of orientation-selective local circuits, as well as the complete spatial structure of a network composed of such circuits. We compare the results with various physiological measurements.
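A generic toy version of the redundancy-reduction idea — recurrent lateral weights chosen so that the network's steady state whitens the input covariance — can be sketched as follows. This is a stand-in for the principle, not the paper's derivation of orientation-specific connectivity:

```python
import numpy as np

def lateral_weights(C):
    """Illustrative redundancy reduction: given a symmetric input
    covariance C, return recurrent lateral weights W such that the
    steady state of y = x + W y, namely y = (I - W)^{-1} x, has
    identity covariance. That requires (I - W)^{-1} = C^{-1/2},
    i.e. W = I - C^{1/2}."""
    vals, vecs = np.linalg.eigh(C)                    # C is symmetric PSD
    C_sqrt = vecs @ np.diag(np.sqrt(vals)) @ vecs.T   # matrix square root
    return np.eye(len(C)) - C_sqrt
```

Correlated inputs thus induce inhibitory lateral coupling between similarly tuned units, which is the qualitative shape of connectivity the abstract compares against physiology.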
Neural Computation (1997) 9 (6): 1305–1320.
Published: 15 August 1997
Abstract
A geometric approach to data representation incorporating information theoretic ideas is presented. The task of finding a faithful representation, where the input distribution is evenly partitioned into regions of equal mass, is addressed. For input consisting of mixtures of statistically independent sources, we treat independent component analysis (ICA) as a computational geometry problem. First, we consider the separation of sources with sharply peaked distribution functions, where the ICA problem becomes that of finding high-density directions in the input distribution. Second, we consider the more general problem for arbitrary input distributions, where ICA is transformed into the task of finding an aligned equipartition. By modifying the Kohonen self-organized feature maps, we arrive at neural networks with local interactions that optimize coding while simultaneously performing source separation. The local nature of our approach results in networks with nonlinear ICA capabilities.
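For sources with sharply peaked distributions, "finding high-density directions" can be caricatured by scanning unit vectors in the plane for the projection with maximal normalized fourth moment. This crude kurtosis scan is an illustrative stand-in, not the paper's geometric equipartition construction:

```python
import numpy as np

def sparsest_direction(X, n_angles=360):
    """Scan unit directions in the plane and return the one whose
    projection of the data X (shape [samples, 2]) maximizes the
    normalized fourth moment -- high for sharply peaked (leptokurtic)
    projections, i.e. for directions aligned with a source axis."""
    best_u, best_k = None, -np.inf
    for a in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        u = np.array([np.cos(a), np.sin(a)])
        p = X @ u
        k = np.mean(p**4) / np.mean(p**2)**2  # normalized 4th moment
        if k > best_k:
            best_u, best_k = u, k
    return best_u
```

For a rotated mixture of independent Laplacian sources, the maximizing direction aligns with one of the original source axes.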
Neural Computation (1996) 8 (8): 1653–1676.
Published: 01 November 1996
Abstract
Exploiting local stability, we show what neuronal characteristics are essential to ensure that coherent oscillations are asymptotically stable in a spatially homogeneous network of spiking neurons. Under standard conditions, a necessary and, in the limit of a large number of interacting neighbors, also sufficient condition is that the postsynaptic potential is increasing in time as the neurons fire. If the postsynaptic potential is decreasing, oscillations are bound to be unstable. This is a kind of locking theorem and boils down to a subtle interplay of axonal delays, postsynaptic potentials, and refractory behavior. The theorem also allows for mixtures of excitatory and inhibitory interactions. On the basis of the locking theorem, we present a simple geometric method to verify the existence and local stability of a coherent oscillation.
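The flavor of the stability condition — coherent firing is stable when the postsynaptic potential is still rising as the neurons fire — can be checked numerically for an assumed PSP kernel. This is a cartoon of the criterion, not the theorem's precise statement, and the double-exponential kernel and its time constants are assumptions:

```python
import numpy as np

def psp(t, tau_rise=2.0, tau_fall=8.0):
    """Double-exponential postsynaptic potential kernel (illustrative)."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, np.exp(-t / tau_fall) - np.exp(-t / tau_rise), 0.0)

def locking_stable(period, delay, dt=1e-4):
    """Cartoon of the locking criterion: a coherent oscillation with the
    given period is taken to be stable when the PSP evoked by the
    previous volley (arriving after an axonal delay) is still rising
    at the next firing time, i.e. its slope there is positive."""
    t = period - delay                       # time into the PSP at next firing
    slope = (psp(t + dt) - psp(t - dt)) / (2.0 * dt)
    return bool(slope > 0)
```

Short periods land on the rising flank of the PSP (stable), long periods on the falling flank (unstable), matching the rising-versus-decreasing dichotomy stated in the abstract.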
Neural Computation (1995) 7 (3): 518–528.
Published: 01 May 1995
Abstract
We study the stochastic behavior of a single self-exciting model neuron with additive noise, a system that has bistable stochastic dynamics. We use Langevin and Fokker-Planck equations to obtain analytical expressions for the stationary distribution of activities and for the crossing rate between two stable states. We adjust the parameters in these expressions to fit observed histograms of neural activity, thus obtaining what we call an “effective single neuron” for a given network. We construct an effective single neuron from an activity histogram of a representative hidden neuron in a recurrent learning network. We also compare our result with an effective single neuron previously obtained analytically through the adiabatic elimination approximation.
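A bistable Langevin system of the kind analyzed can be sketched with Euler–Maruyama integration of a double-well model; the quartic potential and noise level below are illustrative choices, not the paper's fitted effective single neuron:

```python
import numpy as np

def simulate_langevin(steps=200_000, dt=0.01, sigma=0.5, seed=0):
    """Euler-Maruyama integration of a bistable Langevin equation
    da = -U'(a) dt + sigma dW with double-well potential
    U(a) = a^4/4 - a^2/2, so the stable states sit near a = +/-1.
    Noise drives occasional crossings between the two wells."""
    rng = np.random.default_rng(seed)
    a = np.empty(steps)
    a[0] = 1.0
    for i in range(1, steps):
        drift = a[i - 1] - a[i - 1] ** 3      # -U'(a)
        a[i] = a[i - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return a
```

Histograms of such a trajectory are bimodal, and fitting the stationary distribution and crossing rate of the corresponding Fokker-Planck equation to observed activity histograms is the calibration step the abstract describes.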