Abstract

Neural continuum networks are an important aspect of the modeling of macroscopic parts of the cortex. Two classes of such networks are considered: voltage and activity based. In both cases, our networks contain an arbitrary number, n, of interacting neuron populations. Spatial, nonsymmetric connectivity functions represent cortico-cortical, local connections, and external inputs represent nonlocal connections. Sigmoidal nonlinearities model the relationship between (average) membrane potential and activity. Departing from most of the previous work in this area, we do not assume the nonlinearity to be singular, that is, represented by the discontinuous Heaviside function. Another important difference from previous work is that we relax the assumption that the domain of definition where we study these networks is infinite, that is, equal to $\mathbb{R}$ or $\mathbb{R}^2$. We explicitly consider the biologically more relevant case of a bounded subset Ω of $\mathbb{R}^q$, a better model of a piece of cortex. The time behavior of these networks is described by systems of integro-differential equations. Using methods of functional analysis, we study the existence and uniqueness of a stationary (i.e., time-independent) solution of these equations in the case of a stationary input. These solutions can be seen as ‘persistent’; they are also sometimes called bumps. We show that under very mild assumptions on the connectivity functions and because we do not use the Heaviside function for the nonlinearities, such solutions always exist. We also give sufficient conditions on the connectivity functions for the solution to be absolutely stable, that is, independent of the initial state of the network. We then study the sensitivity of the solutions to variations of such parameters as the connectivity functions, the sigmoids, the external inputs, and, last but not least, the shape of the domain of existence Ω of the neural continuum networks. These theoretical results are illustrated and corroborated by a large number of numerical experiments in most of the cases 2 ⩽ n ⩽ 3, 2 ⩽ q ⩽ 3.

1.  Introduction

We analyze the ability of neuronal continuum networks to display localized persistent activity, or bumps. This type of activity is related, for example, to working memory, which involves holding and processing information on the timescale of seconds. Experiments in primates have shown that there exist neurons in the prefrontal cortex that have high firing rates during the period the animal is “remembering” the spatial location of an event before using the information being remembered (Colby, Duhamel, & Goldberg, 1995; Funahashi, Bruce, & Goldman-Rakic, 1989; Miller, Erickson, & Desimone, 1996). Realistic models for this type of activity have involved spatially extended networks of coupled neural elements or neural masses and the study of spatially localized areas of high activity in these systems. A neuronal continuum network is first built from a “local” description of the dynamics of a number of interacting neuron populations where the spatial structure of the connections is neglected. This local description can be thought of as representing such a structure as a cortical column (Mountcastle, 1957, 1997; Buxhoeveden & Casanova, 2002). We call it a neural mass (Freeman, 1975). Probably the best-known neural mass model is that of Jansen and Rit (Jansen, Zouridakis, & Brandt, 1993), based on the original work of Lopes da Silva and colleagues (Lopes da Silva, Hoeks, & Zetterberg, 1974; Lopes da Silva, van Rotterdam, Barts, van Heusden, & Burr, 1976) and of van Rotterdam and colleagues (van Rotterdam, Lopes da Silva, van den Ende, Viergever, & Hermans, 1982). A complete analysis of the bifurcation diagram of this model can be found in Grimbert and Faugeras (2006). The model has been used to simulate evoked potentials: EEG activities in normal (Jansen & Rit, 1995) and epileptic patients (Wendling, Bellanger, Bartolomei, & Chauvel, 2000; Wendling, Bartolomei, Bellanger, & Chauvel, 2001). In a similar vein, David and Friston (2003) have used an extension of this model to simulate a large variety of cerebral rhythms (α, β, γ, δ, and θ) in MEG/EEG simulations. Another important class of such models is the one introduced by Wilson and Cowan (1973; see also Hoppenstaedt & Izhikevich, 1997).

These local descriptions are then assembled spatially to form the neuronal continuum network. This continuum network is meant to represent a macroscopic part of the neocortex, for example, a visual area such as V1. The spatial connections are models of cortico-cortical connections. Other, nonlocal connections with, for example, such visual areas as the lateral geniculate nucleus, or V2, are also considered. Other researchers have used several interconnected neural masses to simulate epileptogenic zones (Wendling et al., 2000, 2001; Lopes da Silva et al., 2003) or to study the connectivity between cortical areas (David, Cosmelli, & Friston, 2004). In this letter we consider a continuum of neural masses.

2.  The Models

We briefly discuss local and spatial models.

2.1.  The Local Models.

We consider n interacting populations of neurons such as those shown in Figure 1. The figure is inspired by the work of Alex Thomson (Thomson & Bannister, 2003) and Wolfgang Maass (Haeusler & Maass, 2007). It shows six interacting populations of neurons. Black indicates excitation and gray inhibition. The thickness of the arrows pertains to the strength of the interaction. In this example, the six populations are located in layers 2/3, 4, and 5 of the neocortex. Each population being described by its state (defined below), we derive from first principles the equations that describe the variations over time of the states of the different interacting populations.

Figure 1: A model with six interacting neural populations.

Our derivation follows closely that of Ermentrout (1998). We consider that each neural population i is described by its average membrane potential Vi(t) or its average instantaneous firing rate νi(t), the relation between the two quantities being of the form νi(t) = Si(Vi(t)) (Gerstner & Kistler, 2002; Dayan & Abbott, 2001), where Si is sigmoidal and smooth. The functions Si, i = 1, …, n, satisfy the properties introduced in the following definition:

Definition 1.

For all i = 1, …, n, $|S_i| \leq S_{im}$ (boundedness). We note $S_m = \max_i S_{im}$. For all i = 1, …, n, the derivative $S_i'$ of $S_i$ is positive and bounded by $S_{im}' > 0$ (boundedness of the derivatives). We note $DS_m = \max_i S_{im}'$ and $\mathbf{DS}_m$ the diagonal matrix $\mathrm{diag}(S_{im}')$.

A typical example of a function Si is given in equation 2.1:
$$S_i(v) = \frac{1}{1 + e^{-s_i(v - \theta_i)}}. \qquad (2.1)$$
This function is symmetric with respect to the “threshold” potential θi and varies between 0 and 1. The positive parameter si controls the slope of the ith sigmoid at v = θi.
This function is shown in Figure 2 for the values of the parameters θ = 0 and s = 0.5, 1, 10. We have $S_{im} = 1$ and, for the sigmoid of equation 2.1, $S'_{im} = s_i/4$, the slope at $v = \theta_i$. When $s \to \infty$, $S$ converges to the (translated) Heaviside function $Y(\cdot - \theta)$, where $Y$ is defined by
$$Y(v) = \begin{cases} 0 & \text{if } v < 0, \\ 1 & \text{if } v \geq 0. \end{cases}$$
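For concreteness, here is a minimal numerical sketch of this convergence (Python), assuming the logistic form of equation 2.1; the sampling points and slope values are arbitrary:

```python
import numpy as np

def sigmoid(v, s=1.0, theta=0.0):
    """Sigmoid of equation 2.1: S(v) = 1 / (1 + exp(-s (v - theta)))."""
    x = np.clip(s * (v - theta), -500.0, 500.0)  # avoid overflow for very steep slopes
    return 1.0 / (1.0 + np.exp(-x))

v = np.linspace(-1.0, 1.0, 5)
for s in (0.5, 1.0, 10.0, 1e3):           # the slopes of Figure 2, plus a very steep one
    print(s, np.round(sigmoid(v, s), 3))  # as s grows, values approach the Heaviside step
```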
Figure 2: Three examples of sigmoid functions for different values of the parameter s. See the text.

Neurons in population j are connected to neurons in population i. A single action potential from neurons in population j is seen as a postsynaptic potential $\mathrm{PSP}_{ij}(t - s)$ by neurons in population i, where s is the time of the spike hitting the terminal and t the time after the spike. We neglect the delays due to the distance traveled down the axon by the spikes.

Assuming that they sum linearly, the average membrane potential of population i due to action potentials of population j is
$$V_i(t) = \sum_{k} \mathrm{PSP}_{ij}(t - t_k),$$
where the sum is taken over the arrival times of the spikes produced by the neurons in population j. The number of spikes arriving between t and t+dt is νj(t)dt. Therefore, we have
$$V_i(t) = \int_{-\infty}^{t} \mathrm{PSP}_{ij}(t - s)\,\nu_j(s)\,ds,$$
or, equivalently,
$$V_i(t) = \int_{-\infty}^{t} \mathrm{PSP}_{ij}(t - s)\,S_j(V_j(s))\,ds. \qquad (2.2)$$
The PSPij can depend on several variables in order to account for adaptation and learning, among other examples.

There are two main simplifying assumptions that appear in the literature (Ermentrout, 1998) and produce two different models.

2.1.1.  The Voltage-Based Model.

The assumption (Hopfield, 1984) is that the postsynaptic potential has the same shape no matter which presynaptic population caused it; however, the sign and amplitude may vary. This leads to the relation
$$\mathrm{PSP}_{ij}(t) = w_{ij}\,\mathrm{PSP}_i(t).$$
If wij>0, the population j excites population i, whereas it inhibits it when wij < 0.
Finally, if we assume that $\mathrm{PSP}_i(t) = A_i e^{-t/\tau_i}\,Y(t)$, or equivalently that
$$\tau_i \frac{d\,\mathrm{PSP}_i(t)}{dt} + \mathrm{PSP}_i(t) = \tau_i A_i\,\delta(t), \qquad (2.3)$$
we end up with the following system of ordinary differential equations,
$$\dot{V}_i(t) + \frac{V_i(t)}{\tau_i} = \sum_{j=1}^{n} w_{ij}\,S_j(V_j(t)) + I^{\mathrm{ext}}_i(t), \quad i = 1, \ldots, n, \qquad (2.4)$$
that describes the dynamic behavior of a cortical column. We have incorporated the constants $A_i$ in the weights $w_{ij}$ and added an external current $I^{\mathrm{ext}}_i(t)$ to model the nonlocal connections of population i. We introduce the n × n matrix $\mathbf{W}$ such that $W_{ij} = w_{ij}$ and the function $\mathbf{S}$, such that $\mathbf{S}(\mathbf{x})$ is the vector of coordinates $S_i(x_i)$ if $\mathbf{x} = (x_1, \ldots, x_n)$. We rewrite equation 2.4 in vector form and obtain the following system of n ordinary differential equations,
$$\dot{\mathbf{V}}(t) = -\mathbf{L}\mathbf{V}(t) + \mathbf{W}\,\mathbf{S}(\mathbf{V}(t)) + \mathbf{I}^{\mathrm{ext}}(t), \qquad (2.5)$$
where L is the diagonal matrix L=diag(1/τi).

In terms of units, the left- and right-hand sides of this equation are in units of, say, mV × ms−1. Therefore, Iext, despite its name, is not a current. Note that since S(V) is an activity, its unit is ms−1, and hence W is in mV.
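To make the local model concrete, here is a minimal forward-Euler integration of equation 2.5; all numerical values below are illustrative assumptions, not parameters used elsewhere in this letter:

```python
import numpy as np

# Forward-Euler integration of V' = -L V + W S(V) + I_ext (equation 2.5).
# All parameter values are illustrative only.
tau = np.array([10.0, 8.0])                  # membrane time constants (ms)
L = np.diag(1.0 / tau)
W = np.array([[1.0, -0.5],                   # w_ij > 0: population j excites population i
              [0.8, -0.2]])                  # units of mV, since S(V) is an activity (ms^-1)
I_ext = np.array([0.3, 0.0])                 # external input (mV ms^-1)

def S(V, s=1.0, theta=0.0):
    return 1.0 / (1.0 + np.exp(-s * (V - theta)))

dt, T = 0.1, 200.0                           # time step and horizon (ms)
V = np.zeros(2)
for _ in range(int(T / dt)):
    V = V + dt * (-L @ V + W @ S(V) + I_ext)
print(V)                                     # settles to a stationary state for these values
```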

2.1.2.  The Activity-Based Model.

The assumption is that the shape of a postsynaptic potential (PSP) depends only on the nature of the presynaptic cell, that is,
$$\mathrm{PSP}_{ij}(t) = w_{ij}\,\mathrm{PSP}_j(t).$$
As above, we suppose that PSPi(t) satisfies the differential equation 2.3 and define the time-averaged firing rate to be
$$A_i(t) = \int_{-\infty}^{t} \mathrm{PSP}_i(t - s)\,\nu_i(s)\,ds.$$
A similar derivation yields the following set of n ordinary differential equations:
$$\dot{A}_i(t) + \frac{A_i(t)}{\tau_i} = S_i\!\left(\sum_{j=1}^{n} w_{ij}\,A_j(t) + I^{\mathrm{ext}}_i(t)\right), \quad i = 1, \ldots, n.$$
We include the τis in the sigmoids Si and rewrite this in vector form:
$$\dot{\mathbf{A}}(t) = -\mathbf{L}\mathbf{A}(t) + \mathbf{S}\big(\mathbf{W}\mathbf{A}(t) + \mathbf{I}^{\mathrm{ext}}(t)\big). \qquad (2.6)$$
The units are ms−2 for both sides of the equation. W is expressed in mV × ms, and Iext is in mV.

2.2.  The Continuum Models.

We now combine these local models to form a continuum of neural masses, for example, in the case of a model of a significant part Ω of the cortex. We consider a subset Ω of $\mathbb{R}^q$, q = 1, 2, 3, which we assume to be connected and compact (i.e., closed and bounded). We note |Ω| its Lebesgue measure (length, area, volume). This encompasses several cases of interest.

When q = 1, we deal with one-dimensional sets of neural masses. Although this appears to be of limited biological interest, this is one of the most widely studied cases because of its relative mathematical simplicity and because of the insights one can gain into the more realistic situations.

When q = 2, we discuss properties of two-dimensional sets of neural masses. This is perhaps more interesting from a biological point of view since Ω can be viewed as a piece of cortex where the third dimension, its thickness, is neglected. This case has received far less attention than the previous one, probably because of the increased mathematical difficulty. Note that we could also take into account the curvature of the cortical sheet at the cost of an increase in mathematical difficulty. This is outside the scope of this letter.

Finally q = 3 allows us to discuss properties of volumes of neural masses, such as cortical sheets where their thickness is taken into account (Kandel, Schwartz, & Jessel, 2000; Chalupa & Werner, 2004).

The theoretical results presented in this letter are independent of the value of q.

We note V(r, t) (resp., A(r, t)) the n-dimensional state vector at the point r of the continuum and at time t. We introduce the n × n matrix function W(r, r′), which describes how the neural mass at point r′ influences that at point r at time t. We call W the connectivity matrix function. In particular, W(r, r) = W, the matrix that appears in equations 2.5 and 2.6. More precisely, Wij(r, r′) describes how population j at point r′ influences population i at point r at time t. Equation 2.5 can now be extended to
$$\mathbf{V}_t(\mathbf{r}, t) = -\mathbf{L}\mathbf{V}(\mathbf{r}, t) + \int_\Omega \mathbf{W}(\mathbf{r}, \mathbf{r}')\,\mathbf{S}(\mathbf{V}(\mathbf{r}', t))\,d\mathbf{r}' + \mathbf{I}^{\mathrm{ext}}(\mathbf{r}, t), \qquad (2.7)$$
and equation 2.6 to
$$\mathbf{A}_t(\mathbf{r}, t) = -\mathbf{L}\mathbf{A}(\mathbf{r}, t) + \mathbf{S}\!\left(\int_\Omega \mathbf{W}(\mathbf{r}, \mathbf{r}')\,\mathbf{A}(\mathbf{r}', t)\,d\mathbf{r}' + \mathbf{I}^{\mathrm{ext}}(\mathbf{r}, t)\right). \qquad (2.8)$$

It is important to discuss again the units of the quantities involved in these equations. For equation 2.7, as for equation 2.4, the unit is mV × ms−1 for both sides. Because of the spatial integration, W is in mV × mm$^{-q}$, where q is the dimension of the continuum. To obtain a dimensionless equation, we normalize (i.e., divide both sides of the equation) by the Frobenius norm $\|\mathbf{W}\|_F$ of the connectivity matrix function W (see section A.1 for a definition). Equivalently, we assume that $\|\mathbf{W}\|_F = 1$.
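As an illustration of the spatial term and of the normalization, the following sketch discretizes the integral operator of equation 2.7 on a grid over Ω = [−1, 1] (n = q = 1) and rescales the kernel so that $\|\mathbf{W}\|_F = 1$; the Gaussian kernel is an arbitrary stand-in:

```python
import numpy as np

# Discretization of r -> \int_Omega W(r, r') S(V(r')) dr' on Omega = [-1, 1], n = q = 1.
M = 200
r = np.linspace(-1.0, 1.0, M)
dr = r[1] - r[0]

W = np.exp(-(r[:, None] - r[None, :]) ** 2 / 0.1)  # illustrative Gaussian kernel W(r, r')
W /= np.sqrt(np.sum(W ** 2) * dr * dr)             # normalize so that ||W||_F = 1

def S(V):
    return 1.0 / (1.0 + np.exp(-V))

V = np.random.default_rng(0).normal(size=M)        # an arbitrary state
integral_term = (W * dr) @ S(V)                    # quadrature approximation of the integral
print(integral_term.shape)
```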

We have given elsewhere (Faugeras, Grimbert, & Slotine, 2007)—but see proposition 2 below for completeness—sufficient conditions on W and Iext for equations 2.7 and 2.8 to be well defined and studied the existence and stability of their solutions for general external currents. In this letter, we analyze in detail the case of stationary external currents (independent of the time variable) and investigate the existence and stability of the corresponding stationary solutions of 2.7 and 2.8.

A significant amount of work has been devoted to this or closely related problems, starting perhaps with the pioneering work of Wilson and Cowan (1973). A fairly recent review of this work, and much more, can be found in Coombes (2005). Amari (1977) investigated the problem in the case n = q = 1 when the sigmoid function is approximated by a Heaviside function and the connectivity function has a Mexican hat shape. He proved the existence of stable bumps in this case. His work has been extended to different firing rate and connectivity functions (Gutkin, Ermentrout, & O'Sullivan, 2000; Laing, Troy, Gutkin, & Ermentrout, 2002; Laing & Troy, 2003; Rubin & Troy, 2004; Guo & Chow, 2005a, 2005b).

The case n = 1, q = 2 has been considered by several authors, including Pinto and Ermentrout (2001a, 2001b) for general firing rate functions and gaussian-like connectivity functions, Blomquist, Wyller, and Einevoll (2005) when the firing rate functions are approximated by Heaviside functions, and more recently, for translation-invariant kernels including some discussion of sigmoids, Owen, Laing, and Coombes (2007).

Extending these analyses to a two- or three-dimensional continuum is difficult because of the increase in the degrees of freedom in the choice of the connectivity function. The case n = 2, q = 1 has been studied in Werner and Richter (2001) and Bressloff (2005) when the firing rate functions are approximated by Heaviside functions and the connectivity function is circularly symmetric, while the case n = 2, q = 2 is mentioned as difficult in Doubrovinski (2005).

In all of these contributions, the proof of the existence of a bump solution is based on Amari's original argument (1977), which works only when q = 1 and the firing rate function is approximated by a Heaviside function. Kishimoto and Amari (1979) and Kubota and Aihara (2005) consider sigmoids in the case q = 1 for translation-invariant symmetric connectivity functions. Solutions are usually constructed using a variant of the singular perturbation method (Pinto & Ermentrout, 2001b). Sufficient conditions for their stability are obtained by a linear stability analysis, which in general requires the use of Heaviside functions instead of sigmoids.

The approach that we describe in this letter is a significant departure from previous ones. By using simple ideas of functional analysis, we are able to:

  • Prove the existence and uniqueness of a stationary solution to equations 2.7 and 2.8 for any dimensions n and q, arbitrary connectivity functions, and general firing rate functions.

  • Obtain very simple conditions for the absolute stability of the solution in terms of the spectrum of the differential of the nonlinear operator that appears on the right-hand side of equations 2.7 and 2.8.

  • Construct a numerical approximation, as accurate as needed, of the solution, when it exists, for any stationary input.

  • Characterize the sensitivity of the solutions to variations of the parameters, including the shape of the domain Ω.

To be complete, let us point out that equations of the type 2.7 and 2.8 have been studied in pure mathematics (see e.g., Hazewinkel, 2001). They are of the Hammerstein type (Hammerstein, 1930; Tricomi, 1985). This type of equation has received some recent attention (see Appell & Chen, 2006), and progress has been made toward a better understanding of their solutions. Our contributions are the articulation of the models of networks of neural masses with this type of equation, the characterization of persistent activity in these networks as fixed points of Hammerstein equations, the proof of the existence of solutions, the characterization of their stability, and the analysis of their sensitivity to variations of the parameters involved in the equations.

The rest of the letter is organized as follows. In section 3, we prove that under some mild hypotheses about the connectivity matrix, because Ω is bounded, there always exist stationary solutions to the neural field equations, 2.7 and 2.8, for stationary inputs. In section 4, we provide sufficient conditions for these stationary solutions to be stable. Section 5 provides numerical support for the theorems of the previous two sections: because we know that there exist stable solutions, we can derive a robust and accurate numerical scheme to compute them for various values of the parameters. In section 6, we probe the sensitivity of the solutions to variations of these parameters. This is a first step in the direction of the study of their bifurcations, which will be the topic of a forthcoming paper. In section 7, we summarize our results and open some perspectives.

3.  Existence of Stationary Solutions

In this section we deal with the problem of the existence of stationary solutions to equations 2.7 and 2.8 for a given stationary external current Iext.

As indicated in the previous section, we use functional analysis to solve this problem. Let $\mathcal{F}$ be the set $L^2_n(\Omega)$ of square integrable functions from Ω to $\mathbb{R}^n$. This is a Hilbert, hence a Banach, space for the usual inner product,
$$\langle \mathbf{V}_1, \mathbf{V}_2 \rangle = \int_\Omega \mathbf{V}_1(\mathbf{r}) \cdot \overline{\mathbf{V}}_2(\mathbf{r})\,d\mathbf{r},$$
where $\overline{\mathbf{V}}$ is the complex conjugate of the vector $\mathbf{V}$. This inner product induces the norm $\|\cdot\|_{\mathcal{F}}$ (see section A.1). $\mathcal{F}$ is the state space. Another important space is $L^2_{n\times n}(\Omega \times \Omega)$, the space of square integrable n × n matrices (see section A.1 for a precise definition). We assume that the connectivity matrix functions $\mathbf{W}(\cdot, \cdot)$ are in this space (see propositions 1 and 2 below).
We also identify $L^2_{n\times n}(\Omega \times \Omega)$ with $\mathcal{L}(\mathcal{F})$ (the space of continuous linear operators on $\mathcal{F}$) as follows. If $\mathbf{W} \in L^2_{n\times n}(\Omega \times \Omega)$, it defines a linear mapping
$$\mathbf{W}\cdot : \mathcal{F} \to \mathcal{F}, \qquad \mathbf{X} \mapsto \mathbf{W}\cdot\mathbf{X} = \int_\Omega \mathbf{W}(\cdot, \mathbf{r}')\,\mathbf{X}(\mathbf{r}')\,d\mathbf{r}'.$$
For example, this allows us to write equations 2.7 and 2.8:
$$\mathbf{V}_t = -\mathbf{L}\mathbf{V} + \mathbf{W}\cdot\mathbf{S}(\mathbf{V}) + \mathbf{I}^{\mathrm{ext}}, \qquad \mathbf{A}_t = -\mathbf{L}\mathbf{A} + \mathbf{S}(\mathbf{W}\cdot\mathbf{A} + \mathbf{I}^{\mathrm{ext}}).$$

We first recall some results on the existence of a solution to equations 2.7 and 2.8 that will be used in the sequel.

We denote by $J$ a closed interval of the real line containing 0. A state vector $\mathbf{X}(\mathbf{r}, t)$ is a mapping $\mathbf{X} : J \to \mathcal{F}$, and equations 2.7 and 2.8 are formally recast as an initial value problem:
$$\begin{cases} \mathbf{X}'(t) = \mathbf{f}(t, \mathbf{X}(t)), \\ \mathbf{X}(0) = \mathbf{X}_0, \end{cases} \qquad (3.1)$$
where $\mathbf{X}_0$ is an element of $\mathcal{F}$ and the function $\mathbf{f}$ from $J \times \mathcal{F}$ to $\mathcal{F}$ is defined by the right-hand side of equation 2.7, in which case we call it $\mathbf{f}_v$, or equation 2.8, in which case we call it $\mathbf{f}_a$. In other words, equations 2.7 and 2.8 become differential equations defined on the Hilbert space $\mathcal{F}$.

We need the following two propositions, which we quote without proof (Faugeras et al., 2007):

Proposition 1.

If the following two hypotheses are satisfied,

  1. The connectivity function W is in L2n×n(Ω × Ω) (see section A.1),

  2. The external current $\mathbf{I}^{\mathrm{ext}}$ is in $C(J; \mathcal{F})$, the set of continuous functions from $J$ to $\mathcal{F}$,

then the mappings $\mathbf{f}_v$ and $\mathbf{f}_a$ are from $J \times \mathcal{F}$ to $\mathcal{F}$, continuous, and Lipschitz continuous with respect to their second argument, uniformly with respect to the first.

Proposition 2.

If the following two hypotheses are satisfied,

  1. The connectivity function W is in L2n×n(Ω × Ω),

  2. The external current $\mathbf{I}^{\mathrm{ext}}$ is in $C(J; \mathcal{F})$, the set of continuous functions from $J$ to $\mathcal{F}$,

then for any function $\mathbf{X}_0$ in $\mathcal{F}$, there is a unique solution $\mathbf{X}$, defined on $\mathbb{R}$ (and not only on $J$) and continuously differentiable, of the abstract initial value problem, equation 3.1, for $\mathbf{f} = \mathbf{f}_v$ and $\mathbf{f} = \mathbf{f}_a$.

This proposition says that given the two hypotheses and the initial condition, there exists a unique solution to equation 2.7 or 2.8 and that this solution is in $C^1(\mathbb{R}; \mathcal{F})$, the set of continuously differentiable functions from $\mathbb{R}$ to $\mathcal{F}$.

We now turn our attention to a special type of solution of equations 2.7 and 2.8, corresponding to stationary external currents. We call these solutions, when they exist, stationary solutions. The currents $\mathbf{I}^{\mathrm{ext}}$ are simply in $\mathcal{F}$.

A stationary solution of equation 2.7 or 2.8 is defined by
$$\mathbf{X} = \mathbf{f}^L(\mathbf{X}), \qquad (3.2)$$
where the function $\mathbf{f}^L : \mathcal{F} \to \mathcal{F}$ is equal to $\mathbf{f}^L_v$ defined by
$$\mathbf{f}^L_v(\mathbf{V}) = \mathbf{W}^L \cdot \mathbf{S}(\mathbf{V}) + \mathbf{I}^L_{\mathrm{ext}}, \qquad (3.3)$$
or to $\mathbf{f}^L_a$ defined by
$$\mathbf{f}^L_a(\mathbf{A}) = \mathbf{S}^L(\mathbf{W} \cdot \mathbf{A} + \mathbf{I}_{\mathrm{ext}}), \qquad (3.4)$$
where $\mathbf{W}^L = \mathbf{L}^{-1}\mathbf{W}$, $\mathbf{S}^L = \mathbf{L}^{-1}\mathbf{S}$, and $\mathbf{I}^L_{\mathrm{ext}} = \mathbf{L}^{-1}\mathbf{I}_{\mathrm{ext}}$.

We now recall:

Definition 2.

A continuous mapping $\mathbf{M} : \mathcal{F} \to \mathcal{F}$ (linear or nonlinear) is called compact provided that for each bounded subset $\mathcal{B}$ of $\mathcal{F}$, the set $\mathbf{M}(\mathcal{B})$ is relatively compact; that is, its closure is compact.

We then consider the nonlinear mapping $\mathbf{g}^L_v : \mathcal{F} \to \mathcal{F}$,
$$\mathbf{g}^L_v(\mathbf{V}) = \mathbf{W}^L \cdot \mathbf{S}(\mathbf{V}), \qquad (3.5)$$
and the linear mappings $\mathbf{g}_a$ and $\mathbf{g}^L_a$,
$$\mathbf{g}_a(\mathbf{A}) = \mathbf{W} \cdot \mathbf{A}, \qquad (3.6)$$
$$\mathbf{g}^L_a(\mathbf{A}) = \mathbf{W}^L \cdot \mathbf{A}. \qquad (3.7)$$
We have the following:

Proposition 3.

If $\mathbf{W} \in L^2_{n\times n}(\Omega \times \Omega)$, $\mathbf{g}^L_v$ and $\mathbf{g}^L_a$ are compact operators of $\mathcal{F}$.

Proof.

We know from proposition 1 that $\mathbf{g}^L_v$ is continuous, and we prove that for each bounded sequence $\{\mathbf{V}_n\}_{n=1}^{\infty}$ of $\mathcal{F}$, there exists a subsequence $\{\mathbf{V}_{n_j}\}$ such that $\{\mathbf{g}^L_v(\mathbf{V}_{n_j})\}$ is convergent in $\mathcal{F}$.

Because of definition 1 of $\mathbf{S}$, the sequence $\{\mathbf{A}_n = \mathbf{S}(\mathbf{V}_n)\}_{n=1}^{\infty}$ is bounded in $\mathcal{F}$ by $C = S_m\sqrt{n|\Omega|}$ (see lemma 2). We prove that there exists a subsequence $\{\mathbf{A}_{n_j}\}$ such that $\{\mathbf{W}^L \cdot \mathbf{A}_{n_j}\}$ converges in $\mathcal{F}$.

Since $\mathcal{F}$ is separable, its unit ball is weakly compact, and because $\{\mathbf{A}_n\}_{n=1}^{\infty}$ is bounded, there exists a subsequence $\{\mathbf{A}_{n_j}\}$ of $\{\mathbf{A}_n\}_{n=1}^{\infty}$ that converges weakly in $\mathcal{F}$ toward $\mathbf{A}$. Because of Fubini's theorem, for almost all $\mathbf{r} \in \Omega$ (noted almost surely), the function $\mathbf{r}' \to \mathbf{W}^L(\mathbf{r}, \mathbf{r}')$ is in $\mathcal{F}$. Therefore, almost surely, $\mathbf{W}^L \cdot \mathbf{A}_{n_j}(\mathbf{r}) \to \mathbf{W}^L \cdot \mathbf{A}(\mathbf{r})$.

Since $\|\mathbf{A}_{n_j}\|_{\mathcal{F}} \leq C$, $\mathbf{A}$ is also bounded by $C$ in $\mathcal{F}$. It is easy to show that $|\mathbf{W}^L \cdot \mathbf{A}_{n_j}(\mathbf{r})| \leq C\,\|\mathbf{W}^L(\mathbf{r}, \cdot)\|_{\mathcal{F}}$, and we can apply Lebesgue's dominated convergence theorem to the sequence $\mathbf{W}^L \cdot \mathbf{A}_{n_j}(\mathbf{r}) - \mathbf{W}^L \cdot \mathbf{A}(\mathbf{r})$ and conclude that $\|\mathbf{W}^L \cdot \mathbf{A}_{n_j} - \mathbf{W}^L \cdot \mathbf{A}\|_{\mathcal{F}} \to 0$, that is, $\{\mathbf{g}^L_v(\mathbf{V}_{n_j})\}$ is convergent in $\mathcal{F}$.

A small variation of the proof shows that gLa is compact.

From proposition 3 follows:

Proposition 4.

Under the hypotheses of proposition 3, if $\mathbf{I}_{\mathrm{ext}} \in \mathcal{F}$, $\mathbf{f}^L_v$ and $\mathbf{f}^L_a$ are compact operators of $\mathcal{F}$.

Proof.

The operators $\mathbf{X} \to \mathbf{I}^L_{\mathrm{ext}}$ and $\mathbf{X} \to \mathbf{I}_{\mathrm{ext}}$ are clearly compact under the hypothesis $\mathbf{I}_{\mathrm{ext}} \in \mathcal{F}$; therefore, $\mathbf{f}^L_v$ is the sum of two compact operators, hence compact. For the same reason, $\mathbf{g}_a + \mathbf{I}_{\mathrm{ext}}$ is also compact, and so is $\mathbf{f}^L_a = \mathbf{S}^L \circ (\mathbf{g}_a + \mathbf{I}_{\mathrm{ext}})$ because $\mathbf{S}^L$ is smooth and bounded.

We can now prove:

Theorem 1.

If $\mathbf{W} \in L^2_{n\times n}(\Omega \times \Omega)$ and $\mathbf{I}_{\mathrm{ext}} \in \mathcal{F}$, there exists a stationary solution of equations 2.7 and 2.8.

Proof.

A stationary solution of equation 2.7 (resp. of equation 2.8) is a fixed point of fLv (resp. fLa).

Define the set $\mathcal{C}_v = \{\mathbf{V} \in \mathcal{F} : \mathbf{V} = \lambda \mathbf{f}^L_v(\mathbf{V}) \text{ for some } 0 \leq \lambda \leq 1\}$. Because of lemma 2, for all $\mathbf{V} \in \mathcal{C}_v$, we have
$$\|\mathbf{V}\|_{\mathcal{F}} \leq \lambda \big(\|\mathbf{W}^L \cdot \mathbf{S}(\mathbf{V})\|_{\mathcal{F}} + \|\mathbf{I}^L_{\mathrm{ext}}\|_{\mathcal{F}}\big) \leq \|\mathbf{W}^L\|_F\,S_m\sqrt{n|\Omega|} + \|\mathbf{I}^L_{\mathrm{ext}}\|_{\mathcal{F}}.$$
Hence, $\mathcal{C}_v$ is bounded.

Similarly, define the set $\mathcal{C}_a = \{\mathbf{A} \in \mathcal{F} : \mathbf{A} = \lambda \mathbf{f}^L_a(\mathbf{A}) \text{ for some } 0 \leq \lambda \leq 1\}$. Because of lemma 2, for all $\mathbf{A} \in \mathcal{C}_a$, we have $\|\mathbf{A}\|_{\mathcal{F}} \leq \sqrt{n|\Omega|}\,\max_i \tau_i S_{im}$; hence, $\mathcal{C}_a$ is bounded.

The conclusion follows from Schaefer's fixed-point theorem (Evans, 1998).

Note that in the proofs of proposition 4 and theorem 1, the assumption that Ω is bounded (i.e., that |Ω| is finite) is essential.

4.  Stability of the Stationary Solutions

In this section we give a sufficient condition on the connectivity matrix W to guarantee the stability of the stationary solutions to equations 2.7 and 2.8.

4.1.  The Voltage-Based Model.

We define the “corrected maximal” connectivity function $\mathbf{W}_{cm}(\mathbf{r}, \mathbf{r}')$ by $\mathbf{W}_{cm} = \mathbf{W}\,\mathbf{DS}_m$, where $\mathbf{DS}_m$ is defined in definition 1. We also define the corresponding linear operator $\mathbf{h}_m : \mathcal{F} \to \mathcal{F}$,
$$\mathbf{h}_m(\mathbf{V})(\mathbf{r}) = \int_\Omega \mathbf{W}_{cm}(\mathbf{r}, \mathbf{r}')\,\mathbf{V}(\mathbf{r}')\,d\mathbf{r}',$$
which is compact according to proposition 3. Its adjoint, noted $\mathbf{h}^*_m$, is defined1 by
$$\mathbf{h}^*_m(\mathbf{V})(\mathbf{r}) = \int_\Omega \mathbf{W}^T_{cm}(\mathbf{r}', \mathbf{r})\,\mathbf{V}(\mathbf{r}')\,d\mathbf{r}',$$
and is also compact. Hence the symmetric part $\mathbf{h}^s_m = \frac{1}{2}(\mathbf{h}_m + \mathbf{h}^*_m)$, the sum of two compact operators, is also compact. Furthermore, we have $\langle \mathbf{V}, \mathbf{h}_m(\mathbf{V})\rangle = \langle \mathbf{V}, \mathbf{h}^s_m(\mathbf{V})\rangle$, as can be easily verified. It is also self-adjoint since, clearly, $\mathbf{h}^s_m = \mathbf{h}^{s*}_m$.

We recall the following property of the spectrum of a compact self-adjoint operator in a Hilbert space (see, e.g., Dieudonné 1960).

Proposition 5.

The spectrum of a compact, self-adjoint operator of a Hilbert space is countable and real. Each nonzero spectral value is an eigenvalue, and the dimension of the corresponding eigenspace is finite.

We have the following:

Theorem 2.
A sufficient condition for the stability of a stationary solution to equation 2.7 is that all the eigenvalues of the linear compact, self-adjoint operator $\mathbf{h}^{L,s}_m$ be less than 1, where $\mathbf{h}^{L,s}_m$ is defined by
$$\mathbf{h}^{L,s}_m = \frac{1}{2}\big(\mathbf{h}^L_m + \mathbf{h}^{L*}_m\big),$$
where $\mathbf{h}^{L,s}_m$ is the symmetric part of the linear compact operator:
$$\mathbf{h}^L_m(\mathbf{V})(\mathbf{r}) = \int_\Omega \mathbf{L}^{-1}\mathbf{W}_{cm}(\mathbf{r}, \mathbf{r}')\,\mathbf{V}(\mathbf{r}')\,d\mathbf{r}'.$$

Proof.

The proof of this theorem is a generalization to the continuum case of a result obtained by Matsuoka (1992).

Let us note $\tilde{\mathbf{S}}$ the function $(\mathbf{DS}_m)^{-1}\mathbf{S}$ and rewrite equation 2.7 for a homogeneous input $\mathbf{I}_{\mathrm{ext}}$ as follows:
$$\mathbf{V}_t(\mathbf{r}, t) = -\mathbf{L}\mathbf{V}(\mathbf{r}, t) + \int_\Omega \mathbf{W}_{cm}(\mathbf{r}, \mathbf{r}')\,\tilde{\mathbf{S}}(\mathbf{V}(\mathbf{r}', t))\,d\mathbf{r}' + \mathbf{I}_{\mathrm{ext}}.$$
Let $\mathbf{U}$ be a stationary solution of equation 2.7. Also let $\mathbf{V}$ be the unique solution of the same equation with some initial condition $\mathbf{V}(0) = \mathbf{V}_0$ (see proposition 2). We introduce the new function $\mathbf{X} = \mathbf{V} - \mathbf{U}$, which satisfies
$$\mathbf{X}_t(\mathbf{r}, t) = -\mathbf{L}\mathbf{X}(\mathbf{r}, t) + \int_\Omega \mathbf{W}_{cm}(\mathbf{r}, \mathbf{r}')\,\boldsymbol{\Theta}(\mathbf{X}(\mathbf{r}', t))\,d\mathbf{r}',$$
where the vector $\boldsymbol{\Theta}(\mathbf{X})$ is given by $\boldsymbol{\Theta}(\mathbf{X}) = \tilde{\mathbf{S}}(\mathbf{X} + \mathbf{U}) - \tilde{\mathbf{S}}(\mathbf{U})$. Consider now the functional
$$\Delta(\mathbf{X}) = \sum_{i=1}^{n} \int_\Omega \left(\int_0^{X_i(\mathbf{r}, t)} \Theta_i(z)\,dz\right) d\mathbf{r}.$$
We note that
$$0 \leq \int_0^{x} \Theta_i(z)\,dz \leq x\,\Theta_i(x) \quad \text{for all } x.$$
This is because (Taylor expansion with integral remainder)
$$\Theta_i(z) = z \int_0^1 \Theta_i'(\zeta z)\,d\zeta,$$
and, by construction of the vector $\boldsymbol{\Theta}$, $0 < \Theta_i' \leq 1$. This implies that the functional $\Delta(\mathbf{X})$ is strictly positive for all $\mathbf{X} \neq 0$ and $\Delta(0) = 0$. It also implies, and this is used in the sequel, that $z\,\Theta_i(z) \geq \Theta_i(z)^2$.
The time derivative of $\Delta$ is readily obtained:
$$\dot{\Delta}(\mathbf{X}) = \langle \boldsymbol{\Theta}(\mathbf{X}), \mathbf{X}_t \rangle.$$
We replace $\mathbf{X}_t(\mathbf{r}, t)$ by its value in this expression to obtain
$$\dot{\Delta}(\mathbf{X}) = -\langle \boldsymbol{\Theta}(\mathbf{X}), \mathbf{L}\mathbf{X}\rangle + \langle \boldsymbol{\Theta}(\mathbf{X}), \mathbf{h}_m(\boldsymbol{\Theta}(\mathbf{X}))\rangle.$$
Because of a previous remark, we have
$$\langle \boldsymbol{\Theta}(\mathbf{X}), \mathbf{L}\mathbf{X}\rangle \geq \langle \boldsymbol{\Theta}(\mathbf{X}), \mathbf{L}\boldsymbol{\Theta}(\mathbf{X})\rangle,$$
and this provides an upper bound for $\dot{\Delta}(\mathbf{X})$:
$$\dot{\Delta}(\mathbf{X}) \leq \langle \boldsymbol{\Theta}(\mathbf{X}), (\mathbf{h}^s_m - \mathbf{L})\,\boldsymbol{\Theta}(\mathbf{X})\rangle,$$
and the conclusion follows.

Note that we have the following:

Corollary 1.

If the condition of theorem 2 is satisfied, the stationary solution of equation 2.7 is unique.

Proof.

Indeed, the result of theorem 2 is independent of the particular stationary solution U that is chosen in the proof.

4.2.  The Activity-Based Model.

We now give a sufficient condition for the stability of a solution to equation 2.8. We define the “maximal corrected” connectivity matrix function $\mathbf{W}_{mc} = \mathbf{DS}_m\,\mathbf{W}$ and the linear compact operator $\mathbf{k}_m$ from $\mathcal{F}$ to $\mathcal{F}$:
$$\mathbf{k}_m(\mathbf{A})(\mathbf{r}) = \int_\Omega \mathbf{W}_{mc}(\mathbf{r}, \mathbf{r}')\,\mathbf{A}(\mathbf{r}')\,d\mathbf{r}'.$$

Theorem 3.
A sufficient condition for the stability of a solution to equation 2.8 is that all the eigenvalues of the linear compact operator $\mathbf{k}^L_m$ be of a magnitude less than 1, where $\mathbf{k}^L_m$ is defined by
$$\mathbf{k}^L_m(\mathbf{A})(\mathbf{r}) = \int_\Omega \mathbf{DS}_m\,\mathbf{W}^L(\mathbf{r}, \mathbf{r}')\,\mathbf{A}(\mathbf{r}')\,d\mathbf{r}'.$$

Proof.
Let $\mathbf{U}$ be a stationary solution of equation 2.8 for a stationary external current $\mathbf{I}_{\mathrm{ext}}(\mathbf{r})$. As in the proof of theorem 2, we introduce the new function $\mathbf{X} = \mathbf{V} - \mathbf{U}$, where $\mathbf{V}$ is the unique solution of the same equation with initial condition $\mathbf{V}(0) = \mathbf{V}_0$, an element of $\mathcal{F}$. We have
$$\mathbf{X}_t(\mathbf{r}, t) = -\mathbf{L}\mathbf{X}(\mathbf{r}, t) + \mathbf{S}\big(\mathbf{W}\cdot(\mathbf{X} + \mathbf{U}) + \mathbf{I}_{\mathrm{ext}}\big) - \mathbf{S}\big(\mathbf{W}\cdot\mathbf{U} + \mathbf{I}_{\mathrm{ext}}\big).$$
Using a first-order Taylor expansion with an integral remainder, this equation can be rewritten as
$$\mathbf{X}_t(\mathbf{r}, t) = -\mathbf{L}\mathbf{X}(\mathbf{r}, t) + \left(\int_0^1 \mathbf{DS}\big(\mathbf{W}\cdot(\zeta\mathbf{X} + \mathbf{U}) + \mathbf{I}_{\mathrm{ext}}\big)\,d\zeta\right)\mathbf{W}\cdot\mathbf{X}(\mathbf{r}, t).$$
Consider now the functional $\Delta(\mathbf{X}) = \frac{1}{2}\|\mathbf{X}\|^2_{\mathcal{F}}$. Its time derivative is
$$\dot{\Delta}(\mathbf{X}) = \langle \mathbf{X}, \mathbf{X}_t \rangle.$$
We replace $\mathbf{X}_t(\mathbf{r}, t)$ by its value in this expression to obtain
$$\dot{\Delta}(\mathbf{X}) = -\langle \mathbf{X}, \mathbf{L}\mathbf{X}\rangle + \langle \mathbf{X}, \boldsymbol{\sigma}_m(\mathbf{X})\,\mathbf{DS}_m\,\mathbf{W}\cdot\mathbf{X}\rangle,$$
where the nonlinear operator $\boldsymbol{\sigma}_m$ is defined by
$$\boldsymbol{\sigma}_m(\mathbf{X})(\mathbf{r}, t) = \left(\int_0^1 \mathbf{DS}\big(\mathbf{W}\cdot(\zeta\mathbf{X} + \mathbf{U}) + \mathbf{I}_{\mathrm{ext}}\big)\,d\zeta\right)\mathbf{DS}_m^{-1},$$
a diagonal matrix whose diagonal elements are between 0 and 1. We rewrite $\dot{\Delta}$ in a slightly different manner, introducing the operator $\mathbf{k}^L_m$:
$$\dot{\Delta}(\mathbf{X}) = \langle \mathbf{X}, \mathbf{L}\big(-\mathbf{X} + \boldsymbol{\sigma}_m(\mathbf{X})\,\mathbf{k}^L_m(\mathbf{X})\big)\rangle.$$
From the Cauchy-Schwarz inequality and the property of $\boldsymbol{\sigma}_m(\mathbf{X})$, we obtain
$$\dot{\Delta}(\mathbf{X}) \leq \langle \mathbf{X}, \mathbf{L}\mathbf{X}\rangle^{1/2}\Big(\langle \mathbf{k}^L_m(\mathbf{X}), \mathbf{L}\,\mathbf{k}^L_m(\mathbf{X})\rangle^{1/2} - \langle \mathbf{X}, \mathbf{L}\mathbf{X}\rangle^{1/2}\Big).$$
A sufficient condition for $\dot{\Delta}(\mathbf{X})$ to be negative is therefore that $\langle \mathbf{k}^L_m(\mathbf{Y}), \mathbf{L}\,\mathbf{k}^L_m(\mathbf{Y})\rangle < \langle \mathbf{Y}, \mathbf{L}\mathbf{Y}\rangle$ for all $\mathbf{Y}$.
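In a discretized setting, the sufficient conditions of theorems 2 and 3 reduce to eigenvalue computations on finite matrices. Here is a sketch for n = q = 1 with an illustrative kernel, time constant, and maximal slope; it checks the conditions numerically rather than verifying any example of this letter:

```python
import numpy as np

# Numerical check of the stability conditions of theorems 2 and 3 (n = q = 1, discretized).
M = 200
r = np.linspace(-1.0, 1.0, M)
dr = r[1] - r[0]
W = np.exp(-(r[:, None] - r[None, :]) ** 2 / 0.1)
W /= np.sqrt(np.sum(W ** 2) * dr * dr)            # enforce ||W||_F = 1

tau, DSm = 10.0, 0.25                             # illustrative time constant and slope bound DS_m
H = (DSm / tau) * W * dr                          # matrix of the corrected operator on the grid

# Theorem 2 (voltage based): all eigenvalues of the symmetric part must be less than 1.
print("max eigenvalue (symmetric part):", np.linalg.eigvalsh(0.5 * (H + H.T)).max())

# Theorem 3 (activity based): all eigenvalue magnitudes must be less than 1.
print("max |eigenvalue|:", np.abs(np.linalg.eigvals(H)).max())
```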

5.  Numerical Experiments

In this section and the next, we investigate the question of effectively (i.e., numerically) computing stationary solutions of equation 2.7, which is equivalent to computing solutions of equation 3.2. Similar results are obtained for equation 2.8.

In all of our numerical experiments, we assume the sigmoidal functions Si, i = 1, …, n introduced in definition 1 to be of the form 2.1.

5.1.  Algorithm.

We now explain how to compute a fixed point Vf of equation 3.3 in which we drop for simplicity the upper index L and the lower index ext:
$$\mathbf{V}_f = \mathbf{W} \cdot \mathbf{S}(\mathbf{V}_f) + \mathbf{I}. \qquad (5.1)$$
The method is iterative and based on Banach's fixed-point theorem (Evans, 1998):

Theorem 4.
Let $X$ be a Banach space and $M : X \to X$ a nonlinear mapping such that
$$\|M(x_1) - M(x_2)\| \leq k\,\|x_1 - x_2\|, \quad 0 \leq k < 1, \quad \forall x_1, x_2 \in X.$$
Such a mapping is said to be contracting. $M$ has a unique fixed point $x_f$, and for all $x_0 \in X$, the sequence defined by $x_{p+1} = M(x_p)$ converges geometrically to $x_f$.

Note that this method allows us to compute the solution of equation 3.2 only when it admits a unique solution and $\mathbf{f}$ is contracting. However, equation 3.2 could admit more than one solution (recall that it always has at least one; see theorem 1), or $\mathbf{f}$ could be noncontracting. Another method has to be found in these cases.

In our case, $\mathcal{F} = L^2_n(\Omega)$, where Ω is an open, bounded set of $\mathbb{R}^q$, and $M = \mathbf{f}_v$. According to lemmas 2 and 1, if $DS_m\|\mathbf{W}\|_F < 1$, $\mathbf{f}_v$ is contracting.

Each element of the sequence $\{\mathbf{V}^p\}_{p \geq 0}$ is approximated by a piecewise constant function $\mathbf{V}^{p,h}$, where h is the step of the approximation, defined by a finite number of points $\mathbf{r}_{h,j}$. In order to avoid difficulties linked to the fact that $\mathbf{V}^{p,h} \in L^2_n(\Omega)$ is defined only almost everywhere, we assume that W and I are smooth. This is not a restriction because every function of $L^2_n(\Omega)$ can be approximated by a smooth function. As the bump solution is smooth as soon as W and I are smooth, we can use the multidimensional gaussian quadrature formula (Press, Flannery, Teukolsky, & Vetterling, 1988; Stoer & Bulirsch, 1972) with N points (in the examples below, usually N = 20) on each axis. In order to interpolate the values of the bump from a finite (and small) number of its values $\mathbf{V}^p(\mathbf{r}_{h,j,\mathrm{Gauss}})$, we use Nyström's method (Hazewinkel, 2001), as follows:
$$\mathbf{V}^{p,h}(\mathbf{r}) = \sum_{j} g_j\,\mathbf{W}(\mathbf{r}, \mathbf{r}_{h,j,\mathrm{Gauss}})\,\mathbf{S}\big(\mathbf{V}^{p,h}(\mathbf{r}_{h,j,\mathrm{Gauss}})\big) + \mathbf{I}(\mathbf{r}),$$
where the $g_j$s are the weights of the gaussian quadrature method and the points $\mathbf{r}_{h,j,\mathrm{Gauss}}$ are chosen according to the Gauss quadrature formula. It is to be noted that the choice of a particular quadrature formula can make a huge difference in accuracy and computing time (see section A.2).
Having chosen the type of quadrature, we solve with Banach's theorem
$$\mathbf{V}^h_f = \mathbf{W}^h \cdot \mathbf{S}(\mathbf{V}^h_f) + \mathbf{I}, \qquad (5.2)$$
that is, we compute the fixed point at the level of approximation defined by h.
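The whole scheme fits in a few lines for n = q = 1: Gauss-Legendre nodes discretize the integral, the Picard iteration of Banach's theorem computes $\mathbf{V}^h_f$ at the nodes, and Nyström's formula interpolates the bump anywhere in Ω. The kernel and current below are illustrative assumptions:

```python
import numpy as np

# Picard iteration for V = W . S(V) + I (equation 5.1), Gauss-Legendre quadrature on
# Omega = [-1, 1], n = q = 1; the kernel, current, and tolerance are illustrative.
N = 20
nodes, weights = np.polynomial.legendre.leggauss(N)

def W(r, rp):                                     # W(r, r'): a narrow Gaussian, chosen so
    return 0.5 * np.exp(-((r - rp) ** 2) / 0.05)  # that DS_m ||W||_F < 1 (contracting)

def S(v):
    return 1.0 / (1.0 + np.exp(-v))

def I(r):
    return -0.3 * np.ones_like(r)

K = W(nodes[:, None], nodes[None, :]) * weights[None, :]   # quadrature matrix

V = np.zeros(N)
for _ in range(200):                              # converges geometrically (theorem 4)
    V_new = K @ S(V) + I(nodes)
    done = np.max(np.abs(V_new - V)) < 1e-12
    V = V_new
    if done:
        break

def nystrom(r):                                   # Nystrom interpolation of the bump at any r
    r = np.atleast_1d(r)
    return W(r[:, None], nodes[None, :]) @ (weights * S(V)) + I(r)

print(nystrom(np.linspace(-1.0, 1.0, 5)))
```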

The following theorem ensures that $\lim_{h \to 0} \mathbf{V}^h_f = \mathbf{V}_f$:

Theorem 5.

Assume that $\lim_{h \to 0} \mathbf{W}^h = \mathbf{W}$ in $L^2_{n\times n}(\Omega \times \Omega)$; then $\|\mathbf{V}^h_f - \mathbf{V}_f\|_{\mathcal{F}} = O(a_h)$, with $a_h = \|\mathbf{W} - \mathbf{W}^h\|_F$.

Proof.

The proof is an adaptation of theorem 19.5 in Krasnosel'skii, Vainikko, Zabreiko, and Stetsenko (1972).

5.2.  Examples of Bumps.

We show four examples of the application of the previous numerical method to the computation of bumps for various values of n and q.

There are n populations (V = [V1, …, Vn]T, WL2n×n(Ω × Ω)), some excitatory and some inhibitory. Ω = [ − 1, 1]q. We characterize in section 6.2 how the shape of Ω influences that of the bumps. The connectivity matrix is of the form
$$W_{ij}(\mathbf{r}, \mathbf{r}') = \alpha_{ij}\,e^{-\langle \mathbf{T}_{ij}(\mathbf{r} - \mathbf{r}'),\, \mathbf{r} - \mathbf{r}'\rangle}, \qquad (5.3)$$
with $\mathbf{T}_{ij}$ a q × q symmetric positive definite matrix. The weights $\alpha_{ij}$, i, j = 1, …, n, form an n × n matrix $\boldsymbol{\alpha}$, and the matrices $\mathbf{T}_{ij}$ together form an element $\mathbf{T}$. The weights $\boldsymbol{\alpha}$ are chosen so that $DS_m\|\mathbf{W}\|_F < 1$. The sign of $\alpha_{ij}$ indicates whether population j excites or inhibits population i. The bumps are computed using the algorithm described in the previous section.
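Here is a sketch of how a connectivity of the form 5.3 can be evaluated for n = 2, q = 2; the values of α and of the matrices $\mathbf{T}_{ij}$ are placeholders, not those of the examples that follow:

```python
import numpy as np

# Connectivity of equation 5.3: W_ij(r, r') = alpha_ij exp(-<T_ij (r - r'), r - r'>),
# with each T_ij symmetric positive definite; alpha and T below are placeholders.
alpha = np.array([[1.0, -0.5],
                  [0.8, -0.2]])
T = np.array([[[[40.0, 0.0], [0.0, 40.0]],   # T_11, T_12
               [[12.0, 0.0], [0.0, 12.0]]],
              [[[12.0, 0.0], [0.0, 12.0]],   # T_21, T_22
               [[40.0, 0.0], [0.0, 40.0]]]])

def W(r, rp):
    """Return the n x n matrix W(r, r') for two points r, r' of Omega (q = 2)."""
    d = np.asarray(r, dtype=float) - np.asarray(rp, dtype=float)
    quad = np.einsum('ijkl,k,l->ij', T, d, d)  # <T_ij d, d> for every population pair
    return alpha * np.exp(-quad)

print(W([0.0, 0.0], [0.1, -0.2]))
```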

5.2.1.  First Example: n = 2, q = 2, Constant Current.

Figure 3 shows an example of a bump for the following values of the parameters:
formula
There is one excitatory and one inhibitory population of neural masses.
Figure 3: Example of a two-population, two-dimensional bump with constant external currents (see text).

5.2.2.  Second Example: n = 2, q = 2, Nonconstant Current.

Figure 4 shows a different example where the external current I is still equal to 0 for its second coordinate and is not constant but equal to its previous value, −0.3, to which we have added a circularly symmetric 2D gaussian centered at the point of coordinates (0.5, 0.5) of the square Ω with standard deviation 0.18 and maximum value 0.2. It is interesting to see how the shape of the previous bump is perturbed. The matrix α is the same as in the first example. The matrix T is equal to
formula
corresponding to a spatially broader interaction for the first population and narrower for the second.
Figure 4: Example of a two-population, two-dimensional bump with gaussian-shaped external current (see text).

5.2.3.  Third Example: n = 3, q = 2, Constant Current.

Figure 5 shows an example of a bump for three neural populations—two excitatory and one inhibitory—in two dimensions. We use the following values of the parameters:

Figure 5: Example of a three-population, two-dimensional bump (see text).

formula

5.2.4.  Fourth Example: n = 2, q = 3, Constant Current.

We show an example of a three-dimensional bump for two populations of neural masses (see Figure 6). The parameters are
formula
where Id3 is the 3 × 3 identity matrix.
Figure 6: Example of a two-population, three-dimensional bump; isosurfaces are shown. Transparencies increase linearly from blue to red.

6.  Sensitivity of the Bump to Variations of the Parameters

In this section, we characterize how the solutions of equation 3.2 vary with the parameters that appear in the equation. These parameters are of two types: first, we have a finite number of real parameters such as the external currents, the weights in the connectivity matrix W, or the parameters of the sigmoids, and, second, the shape of the domain Ω, a potentially infinite-dimensional parameter.

We focus on the voltage-based model; the analysis in the activity-based case is very similar. We start with a set of general considerations in the finite-dimensional case, which we then apply to the various cases. We then tackle the more difficult case of the dependency with respect to the shape of Ω.

As $\mathbf{f}_v$ is a smooth function of the parameters ($\mathbf{I}$, $\boldsymbol{\alpha}$, $\mathbf{S}$, …), one can show (by extending Banach's theorem) that the fixed point $\mathbf{V}_f$ inherits the smoothness of $\mathbf{f}_v$.

6.1.  The Finite-Dimensional Parameters.

We introduce the linear operator2 $\mathbf{W}\cdot\mathbf{DS}(\mathbf{V}_f)$ such that
$$\mathbf{W}\cdot\mathbf{DS}(\mathbf{V}_f)\,\mathbf{V}(\mathbf{r}) = \int_\Omega \mathbf{W}(\mathbf{r}, \mathbf{r}')\,\mathbf{DS}(\mathbf{V}_f(\mathbf{r}'))\,\mathbf{V}(\mathbf{r}')\,d\mathbf{r}'.$$
We have the following:

Proposition 6.
The derivative ∂λVf of the fixed point Vf with respect to the generic parameter λ satisfies the equation
$$\partial_\lambda \mathbf{V}_f = \mathbf{W}\cdot\mathbf{DS}(\mathbf{V}_f)\cdot\partial_\lambda \mathbf{V}_f + \mathbf{b}(\lambda, \mathbf{V}_f), \qquad (6.1)$$
where $\mathbf{b}(\lambda, \mathbf{V}_f) = (\partial_\lambda \mathbf{W})\cdot\mathbf{S}(\mathbf{V}_f) + \mathbf{W}\cdot(\partial_\lambda \mathbf{S}(\mathbf{V}_f)) + \partial_\lambda \mathbf{I}$.

Proof.
Taking the derivative of both sides of equation 5.1 with respect to λ, we have
$$\partial_\lambda \mathbf{V}_f = (\partial_\lambda \mathbf{W})\cdot\mathbf{S}(\mathbf{V}_f) + \mathbf{W}\cdot\mathbf{DS}(\mathbf{V}_f)\,\partial_\lambda \mathbf{V}_f + \mathbf{W}\cdot(\partial_\lambda \mathbf{S}(\mathbf{V}_f)) + \partial_\lambda \mathbf{I}.$$
Hence, we obtain equation 6.1.

Note that ∂λS(Vf) is the partial derivative of the vector S with respect to the scalar parameter λ evaluated at V = Vf.

Because of the assumption $DS_m\|\mathbf{W}\|_F < 1$, the linear operator $\mathbf{J} = \mathrm{Id} - \mathbf{W}\cdot\mathbf{DS}(\mathbf{V}_f)$ is invertible with
$$\mathbf{J}^{-1} = \sum_{k=0}^{\infty} \big(\mathbf{W}\cdot\mathbf{DS}(\mathbf{V}_f)\big)^k,$$
and the series is convergent.
$\partial_\lambda \mathbf{V}_f$ is thus obtained from the following formula:
$$\partial_\lambda \mathbf{V}_f = \mathbf{J}^{-1}\,\mathbf{b}(\lambda, \mathbf{V}_f),$$
the right-hand side being computed by
$$\mathbf{J}^{-1}\,\mathbf{b}(\lambda, \mathbf{V}_f) = \sum_{k=0}^{\infty} \big(\mathbf{W}\cdot\mathbf{DS}(\mathbf{V}_f)\big)^k\,\mathbf{b}(\lambda, \mathbf{V}_f).$$
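Numerically, equation 6.1 therefore amounts to a single linear solve or, equivalently, to summing the truncated Neumann series; a self-contained sketch for n = q = 1 with an illustrative kernel and λ taken to be the input level:

```python
import numpy as np

# Sensitivity: solve (Id - W . DS(V_f)) dV = b (equation 6.1), n = q = 1, discretized
# with Gauss-Legendre quadrature; the kernel is illustrative.
N = 20
x, w = np.polynomial.legendre.leggauss(N)
K = 0.5 * np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.05) * w[None, :]

def S(v):
    return 1.0 / (1.0 + np.exp(-v))

def dS(v):
    return S(v) * (1.0 - S(v))

V = np.zeros(N)
for _ in range(200):                      # fixed point V_f of V = K S(V) + I, with I = -0.3
    V = K @ S(V) - 0.3

b = np.ones(N)                            # b = dI/dlambda when lambda is the input level
A = K * dS(V)[None, :]                    # matrix of W . DS(V_f) on the grid
dV = np.linalg.solve(np.eye(N) - A, b)    # one linear solve gives the derivative of the bump

# Equivalent truncated Neumann series (convergent since DS_m ||W||_F < 1 here):
dV_series = sum(np.linalg.matrix_power(A, k) @ b for k in range(60))
print(np.max(np.abs(dV - dV_series)))     # the two computations agree
```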
We now apply proposition 6 to the study of the sensitivity of the bump to the variations of the parameters.

6.1.1.  Sensitivity of the Bump to the Exterior Current.

When $\lambda = I_1$, we find:
$$\partial_{I_1}\mathbf{V}_f = \mathbf{J}^{-1}\,\mathbf{e}_1 \geq 0.$$
This inequality is to be understood component by component. It predicts the influence of I1 on Vf. For example, with the parameters α and T used in Figure 3 but with an external current equal to 0, we obtain the bump shown in Figure 7 (top) with the derivatives shown at the bottom of the same figure. We also show in Figure 8 cross sections of V1 and V2 along the diagonal and the x-axis for different values of I1 close to 0. The reader can verify that the values increase with I1, as predicted.
Figure 7: A bump corresponding to the following parameters: α and T are the same as in Figure 3, I = [0, 0]T (top). Derivative of the bump with respect to the first coordinate, I1, of the exterior current (bottom). We verify that it is positive (see text).

Figure 8: Cross sections of V1 (left) and V2 (right) for I1 = −0.001 (long-dashed line), I1 = 0 (continuous line), and I1 = 0.001 (dashed line). I2 = 0 in all three cases. To increase the readability of the results, we have applied an offset of 0.001 and 0.002 to the continuous and dashed curves on the right-hand side of the figure, respectively.

6.1.2.  Sensitivity of the Bump to the Weights α.

For $\lambda = \alpha_{ij}$, one finds
$$\mathbf{b}(\alpha_{ij}, \mathbf{V}_f) = (\partial_{\alpha_{ij}}\mathbf{W})\cdot\mathbf{S}(\mathbf{V}_f).$$
We then have:
$$\partial_a \mathbf{V}_f \geq 0.$$
The fixed point is an increasing function of the excitatory parameter a.
$$\partial_b \mathbf{V}_f \leq 0.$$

The fixed point is a decreasing function of the inhibitory parameter b (see Figure 9).

The other cases are similar.

Figure 9: Cross sections of V1 (left) and V2 (right) for b = −0.101 (long-dashed line), b = −0.1 (continuous line), and b = −0.099 (dashed line). To increase the readability of the results, we have applied an offset of 0.001 and 0.002 to all continuous and dashed curves, respectively.

6.1.3.  Sensitivity of the Bump to the Thresholds.

When $\lambda = \theta_i$, $i = 1, 2$, we have, from definition 1 of $\mathbf{S}$ and with the notations of proposition 6,
$$\mathbf{b}(\theta_i, \mathbf{V}_f) = -\mathbf{W}\cdot\big(S_i'(V_{f,i})\,\mathbf{e}_i\big),$$
where e1 = [1, 0]T, e2 = [0, 1]T. We show in Figure 10 some cross sections of the bump Vf obtained for the same values of the parameters as in Figure 3 and three values of the threshold vector.
Figure 10: Cross sections of V1 (left) and V2 (right) for θ = −0.101[1, 1]T (long-dashed line), θ = 0 (continuous line), and θ = 0.1[1, 1]T (dashed line). To increase the readability of the results, we have applied an offset of 0.001 and 0.002 to all continuous and dashed curves, respectively.

6.1.4.  Sensitivity of the Bump to the Slope of the Sigmoid.

When $\lambda = s_i$, $i = 1, 2$, we have, from definition 1 of $\mathbf{S}$ and with the notations of proposition 6,
$$\mathbf{b}(s_i, \mathbf{V}_f) = \mathbf{W}\cdot\big(\partial_{s_i}\mathbf{S}(\mathbf{V}_f)\big),$$
where, for the sigmoid of equation 2.1,
$$\partial_{s_i} S_i(v) = (v - \theta_i)\,\frac{e^{-s_i(v - \theta_i)}}{\big(1 + e^{-s_i(v - \theta_i)}\big)^2}.$$
Figure 11 shows the two coordinates $\partial_s V_{f,1}$ and $\partial_s V_{f,2}$, for $s_1 = s_2 = s$, of the derivative of the bump $\mathbf{V}_f$ at $s = 1$, obtained for the same values of the other parameters as in Figure 3, except the external current, which is equal to 0.
Figure 11: Plot of the derivative with respect to the slope of the sigmoids of the bump obtained with the same parameters α, I, and T as in Figure 3.

6.2.  Sensitivity of the Bump to Variations of the Shape of the Domain Ω.

We expect the bump to be somewhat dependent on the shape of Ω. It would nonetheless be desirable that this dependency would not be too strong for the modeling described in this letter to have some biological relevance. Indeed, if the bumps are metaphors of persistent states, we expect them to be relatively robust to the actual shape of the cortical part being modeled. For example, if we take Ω to be a representation of the primary visual cortex V1 whose shape varies from individual to individual, it would come as a surprise if the shape of a bump induced by the same spatially constant stimulus were drastically different.

Technically, in order to study the dependence of Vf with respect to Ω, we need to assume that Ω is smooth; its border ∂Ω is a smooth curve (q = 2) or surface (q = 3) unlike the previous examples, where Ω was the square [ − 1, 1]2. But a difficulty arises from the fact that the set of regular domains is not a vector space—hence, the derivative of a function (the bump) with respect to a domain has to be defined with some care. The necessary background is found in section A.3.

We make explicit the fact that the connectivity function $\mathbf{W}$ has been normalized to satisfy $\|\mathbf{W}\|_F = 1$ by writing $\mathbf{W}(\mathbf{r}, \mathbf{r}', \Omega)$, where, with some abuse of notation,
$$\mathbf{W}(\mathbf{r}, \mathbf{r}', \Omega) = \frac{\mathbf{W}(\mathbf{r}, \mathbf{r}')}{J(\Omega)^{1/2}}, \qquad J(\Omega) = \int_{\Omega\times\Omega} \|\mathbf{W}(\mathbf{r}, \mathbf{r}')\|_F^2\,d\mathbf{r}\,d\mathbf{r}'.$$

Theorem 6.
Let us assume that Ω is a smooth, bounded domain of $\mathbb{R}^q$. If $\mathbf{W}$ is in $W^{1,2}_{n\times n}(\Omega \times \Omega)$ and $\mathbf{I}_{\mathrm{ext}}$ is in $W^{1,2}_n(\Omega)$ (see section A.1 for a definition), the material derivative (see section A.3 for a definition) $\mathbf{V}^m_f(\mathbf{r}, \Omega)$ of the bump $\mathbf{V}_f$ satisfies the following equation:
formula
6.2
formula
6.3
formula
6.4
formula
6.5
formula
6.6
where Di, i = 1, 2 indicates the derivative with respect to the ith variable and 〈J′(Ω), X〉 is the Gâteaux derivative of J(Ω) with respect to the vector field X:
$$\langle J'(\Omega), \mathbf{X}\rangle = \lim_{\tau \searrow 0} \frac{J(\Omega(\tau)) - J(\Omega)}{\tau},$$
where X is defined in the proof below. We have
$$\langle J'(\Omega), \mathbf{X}\rangle = -\int_{\partial\Omega}\!\!\int_\Omega \Big(\|\mathbf{W}(\mathbf{r}, \mathbf{r}')\|_F^2 + \|\mathbf{W}(\mathbf{r}', \mathbf{r})\|_F^2\Big)\,d\mathbf{r}'\,\langle \mathbf{X}(\mathbf{r}), \mathbf{n}(\mathbf{r})\rangle\,da(\mathbf{r}),$$
where da is the surface element on the smooth boundary ∂Ω of Ω, and n is its unit inward normal vector.

Proof.
The proof uses ideas that are developed in Delfour and Zolésio (2001) and Sokolowski and Zolésio (1992; see also section A.3). We want to compute
$$\mathbf{V}^m_f(\mathbf{r}, \Omega) = \lim_{\tau \searrow 0} \frac{\mathbf{V}_f(\mathbf{r} + \tau\mathbf{X}(\mathbf{r}), \Omega(\tau)) - \mathbf{V}_f(\mathbf{r}, \Omega)}{\tau}$$
from equation 3.2. As far as the computation of the derivative is concerned, only small deformations are relevant, and we consider the first-order Taylor expansion of the transformation T:
$$T(\tau) = \mathrm{Id} + \tau\mathbf{X} + o(\tau).$$
We define
formula
In the first integral, we make the change of variable r′ → r′ + τX and obtain
formula
We have
formula
Hence, for τ sufficiently small, $\det J_\tau > 0$. Moreover:
formula
and
formula
where Di, i = 1, 2 indicates the derivative with respect to the ith argument. Thus, we have
formula
Because
formula
and $\int_\Omega \mathbf{W}(\mathbf{r}, \mathbf{r}', \Omega)\,\mathbf{S}(\mathbf{V}_f(\mathbf{r}', \Omega))\,d\mathbf{r}' = \mathbf{V}_f(\mathbf{r}, \Omega) - \mathbf{I}_{\mathrm{ext}}(\mathbf{r})$, we obtain equation 6.2. The value of $\langle J'(\Omega), \mathbf{X}\rangle$ is obtained from corollary 2.

Equation 6.2 is of the same form as before:
formula
This result tells us that the shape of the bump varies smoothly with respect to the shape of the domain Ω.
Figure 12: The unit disk and its bump Vf.

6.2.1.  Numerical Application for the Domain Derivatives.

We show in Figure 12 the bump $\mathbf{V}_f$ for Ω equal to the unit disc D(0, 1) and in Figure 13 the one for Ω equal to the ellipse3 of equation $r_1^2/a^2 + r_2^2 = 1$ with a = 1.2. The values of the weight parameters α are the same as in Figure 3 and I = [0, 0]T. The matrix T is equal to
formula
Note that because the diagonal elements of the matrices $\mathbf{T}_{ij}$ are not all equal, W is not circularly symmetric, and neither is the bump in Figure 12, despite the fact that Ω is circularly symmetric.
Finally we show in Figure 14 the two coordinates of the shape (material) derivative of the first bump in the direction of the field $\mathbf{X}$ corresponding to the transformation
$$T(\tau)(\mathbf{r}) = \big[(1 + \tau(a - 1))\,r_1,\; r_2\big]^T.$$
$T(1)$ transforms the disc D(0, 1) into the ellipse of Figure 13, and $\mathbf{X}(\mathbf{r}) = [(a - 1)r_1, 0]^T$.
Figure 13: Bump associated with the ellipse with major axis along the r1 coordinate and the minor axis along the r2 coordinate. The ratio of the axes length is a = 1.2 (see text).

Figure 14: The shape derivative Vfm for a = 1.2.

Thus, because of equation 5.3,
formula
and
formula
As the gaussian quadrature formula holds for a rectangular domain, we use polar coordinates to map the disk (or the ellipse) to a square. For our numerical study, we can simplify these expressions (the matrices $\mathbf{T}_{ij}$ are symmetric):
formula
Thus, we can use a simple modification of the algorithm that computes W · S(V) to obtain the previous expression.

$J(\Omega)$ and $\langle J'(\Omega), \mathbf{X}\rangle$ are computed with a Gauss quadrature formula. For a circle in polar coordinates, the Jacobian is N(r′) = r′.
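Here is a sketch of this polar mapping: a rectangular Gauss-Legendre grid on (ρ, ϕ) ∈ [0, 1] × [0, 2π], weighted by the Jacobian N(ρ) = ρ, integrates a test function over the unit disk; the integrand is an arbitrary example:

```python
import numpy as np

# Gauss-Legendre quadrature over the unit disk via polar coordinates:
# \int_D f = \int_0^1 \int_0^{2 pi} f(rho cos phi, rho sin phi) rho dphi drho.
N = 20
u, wu = np.polynomial.legendre.leggauss(N)
rho, w_rho = 0.5 * (u + 1.0), 0.5 * wu        # map [-1, 1] -> [0, 1]
phi, w_phi = np.pi * (u + 1.0), np.pi * wu    # map [-1, 1] -> [0, 2 pi]

f = lambda x, y: np.exp(-(x ** 2 + y ** 2))   # test integrand with a known disk integral
X = rho[:, None] * np.cos(phi[None, :])
Y = rho[:, None] * np.sin(phi[None, :])
val = np.sum(w_rho[:, None] * w_phi[None, :] * rho[:, None] * f(X, Y))
print(val, np.pi * (1.0 - np.exp(-1.0)))      # numerical vs. exact value
```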

Let us be a bit more precise. In the case shown in Figure 12, we choose I = 0. Using Banach's theorem, we compute VfGauss for N = 30 and use Nyström's interpolation to compute VfNys for n = 100 (for example) points on each axis.

Then, using $\mathbf{V}_f^{\mathrm{Gauss}}$, we compute $\mathbf{V}_f^{m,\mathrm{Gauss}}$ for N points. The equation for $\mathbf{V}_f^m$ reads
$$\mathbf{V}_f^m = \mathbf{W}\cdot\mathbf{DS}(\mathbf{V}_f)\cdot\mathbf{V}_f^m + \langle J'(\Omega), \mathbf{X}\rangle.$$
Having computed $\mathbf{V}_f^{m,\mathrm{Gauss}}$ from this equation, we again use a Nyström interpolation with the last equation to compute $\mathbf{V}_f^{m,\mathrm{Nyström}}$ for n points on each axis.

We used this numerical method in every previous example related to the computation of derivatives.

7.  Conclusion and Perspectives

We have studied two classes (voltage and activity based) of neural continuum networks in the context of modeling macroscopic parts of the cortex. In both cases, we have assumed an arbitrary number of interacting neuron populations, either excitatory or inhibitory. These populations are spatially related by nonsymmetric connectivity functions representing cortico-cortical, local connections. External inputs are also present in our models to represent nonlocal connections, for example, with other cortical areas. The relationship between (average) membrane potential and activity is described by nondegenerate sigmoidal nonlinearities, that is, not by Heaviside functions, which have often been considered instead in the literature because of their (apparent) simplicity.

The resulting nonlinear integro-differential equations are of the Hammerstein type (Hammerstein, 1930) and generalize those proposed by Wilson and Cowan (1973).

Departing from most of the previous work in this area, we relax the usual assumption that the domain of definition where we study these networks is infinite, that is, equal to $\mathbb{R}$ or $\mathbb{R}^2$, and we explicitly consider the biologically much more relevant case of a bounded subset Ω of $\mathbb{R}^q$, obviously a better model of a piece of cortex. The importance of this has been emphasized by Nunez (2005) for a number of years, and we fully agree with him on this.

Using methods of functional analysis, we have studied the existence and uniqueness of a stationary (i.e., time-independent) solution of these equations in the case of a stationary input. These solutions are often referred to as persistent states, or bumps, in the literature.

We have proved that under very mild assumptions on the connectivity functions, such solutions always exist (this is due in part to the fact that we do not use Heaviside functions and mostly to the fact that we consider the cortex as a bounded set).

We have provided sufficient conditions on the connectivity functions for the solution to be absolutely stable, that is, independent of the initial state of the network. These conditions can be expressed in terms of the spectra of some functional operators, which we prove to be compact, that arise naturally from the equations describing the network activity.

We have also studied the sensitivity of the solutions to variations of such parameters as the connectivity functions, the sigmoids, the external inputs, and the shape of the domain of definition of the neural continuum networks. This last analysis is more involved than the others because of the infinite-dimensional nature of the shape parameter. An analysis of the bifurcations of the solutions when the parameters vary over large ranges requires the use of techniques of bifurcation analysis for infinite-dimensional systems and is out of the scope of this letter.

We believe, and we hope by now to have convinced the reader, that the functional analysis framework that we have used in this letter is the right one to try to answer some of the mathematical questions raised by models of connected networks of nonlinear neurons. We also believe that some of these answers begin to address biological questions, since these network models, despite the admittedly immense simplifications they are built from, are nonetheless metaphors of real neural assemblies.

Appendix A:  Notations and Background Material

A.1.  Matrix Norms and Spaces of Functions.

We note $M_{n\times n}$ the set of n × n real matrices. We consider the Frobenius norm on $M_{n\times n}$,
$$\|\mathbf{M}\|_F = \left(\sum_{i,j=1}^{n} M_{ij}^2\right)^{1/2},$$
and consider the space $L^2_{n\times n}(\Omega \times \Omega)$ of the functions from Ω × Ω to $M_{n\times n}$ whose Frobenius norm is in $L^2(\Omega \times \Omega)$. If $\mathbf{W} \in L^2_{n\times n}(\Omega \times \Omega)$, we note $\|\mathbf{W}\|^2_F = \int_{\Omega\times\Omega} \|\mathbf{W}(\mathbf{r}, \mathbf{r}')\|^2_F\,d\mathbf{r}\,d\mathbf{r}'$. Note that this implies that each element $W_{ij}$, i, j = 1, …, n, is in $L^2(\Omega \times \Omega)$. We note $\mathcal{F}$ the set $L^2_n(\Omega)$ of square-integrable mappings from Ω to $\mathbb{R}^n$ and $\|\cdot\|_{\mathcal{F}}$ the corresponding norm. We have the following:

Lemma 1.
Given $\mathbf{x} \in L^2_n(\Omega)$ and $\mathbf{W} \in L^2_{n\times n}(\Omega \times \Omega)$, we define $\mathbf{y}(\mathbf{r}) = \int_\Omega \mathbf{W}(\mathbf{r}, \mathbf{r}')\,\mathbf{x}(\mathbf{r}')\,d\mathbf{r}'$. This integral is well defined for almost all $\mathbf{r}$, $\mathbf{y}$ is in $L^2_n(\Omega)$, and we have
$$\|\mathbf{y}\|_{\mathcal{F}} \leq \|\mathbf{W}\|_F\,\|\mathbf{x}\|_{\mathcal{F}}.$$

Proof.
Since each $W_{ij}$ is in $L^2(\Omega \times \Omega)$, $W_{ij}(\mathbf{r}, \cdot)$ is in $L^2(\Omega)$ for almost all $\mathbf{r}$, thanks to Fubini's theorem. So $W_{ij}(\mathbf{r}, \cdot)\,x_j(\cdot)$ is integrable for almost all $\mathbf{r}$, from which we deduce that $\mathbf{y}$ is well defined for almost all $\mathbf{r}$. Next, we have
$$|y_i(\mathbf{r})| \leq \sum_j \int_\Omega |W_{ij}(\mathbf{r}, \mathbf{r}')|\,|x_j(\mathbf{r}')|\,d\mathbf{r}',$$
and (Cauchy-Schwarz)
$$|y_i(\mathbf{r})| \leq \sum_j \left(\int_\Omega W_{ij}^2(\mathbf{r}, \mathbf{r}')\,d\mathbf{r}'\right)^{1/2} \|x_j\|_2,$$
from which it follows that (Cauchy-Schwarz again, discrete version)
$$|y_i(\mathbf{r})|^2 \leq \left(\sum_j \int_\Omega W_{ij}^2(\mathbf{r}, \mathbf{r}')\,d\mathbf{r}'\right)\left(\sum_j \|x_j\|_2^2\right),$$
from which it follows that $\mathbf{y}$ is in $L^2_n(\Omega)$ (thanks again to Fubini's theorem) and
$$\|\mathbf{y}\|^2_{\mathcal{F}} \leq \|\mathbf{W}\|^2_F\,\|\mathbf{x}\|^2_{\mathcal{F}}.$$

We also use the following:

Lemma 2.
For each $\mathbf{V}$ of $\mathcal{F}$, $\mathbf{S}(\mathbf{V})$ is in $\mathcal{F}$, and we have
$$\|\mathbf{S}(\mathbf{V})\|_{\mathcal{F}} \leq \sqrt{n|\Omega|}\,S_m.$$
For all $\mathbf{V}_1$ and $\mathbf{V}_2$ in $\mathcal{F}$, we have
$$\|\mathbf{S}(\mathbf{V}_1) - \mathbf{S}(\mathbf{V}_2)\|_{\mathcal{F}} \leq DS_m\,\|\mathbf{V}_1 - \mathbf{V}_2\|_{\mathcal{F}},$$
where $DS_m$ is defined in definition 1.

Proof.

We have $\|\mathbf{S}(\mathbf{V})\|^2_{\mathcal{F}} = \sum_{i=1}^n \int_\Omega |S_i(V_i(\mathbf{r}))|^2\,d\mathbf{r} \leq n|\Omega|\,S_m^2$, where |Ω| is the Lebesgue measure of Ω (its area). Similarly, $\|\mathbf{S}(\mathbf{V}_1) - \mathbf{S}(\mathbf{V}_2)\|^2_{\mathcal{F}} \leq (DS_m)^2\,\|\mathbf{V}_1 - \mathbf{V}_2\|^2_{\mathcal{F}}$, by the mean value theorem.

In theorem 6, we use the Sobolev spaces $W^{1,2}_n(\Omega)$ and $W^{1,2}_{n\times n}(\Omega \times \Omega)$. $W^{1,2}_n(\Omega)$ is the set of functions such that each component $X_i$, i = 1, …, n, is in $W^{1,2}(\Omega)$, the set of functions of $L^2(\Omega)$ whose first-order derivatives exist in the weak sense and are also in $L^2(\Omega)$ (see Evans, 1998). Similarly, $W^{1,2}_{n\times n}(\Omega \times \Omega)$ is the set of functions such that each component $X_{ij}$, i, j = 1, …, n, is in $W^{1,2}(\Omega \times \Omega)$.

A.2.  Choice of the Quadrature Method.

We emphasize the importance of the choice of a specific quadrature formula with the following example, the computation of $\int_{-1}^{1} e^{x}\,dx = e - e^{-1}$, in which we compare a 0th-order finite element method with Gauss's method (the parameters of the Gauss quadrature formula are computed with a precision of 10−16 using Newton's method):

Method                                 Value
Exact                                  2.350 402 387 287 603 …
0th-order finite element (N = 1000)    2.351 945 …
Gauss (N = 5)                          2.350 402 386 46 …

The Gauss method is far more powerful and allows us to compute bumps in 3D for an arbitrary number of populations.
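This comparison is easy to reproduce; the sketch below uses the left-endpoint rectangle rule as a stand-in for the 0th-order finite element method (the letter's exact variant of that rule is not specified here):

```python
import numpy as np

# \int_{-1}^{1} e^x dx = e - 1/e: 0th-order (rectangle) rule with N = 1000 points
# versus 5-point Gauss-Legendre.
exact = np.e - 1.0 / np.e                      # 2.350402387287603...

N = 1000
x0 = -1.0 + 2.0 * np.arange(N) / N             # left-endpoint rectangle rule
rect = np.sum(np.exp(x0)) * (2.0 / N)          # error of order 1e-3

xg, wg = np.polynomial.legendre.leggauss(5)    # 5-point Gauss-Legendre rule
gauss = np.sum(wg * np.exp(xg))                # error of order 1e-9

print(f"exact {exact:.15f}  rectangle {rect:.15f}  gauss {gauss:.15f}")
```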

A.3.  Shape Derivatives.

As has already been pointed out, the computation of the variation of the bump with respect to the shape of the region Ω is difficult since the set of regular domains (regular open bounded sets) of $\mathbb{R}^q$ does not have the structure of a vector space. Variations of a domain must then be defined in some way. Let us consider a reference domain $\Omega_0$ and the set $\mathcal{A}$ of applications $T : \Omega_0 \to \mathbb{R}^q$ that are at least as regular as homeomorphisms, that is, one to one with T and T−1 one to one. In detail,
$$\mathcal{A} = \big\{T \text{ one to one}, \ T, T^{-1} \in W^{1,\infty}(\Omega_0, \mathbb{R}^q)\big\},$$
where the functional space $W^{1,\infty}(\Omega_0, \mathbb{R}^q)$ is the set of mappings such that they and their first-order derivatives are in $L^\infty(\Omega_0, \mathbb{R}^q)$. In detail,
$$W^{1,\infty}(\Omega_0, \mathbb{R}^q) = \big\{T : \Omega_0 \to \mathbb{R}^q \text{ such that } T \in L^\infty(\Omega_0, \mathbb{R}^q) \text{ and } \partial_i T \in L^\infty(\Omega_0, \mathbb{R}^q),\ i = 1, \ldots, q\big\}.$$

Given a shape function $F$, for $T \in \mathcal{A}$, let us define $\hat{F}(T) = F(T(\Omega_0))$. The key point is that since $W^{1,\infty}(\Omega_0, \mathbb{R}^q)$ is a Banach space, we can define the notion of a derivative with respect to the domain Ω as:

Definition 4.

$F$ is Gâteaux differentiable with respect to Ω if and only if $\hat{F}$ is Gâteaux differentiable with respect to $T$.

In order to compute Gâteaux derivatives with respect to $T$, we introduce a family of deformations $(T(\tau))_{\tau \geq 0}$ such that $T(\tau) \in \mathcal{A}$ for τ ⩾ 0, $T(0) = \mathrm{Id}$, and $\tau \to T(\tau)$ is differentiable. From a practical point of view, there are many ways to construct such a family, the most famous one being the Hadamard deformation (Hadamard, 1968), which goes as follows.

For a point $\mathbf{r} \in \Omega$, we note
$$\mathbf{r}(\tau) = T(\tau)(\mathbf{r}), \quad \text{with } \mathbf{r}(0) = \mathbf{r}.$$
Let us now define the velocity vector field $\mathbf{X}$ corresponding to $T(\tau)$ as
$$\mathbf{X}(\mathbf{r}) = \frac{\partial T}{\partial \tau}(0)(\mathbf{r}).$$
From definition 4 follows:

Definition 5.
The Gâteaux derivative of a shape function $F(\Omega)$ in the direction of $\mathbf{X}$, denoted $\langle F'(\Omega), \mathbf{X}\rangle$, is equal to
$$\langle F'(\Omega), \mathbf{X}\rangle = \lim_{\tau \searrow 0} \frac{F(\Omega(\tau)) - F(\Omega)}{\tau}, \qquad \Omega(\tau) = T(\tau)(\Omega).$$

We also introduce:

Definition 6.
The material derivative of a function $f(\mathbf{r}, \Omega)$, noted $f_m(\mathbf{r}, \Omega, \mathbf{X})$, is defined by
$$f_m(\mathbf{r}, \Omega, \mathbf{X}) = \lim_{\tau \searrow 0} \frac{f(\mathbf{r} + \tau\mathbf{X}(\mathbf{r}), \Omega(\tau)) - f(\mathbf{r}, \Omega)}{\tau}.$$

and

Definition 7.
The shape derivative of a function $f(\mathbf{r}, \Omega)$, noted $f_s(\mathbf{r}, \Omega, \mathbf{X})$, is defined by
$$f_s(\mathbf{r}, \Omega, \mathbf{X}) = \lim_{\tau \searrow 0} \frac{f(\mathbf{r}, \Omega(\tau)) - f(\mathbf{r}, \Omega)}{\tau}.$$
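Note that when $f$ is smooth enough in $\mathbf{r}$, the two derivatives are related by the chain rule, a standard fact of shape calculus:
$$f_m(\mathbf{r}, \Omega, \mathbf{X}) = f_s(\mathbf{r}, \Omega, \mathbf{X}) + \nabla f(\mathbf{r}, \Omega) \cdot \mathbf{X}(\mathbf{r}).$$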

The following theorem, whose proof can be found in Delfour and Zolésio (2001) and Sokolowski and Zolésio (1992), relates the Gâteaux derivative and the shape derivative:

Theorem 7.
The Gâteaux derivative of the functional $F(\Omega) = \int_\Omega f(\mathbf{r}, \Omega)\,d\mathbf{r}$ in the direction of $\mathbf{X}$ is given by
$$\langle F'(\Omega), \mathbf{X}\rangle = \int_\Omega f_s(\mathbf{r}, \Omega, \mathbf{X})\,d\mathbf{r} - \int_{\partial\Omega} f(\mathbf{r}, \Omega)\,\langle \mathbf{X}(\mathbf{r}), \mathbf{n}(\mathbf{r})\rangle\,da(\mathbf{r}),$$
where $\mathbf{n}$ is the unit inward normal to ∂Ω and $da$ its area element.

The following corollary is used in the proof of theorem 6:

Corollary 2.
The Gâteaux derivative of the functional $F(\Omega) = \int_\Omega f(\mathbf{r})\,d\mathbf{r}$ in the direction of $\mathbf{X}$ is given by
$$\langle F'(\Omega), \mathbf{X}\rangle = -\int_{\partial\Omega} f(\mathbf{r})\,\langle \mathbf{X}(\mathbf{r}), \mathbf{n}(\mathbf{r})\rangle\,da(\mathbf{r}),$$
where $\mathbf{n}$ is the unit inward normal to ∂Ω and $da$ its area element.

Notes

1

By definition, $\langle \mathbf{V}_1, \mathbf{h}_m(\mathbf{V}_2)\rangle = \langle \mathbf{h}^*_m(\mathbf{V}_1), \mathbf{V}_2\rangle$, for all elements $\mathbf{V}_1$, $\mathbf{V}_2$ of $\mathcal{F}$.

2

$\mathbf{W}\cdot\mathbf{DS}(\mathbf{V}_f)$ is the Frechet derivative of the operator $\mathbf{f}_v$ at the point $\mathbf{V}_f$ of $\mathcal{F}$.

3

$r_1^2/a^2 + r_2^2/b^2 = 1$ represents the ellipse lying along the first axis of coordinates with semimajor axis a and semiminor axis b.

References

Amari, S. I. (1977). Dynamics of pattern formation in lateral-inhibition type neural fields. Biological Cybernetics, 27(2), 77–87.

Appell, J., & Chen, C.-J. (2006). How to solve Hammerstein equations. Journal of Integral Equations and Applications, 18(3), 287–296.

Blomquist, P., Wyller, J., & Einevoll, G. (2005). Localized activity patterns in two-population neuronal networks. Physica D, 206, 180–212.

Bressloff, P. (2005). Spontaneous symmetry breaking in self-organizing neural fields. Biological Cybernetics, 93(4), 256–274.

Buxhoeveden, D., & Casanova, M. (2002). The minicolumn hypothesis in neuroscience. Brain, 125, 935–951.

Chalupa, L. M., & Werner, J. (Eds.). (2004). The visual neurosciences. Cambridge, MA: MIT Press.

Colby, C., Duhamel, J., & Goldberg, M. (1995). Oculocentric spatial representation in parietal cortex. Cereb. Cortex, 5, 470–481.

Coombes, S. (2005). Waves, bumps, and patterns in neural fields theories. Biological Cybernetics, 93(2), 91–108.

David, O., Cosmelli, D., & Friston, K. J. (2004). Evaluation of different measures of functional connectivity using a neural mass model. NeuroImage, 21, 659–673.

David, O., & Friston, K. J. (2003). A neural mass model for MEG/EEG: Coupling and neuronal dynamics. NeuroImage, 20, 1743–1755.

Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience: Computational and mathematical modeling of neural systems. Cambridge, MA: MIT Press.

Delfour, M., & Zolésio, J.-P. (2001). Shapes and geometries: Advances in design and control. Philadelphia: SIAM.

Dieudonné, J. (1960). Foundations of modern analysis. Orlando, FL: Academic Press.

Doubrovinski, K. (2005). Dynamics, stability and bifurcation phenomena in the nonlocal model of cortical activity (UUDM Project Report 2005:8). Uppsala: Uppsala University, Department of Mathematics.

Ermentrout, B. (1998). Neural networks as spatio-temporal pattern-forming systems. Reports on Progress in Physics, 61, 353–430.

Evans, L. (1998). Partial differential equations. Providence, RI: American Mathematical Society.

Faugeras, O., Grimbert, F., & Slotine, J.-J. (2007). Stability and synchronization in neural fields (Tech. Rep. RR-6212). Sophia-Antipolis, France: INRIA.

Freeman, W. (1975). Mass action in the nervous system. Orlando, FL: Academic Press.

Funahashi, S., Bruce, C., & Goldman-Rakic, P. (1989). Mnemonic coding of visual space in the monkey's dorsolateral prefrontal cortex. J. Neurophysiol., 61, 331–349.

Gerstner, W., & Kistler, W. M. (2002). Mathematical formulations of Hebbian learning. Biological Cybernetics, 87, 404–415.

Grimbert, F., & Faugeras, O. (2006). Bifurcation analysis of Jansen's neural mass model. Neural Computation, 18(12), 3052–3068.

Guo, Y., & Chow, C. (2005a). Existence and stability of standing pulses in neural networks: II. Stability. SIAM Journal on Applied Dynamical Systems, 4, 249–281.

Guo, Y., & Chow, C. C. (2005b). Existence and stability of standing pulses in neural networks: I. Existence. SIAM Journal on Applied Dynamical Systems, 4(2), 217–248.

Gutkin, B., Ermentrout, G., & O'Sullivan, J. (2000). Layer 3 patchy recurrent excitatory connections may determine the spatial organization of sustained activity in the primate prefrontal cortex. Neurocomputing, 32–33, 391–400.

Hadamard, J. (1968). Mémoire sur un problème d'analyse relatif à l'équilibre des plaques élastiques encastrées. In Mémoire des savants étrangers. Paris: CNRS.

Haeusler, S., & Maass, W. (2007). A statistical analysis of information-processing properties of lamina-specific cortical microcircuit models. Cerebral Cortex, 17, 149–162.

Hammerstein, A. (1930). Nichtlineare Integralgleichungen nebst Anwendungen. Acta Math., 54, 117–176.

Hazewinkel, M. (2001). Encyclopaedia of mathematics. Berlin: Springer.

Hopfield, J. J. (1984). Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences, USA, 81(10), 3088–3092.

Hoppenstaedt, F., & Izhikevich, E. (1997). Weakly connected neural networks. New York: Springer-Verlag.

Jansen, B. H., & Rit, V. G. (1995). Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biological Cybernetics, 73, 357–366.

Jansen, B. H., Zouridakis, G., & Brandt, M. E. (1993). A neurophysiologically-based mathematical model of flash visual evoked potentials. Biological Cybernetics, 68, 275–283.

Kandel, E., Schwartz, J., & Jessel, T. (2000). Principles of neural science (4th ed.). New York: McGraw-Hill.

Kishimoto, K., & Amari, S. (1979). Existence and stability of local excitations in homogeneous neural fields. Journal of Mathematical Biology, 7(4), 303–318.

Krasnosel'skii, M., Vainikko, G., Zabreiko, P., & Stetsenko, V. (1972). Approximate solutions of operator equations (D. Louvish, Trans.). Groningen: Wolters-Noordhoff.

Kubota, S., & Aihara, K. (2005). Analyzing global dynamics of a neural field model. Neural Processing Letters, 21, 133–141.

Laing, C., & Troy, W. (2003). Two-bump solutions of Amari-type models of neuronal pattern formation. Physica D, 178(3), 190–218.

Laing, C., Troy, W., Gutkin, B., & Ermentrout, G. (2002). Multiple bumps in a neuronal model of working memory. SIAM J. Appl. Math., 63(1), 62–97.

Lopes da Silva, F., Blanes, W., Kalitzin, S., Parra, J., Suffczynski, P., & Velis, D. (2003). Dynamical diseases of brain systems: Different routes to epileptic seizures. IEEE Transactions on Biomedical Engineering, 50(5), 540–548.

Lopes da Silva, F., Hoeks, A., & Zetterberg, L. (1974). Model of brain rhythmic activity. Kybernetik, 15, 27–37.

Lopes da Silva, F., van Rotterdam, A., Barts, P., van Heusden, E., & Burr, W. (1976). Model of neuronal populations: The basic mechanism of rhythmicity. In M. A. Corner & D. F. Swaab (Eds.), Progress in brain research (pp. 281–308). Amsterdam: Elsevier.

Matsuoka, K. (1992). Stability conditions for nonlinear continuous neural networks with asymmetric connection weights. Neural Networks, 5, 495–500.

Miller, E., Erickson, C., & Desimone, R. (1996). Neural mechanisms of visual working memory in prefrontal cortex of the macaque. J. Neurosci., 16, 5154–5167.

Mountcastle, V. (1957). Modality and topographic properties of single neurons of cat's somatosensory cortex. Journal of Neurophysiology, 20, 408–434.

Mountcastle, V. (1997). The columnar organization of the neocortex. Brain, 120, 701–722.

Nunez, P. (2005). Electric fields of the brain: The neurophysics of EEG. New York: Oxford University Press.

Owen, M., Laing, C., & Coombes, S. (2007). Bumps and rings in a two-dimensional neural field: Splitting and rotational instabilities. New Journal of Physics, 9, 378–401.

Pinto, D., & Ermentrout, G. (2001a). Spatially structured activity in synaptically coupled neuronal networks: 1. Traveling fronts and pulses. SIAM J. of Appl. Math., 62, 206–225.

Pinto, D., & Ermentrout, G. (2001b). Spatially structured activity in synaptically coupled neuronal networks: 2. Standing pulses. SIAM J. of Appl. Math., 62, 226–243.

Press, W. H., Flannery, B. P., Teukolsky, S. A., & Vetterling, W. T. (1988). Numerical recipes in C. Cambridge: Cambridge University Press.

Rubin, J., & Troy, W. (2004). Sustained spatial patterns of activity in neuronal populations without recurrent excitation. SIAM Journal on Applied Mathematics, 64(5), 1609–1635.

Sokolowski, J., & Zolésio, J.-P. (1992). Introduction to shape optimization: Shape sensitivity analysis. Berlin: Springer-Verlag.

Stoer, J., & Bulirsch, R. (1972). Introduction to numerical analysis. Berlin: Springer-Verlag.

Thomson, A. M., & Bannister, A. P. (2003). Interlaminar connections in the neocortex. Cerebral Cortex, 13, 5–14.

Tricomi, F. (1985). Integral equations. New York: Dover.

van Rotterdam, A., Lopes da Silva, F., van den Ende, J., Viergever, M., & Hermans, A. (1982). A model of the spatial-temporal characteristics of the alpha rhythm. Bulletin of Mathematical Biology, 44(2), 283–305.

Wendling, F., Bartolomei, F., Bellanger, J., & Chauvel, P. (2001). Interpretation of interdependencies in epileptic signals using a macroscopic physiological model of the EEG. Clinical Neurophysiology, 112(7), 1201–1218.

Wendling, F., Bellanger, J., Bartolomei, F., & Chauvel, P. (2000). Relevance of nonlinear lumped-parameter models in the analysis of depth-EEG epileptic signals. Biological Cybernetics, 83, 367–378.

Werner, H., & Richter, T. (2001). Circular stationary solutions in two-dimensional neural fields. Biological Cybernetics, 85(3), 211–217.

Wilson, H., & Cowan, J. (1973). A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Biological Cybernetics, 13(2), 55–80.