## Abstract

Competition is a well-studied and powerful mechanism for information processing in neuronal networks, providing noise rejection, signal restoration, decision making and associative memory properties, with relatively simple requirements for network architecture. Models based on competitive interactions have been used to describe the shaping of functional properties in visual cortex, as well as the development of functional maps in columnar cortex. These models require competition within a cortical area to occur on a wider spatial scale than cooperation, usually implemented by lateral inhibitory connections having a longer range than local excitatory connections. However, measurements of cortical anatomy reveal that the spatial extent of inhibition is in fact more restricted than that of excitation. Relatively few models reflect this, and it is unknown whether lateral competition can occur in cortical-like networks that have a realistic spatial relationship between excitation and inhibition. Here we analyze simple models for cortical columns and perform simulations of larger models to show how the spatial scales of excitation and inhibition can interact to produce competition through disynaptic inhibition. Our findings give strong support to the direct coupling effect: the presence of competition across the cortical surface is predicted well by the anatomy of direct excitatory and inhibitory coupling, and multisynaptic network effects are negligible. This implies that for networks with short-range inhibition and longer-range excitation, the spatial extent of competition is even narrower than the range of inhibitory connections. Our results suggest the presence of network mechanisms that focus on intra- rather than intercolumn competition in neocortex, highlighting the need for both new models and direct experimental characterizations of lateral inhibition and competition in columnar cortex.

## 1. Introduction

How can we expect to tease apart the mechanisms of neocortex? The only justification for our hubris is the observation that each area of cortex exists not as a solitary and unique design, but instead adopts a variation on a shared but elusive theme (Ramón y Cajal, 1892; DeFelipe & Jones, 1998; Mountcastle, 2003; Douglas & Martin, 2007; Muir et al., 2011). Known as the *canonical cortical microcircuit* (Douglas, Martin, & Whitteridge, 1989), the notion that every cortical area reproduces a common network motif kindles a hope that each cortical area might also perform its computational role using a common form of computational dynamics.

Competition between the activity of several neurons is a well-studied mechanism that has been suggested as a canonical computation for cortex due to the useful theoretical properties of competitive interactions and the relative simplicity of implementing competition with neuronal elements. Two neurons are said to be in competition with each other if the activity of one of the neurons directly or indirectly reduces the activity of the other. Although two cross-connected inhibitory neurons have this property, more attention is usually paid to the information-encoding properties of excitatory neurons of cortex. These neurons form the vast majority of projections to and from other cortical areas and subcortical nuclei, and so they could be considered to embody the result of a cortical area's computation. The simplest networks that implement competition consist of two or more excitatory neurons coupled to a single common inhibitory neuron (Coultrip, Granger, & Lynch, 1992; Douglas, Mahowald, & Martin, 1994; Douglas & Martin, 2007). Depending on the parameters of the network, the excitatory neurons can be placed in a competitive regime. The excitatory neuron that receives the strongest external input will then effectively suppress the activity of other excitatory neurons through disynaptic inhibition via the shared inhibitory neuron. In extreme cases a single excitatory neuron—the “winner”—will be active, and all other excitatory neurons will be inactive. This network behavior is known as hard winner-take-all (WTA) behavior.
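This circuit is simple enough to simulate directly. The sketch below (with illustrative parameters of our own choosing, not values from the cited models) integrates three excitatory linear-threshold units coupled through one shared inhibitory unit; the most strongly driven unit silences the others through disynaptic inhibition:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def simulate_wta(inputs, w_self=1.2, w_ei=1.0, w_ie=2.0,
                 tau=0.01, dt=0.001, t_end=1.0):
    """Euler-integrate tau * dx/dt = -x + recurrent drive + input."""
    xe = np.zeros(len(inputs))  # excitatory unit states
    xi = 0.0                    # shared inhibitory unit state
    for _ in range(int(t_end / dt)):
        re, ri = relu(xe), relu(xi)
        # Each excitatory unit excites itself and the shared inhibitory
        # unit; the inhibitory unit suppresses every excitatory unit.
        xe += dt / tau * (-xe + w_self * re - w_ie * ri + inputs)
        xi += dt / tau * (-xi + w_ei * re.sum())
    return relu(xe)

rates = simulate_wta(np.array([1.0, 0.8, 0.6]))
```

For these parameters the network is in a hard-WTA regime: at the fixed point only the most strongly driven unit remains active, at rate 1/1.8 ≈ 0.56.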

The set of excitatory neurons can be placed in a geometric (or topological) space, with a distance-based neighborhood function defining cooperative connections among the excitatory neurons. The WTA then behaves as an associative memory network, where fixed points shaped by the excitatory neighborhood function are placed in competition. In this regime, a “winner” is no longer a single neuron but a set of cooperating excitatory neurons. The autoassociative function of these networks brings desirable information processing properties such as noise rejection and analog signal restoration (Douglas & Martin, 2007). This network architecture has been proposed as a model for cortical computation by interpreting each excitatory neuron as representing a cortical column. For example, ring models of orientation tuning for primary visual cortex assign a different preferred orientation to each excitatory neuron (Douglas et al., 1994; Ben-Yishai, Bar-Or, & Sompolinsky, 1995; Somers, Nelson, & Sur, 1995). More elaborate models consist of multiple subnetworks, each spanning the full range of preferred orientations (a hypercolumn), with global competition within each hypercolumn and feedforward inhibition between competing hypercolumns (Lundqvist, Rehn, Djurfeldt, & Lansner, 2006; Lundqvist, Compte, & Lansner, 2010). Models for working memory in prefrontal cortex have been proposed using similar cooperative and competitive mechanisms (Amit & Brunel, 1995; Durstewitz, Kelc, & Güntürkün, 1999; Compte, Brunel, Goldman-Rakic, & Wang, 2000; Miller, Brody, Romo, & Wang, 2003; for a review, see Durstewitz, Seamans, & Sejnowski, 2000). In these models, a pattern of activity is embedded in the configuration of recurrent excitatory connections. This pattern of activity is then self-sustaining once activated by external input, and wide-ranging inhibitory feedback is used to ensure stability and robustness of the stored activity pattern.

Defining WTA models in these ways makes several assumptions about the anatomy and physiology of inhibition in columnar cortex. First, inhibitory projections are wide ranging in the WTA models described. Global inhibition can be softened by adopting a Mexican hat network connectivity profile, whereby a point in the network sends spatial inhibitory connections extending over a longer range than excitatory connections (Somers et al., 1995; Sperling, 1970) (see Figure 1A). However, inhibitory neurons in neocortex are mostly limited in their lateral extent, making projections either vertically between cortical layers or proximal to their somata (Lund, 1987; Lund & Wu, 1997; Lund & Yoshioka, 1991; Lund, Hawken, & Parker, 1988; DeFelipe, 2002; Douglas & Martin, 2004; Markram et al., 2004; Douglas & Martin, 2009). Some inhibitory neurons are coupled via electrical synapses called gap junctions, providing an effective excitatory coupling across an inhibitory population (Galarreta & Hestrin, 1999; Gibson, Beierlein, & Connors, 1999). This could theoretically serve to widen the effective spatial extent of inhibition. Unfortunately, the electrical connections are weak (Galarreta & Hestrin, 1999; Gibson et al., 1999), sparse compared with the number of chemical synapses made by a neuron (Fukuda, Kosaka, Singer, & Galuske, 2006), and mostly absent in adult animals (Connors, Bernardo, & Prince, 1983; Peinado, Yuste, & Katz, 1993). For these reasons, gap junctions cannot generally be relied on as a substrate for long-range spreading of inhibitory influences. Some models address this concern through long-range excitatory projections that selectively target inhibitory neurons (Li, 1998, 2002; Rutishauser, Slotine, & Douglas, 2012) or by including instantaneous disynaptic inhibition while neglecting inhibitory recurrence (Pinto & Ermentrout, 2001; Kang, Shelley, & Sompolinsky, 2003; Levy & Reyes, 2011). However, reconstructions of cortical neurons that engage in long-range excitatory projections do not reveal evidence for neuron-class-specific connections (Kisvárday et al., 1986; but see Bock et al., 2011).

Second, the physiology of inhibition is either untuned or broadly tuned in WTA models, so that inhibition is activated similarly by any input to cortex, a stance that is not supported by in vivo single-cell electrophysiology (Mariño et al., 2005).

Finally, in the simplest WTA networks, inhibitory neurons receive no external input. In columnar cortex, inhibitory neurons certainly receive input from outside the layer their soma resides in, from both other layers and other cortical and subcortical structures (Binzegger, Douglas, & Martin, 2004). There is no evidence that feedforward inputs specifically target excitatory or inhibitory classes (Freund, Martin, Somogyi, & Whitteridge, 1985; Freund, Martin, & Whitteridge, 1985; Anderson, Dehay, Friedlander, Martin, & Nelson, 1992).

To seriously consider competition as a canonical computational mechanism for cortex, this potential conflict between model assumptions and cortical anatomy must be resolved. When is it possible for two points in columnar cortex to be in competition? In this letter, we study this question in both very small networks that can be analyzed mathematically and in larger networks via simulations.

In section 2 we present linear-threshold network models for groups of two or three cortical columns and determine analytically the conditions under which two columns can be in competition through disynaptic or multisynaptic inhibition. In addition to examining these simple models that are tractable for direct analysis, we also present simulations in larger 1D and 2D models in section 3. The parameters in these models are designed to capture the anatomical issue of the relative extent of lateral excitatory and inhibitory projections. Through piecewise linear systems analysis of the tractable models, we obtain bounds on parameter regimes that permit disynaptic inhibitory competition between two cortical columns, and we then compare these results with the larger simulation models.

Surprisingly, we find that the presence and strength of cooperation or competition between two columns in a network is determined primarily by the direct excitatory and inhibitory coupling between the two columns, with indirect network effects only weakly modulating this direct cooperation or competition. We refer to this phenomenon as the direct coupling effect. Our results provide a simple intuitive rule of thumb for understanding cooperation and competition between two columns in a large network: that cooperative and competitive effects arise primarily from the direct influence of one column on another.

## 2. Analytical Models

### 2.1. A Cortical “Column.”

The concept of a cortical column is primarily functional (Mountcastle, Berman, & Davies, 1955). In cat, monkey, ferret, tree shrew, and many other higher mammals, neurons existing on a line perpendicular to the pia share many commonalities in their function. Aside from the canonical example of cat somatosensory cortex (Mountcastle et al., 1955), neurons in visual cortex exhibit this strong columnar organization by sharing the orientation preference of their vertically adjacent neighbors (Hubel & Wiesel, 1968). However, this fact should not be interpreted to mean that a “column” is an isolated unit, either functionally or anatomically. A lateral displacement of even the width of a neuron's soma is sufficient to record a measurable difference in orientation preference in visual cortex, implying that a functional column is about as small as it can possibly be (Hubel & Wiesel, 1968). Anatomically, projections from the neurons in a column are diffuse. Although many intrinsic cortical projections are made across laminae, they nevertheless span a horizontal distance much larger than the size of a single soma in columnar and rodent cortex (Weliky & Katz, 1994; Hellwig, 2000; Lund, Angelucci, & Bressloff, 2003; Thompson & Bannister, 2003; Holmgren, Harkany, Svennenfors, & Zilberter, 2003; Boucsein, Nawrot, Schnepel, & Aertsen, 2011). Input projections to cortex also do not treat single columns as independent entities; single input fibers projecting from the LGN cover large areas in primary visual cortex (Lund et al., 2003). The notable exception is rodent somatosensory cortex, where input fibers carrying information from single whiskers project to large, nonoverlapping regions within layer 4 known as barrels.

In this letter, we take a column to be a small region within a neocortical area of a higher mammal, of the minimum size such that the function of each column is homogeneous but that neighboring columns can have different functions. This allows us to simplify the neurons in a column to a small population of interacting excitatory and inhibitory units. However, our simulations incorporate the fact that single columns make lateral projections to a large number of neighboring columns and receive input from a similar large number of neighbors. The function of our column model is discrete, but the virtual anatomical inputs and outputs of our columns are highly overlapping.

### 2.2. Model Simplifications.

We assume that a column of cortical tissue can be reduced to a population of excitatory neurons and a population of inhibitory neurons. We model the average activity of these two classes with two linear-threshold units, which are known to be a good approximation to the I–F (current to firing rate) curve of an adapted cortical neuron (Ermentrout, 1998a). The differing proportions of excitatory and inhibitory neurons in cortex are modeled by a corrective factor applied to our synaptic weights. Although different neuron classes may have different time constants of activation, we will show that the possibility of competition is independent of these time constants.

We assume that neurons connect to each other based on opportunity and without bias, an assumption known as Peters’ rule (Peters, 1979; Braitenberg & Schüz, 1991). This implies that an excitatory projection to a point in cortex forms synapses with both excitatory and inhibitory neurons at that location, without preference for a particular neuron class. This is the most conservative assumption to make regarding neural connectivity. Although some specific connections are known to exist in cortex (Fairén & Valverde, 1980; Somogyi, Freund, & Cowey, 1982; Stepanyants, Tamás, & Chklovskii, 2004; Morishima, Morita, Kubota, & Kawaguchi, 2011), the majority of local and lateral connections do not show evidence of class-specific targeting (Kisvárday et al., 1986; Binzegger et al., 2004). We further assume that input to a cortical column targets both excitatory and inhibitory populations, without bias (Freund, Martin, Somogyi et al., 1985; Freund, Martin, & Whitteridge, 1985; Kisvárday et al., 1986; Anderson et al., 1992; Keller & Asanuma, 1993).

We assume that connections between columns in cortex are arranged predominantly spatially, such that the coupling strength between two points decreases monotonically with distance. This is of course not true for a single cortical neuron, but is a reasonable aggregate assumption based on Peters' rule (Binzegger et al., 2004; Perin, Berger, & Markram, 2011).

### 2.3. Basic Column Model.

The foundation of the analytical models presented here is a simplified version of a cortical column, consisting of a coupled pair of an excitatory and an inhibitory linear-threshold unit (Wilson & Cowan, 1973; Landsman, Neftci, & Muir, 2012; see Figure 2A). These units are designed to correspond in behavior to the average excitatory neuron and average inhibitory neuron in the small population of neurons within a single cortical column of very narrow width. The excitatory and inhibitory pair of units are assumed to exist at the same point on a cortical sheet, so that each unit has the same average self-connectivity as with the other unit of the pair. In this letter, when we refer to “self-excitation” and “self-inhibition,” we mean recurrent excitation within the population of neurons that is represented by a single excitatory or inhibitory unit.

Here *x*_{E} and *x*_{I} are the internal states of the excitatory and inhibitory units in the pair; [*x*]^{+} denotes the linear-threshold transfer function [*x*]^{+} = max(*x*, 0); and other parameters are as described in Table 1.

| Parameter | Description |
| :--- | :--- |
| *w*_{ER} | Recurrent synaptic weight from an excitatory unit to the units in the same column |
| *w*_{ECm} | Synaptic weight from an excitatory unit to the units in another cortical column *m* steps away |
| *w*_{IR} | Recurrent synaptic weight from an inhibitory unit to the units in the same column |
| *w*_{ICm} | Synaptic weight from an inhibitory unit to the units in another cortical column *m* steps away |
| | Time constant of unit *n* |
| | Activation gain of unit *n* |
| | Activation threshold of unit *n* |
| | External input current to column *n* |
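As a concrete sketch of these dynamics (the exact equations appear in appendix A; the functional form here assumes unit gain and zero threshold, and the weight and time-constant values are illustrative), the basic column can be integrated as follows. Both units receive identical drive, since they occupy the same point on the cortical sheet:

```python
def relu(x):
    return max(x, 0.0)

def column_fixed_point(w_er=1.5, w_ir=1.0, i_e=1.0, i_i=1.0,
                       tau_e=0.010, tau_i=0.005, dt=0.0005, t_end=1.0):
    """Euler-integrate tau * dx/dt = -x + w_ER*[x_E]+ - w_IR*[x_I]+ + i
    for the excitatory/inhibitory pair of a single column."""
    x_e, x_i = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        # Both units see the same recurrent drive (same cortical point).
        drive = w_er * relu(x_e) - w_ir * relu(x_i)
        x_e += dt / tau_e * (-x_e + drive + i_e)
        x_i += dt / tau_i * (-x_i + drive + i_i)
    return x_e, x_i

x_e, x_i = column_fixed_point()
```

With *w*_{ER} = 1.5 the excitatory unit would be unstable in isolation; the column is stabilized by its inhibitory partner, and both units settle to the same fixed-point activity.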

### 2.4. Summary of Analytical Method.

The details of our analysis are presented in appendix A. Briefly, we construct a set of differential equations embodying one of the columnar network models shown in Figure 2. Since the systems are piecewise-linear, a Jacobian of the system can be constructed for each linear partition in the state space defined by the activity of all units (Hahnloser, 1998b). The real parts of the eigenvalues and trace of the Jacobians determine when the system is stable in a bounded input-bounded output (BIBO) sense. The BIBO stability criterion guarantees that the system will not approach infinite activity for a finite input. For the simple systems shown in Figure 2, the set of eigenvalues can be described analytically. This allows constraints on each of the system parameters to be found that guarantee BIBO stability.
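The stability test can be sketched numerically for the two-column model of Figure 2B, in the partition where all four units are active (unit gains are assumed; the weight and time-constant values are illustrative):

```python
import numpy as np

def jacobian(w_er, w_ir, w_ec, w_ic, tau_e=0.01, tau_i=0.005):
    """Jacobian of the two-column model in the partition where all four
    units are active; state order (xE1, xI1, xE2, xI2), unit gains."""
    W = np.array([[w_er, -w_ir, w_ec, -w_ic],
                  [w_er, -w_ir, w_ec, -w_ic],
                  [w_ec, -w_ic, w_er, -w_ir],
                  [w_ec, -w_ic, w_er, -w_ir]])
    T = np.diag([1 / tau_e, 1 / tau_i, 1 / tau_e, 1 / tau_i])
    return T @ (W - np.eye(4))

def is_stable(J):
    """Stability requires every eigenvalue to have negative real part."""
    return bool(np.all(np.linalg.eigvals(J).real < 0))

stable = is_stable(jacobian(w_er=1.5, w_ir=1.0, w_ec=0.1, w_ic=0.3))
```

With sufficient recurrent inhibition (*w*_{IR}) the network is stable even though the excitatory self-weight exceeds unity; weakening the inhibitory feedback destabilizes it.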

To determine whether two columns in a model are in competition, we measure the increase or decrease in activity of the excitatory unit in column 2 produced by an increase in the input to column 1 (i.e., the partial derivative of the fixed-point activity of the excitatory unit in column 2 with respect to the input to column 1). The value of this partial derivative depends on the system parameters, including the weights between the two columns. When the partial derivative is negative, increasing the input to column 1 leads to a decrease in the activity of column 2 via disynaptic inhibition or other network effects. Due to the symmetric nature of our models, the same interaction would also occur in the reverse direction from column 2 to column 1. If increasing the input to either column decreases the activity of the other, we say the columns are in competition. Again, for our simple models, we can find closed analytical forms for the partial derivative and so can solve for simple conditions on each of the system parameters corresponding to competitive interactions.

By combining the conditions for BIBO stability and for competition, we can determine what parameter constraints ensure that a model operates in a stable winner-take-all (WTA) mode. Our method for evaluating competition operates on system fixed points and does not take into account transient modes. However, we also identify when a system is expected to operate in a nonoscillatory mode such that transient dynamics can be ignored.
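The competition measure can also be sketched numerically for the two-column model (state order *x*_{E1}, *x*_{I1}, *x*_{E2}, *x*_{I2}; unit gains, zero thresholds, and the weight values are our illustrative assumptions; the input step is delivered to both units of column 1):

```python
import numpy as np

def fixed_point(W, i_ext, steps=20000, dt=0.001):
    """Relax x' = -x + W [x]^+ + i_ext to its stable fixed point (Euler)."""
    x = np.zeros(len(i_ext))
    for _ in range(steps):
        x += dt * (-x + W @ np.maximum(x, 0.0) + i_ext)
    return x

def competition(W, i_ext, src=(0, 1), tgt=2, eps=1e-4):
    """Finite-difference estimate of the change in x_E2 per unit of extra
    input to column 1; a negative value indicates competition."""
    i_hi = i_ext.copy()
    i_hi[list(src)] += eps
    return (fixed_point(W, i_hi)[tgt] - fixed_point(W, i_ext)[tgt]) / eps

def two_column(w_er, w_ir, w_ec, w_ic):
    """Both units of a column receive identical incoming weights."""
    return np.array([[w_er, -w_ir, w_ec, -w_ic],
                     [w_er, -w_ir, w_ec, -w_ic],
                     [w_ec, -w_ic, w_er, -w_ir],
                     [w_ec, -w_ic, w_er, -w_ir]])

# Inhibition-dominated coupling between the columns -> competition:
d_inh = competition(two_column(1.5, 1.0, 0.1, 0.3), np.ones(4))
# Excitation-dominated coupling -> facilitation:
d_exc = competition(two_column(1.5, 1.0, 0.3, 0.1), np.ones(4))
```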

### 2.5. Two-Column Analytical Model.

Analysis of the two-column model is described in detail in section A.2. This model examines two points in a columnar cortical system in an abstract form, including only the direct excitatory and inhibitory connections between the two columns (see Figure 2B; *w*_{EC} and *w*_{IC}, respectively). More complex network interactions contributed by intermediate columns are excluded in this minimalistic model.

The question explored by the simple two-column model is this: When can two points in columnar cortex be in direct competition, disregarding network connectivity external to the columns in question? To answer that question, we examine the fixed-point solutions of the two-column model to determine its behavior and examine the Jacobian of the model network to determine its stability properties (see Figure 3). For the two columns to be in competition, an input given only to column 1 should reduce the activity of column 2, and this must occur in a network that is stable in a bounded-input, bounded-output sense (BIBO stability).

We found that two points in a columnar system can be in competition only when the inhibitory coupling *w*_{IC} between the two columns is stronger than the excitatory coupling *w*_{EC}. This is a strong result that does not depend on the thresholds for excitation and inhibition or on the time constants of excitation and inhibition (see section A.2). We found also that hard-WTA competitive behavior (i.e., one column is silenced by the other) can occur only for a certain range of input differentials between the two columns. This result implies that for non-saturating columnar systems, there is no parameter regime that guarantees hard-WTA operation regardless of the network input; networks operate in a soft- or hard-WTA regime depending on the difference in input between two columns. We also found that nonzero thresholds for excitation and inhibition cannot introduce or abolish competition. However, they can establish a memory state in an already competitive network. A network in this regime can maintain suprathreshold activity without input once a winner has been determined.

The two-column model described here ignores contributions from other columns across the cortical surface. We considered whether multisynaptic inhibition provided by intermediate columns could be strong enough to drive competition between two points by exploring more elaborate models that include intermediate columns, described in the following sections.
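This result can be verified numerically. Under simplifying assumptions (unit gains, zero thresholds, all four units active, and the input step delivered to both units of column 1), the fixed-point sensitivity is an entry of (I − W)^{−1}; the weight values below are illustrative:

```python
import numpy as np

def dxE2_di1(w_er, w_ir, w_ec, w_ic):
    """Fixed-point sensitivity of x_E2 to the input of column 1, in the
    partition where all four units are active."""
    W = np.array([[w_er, -w_ir, w_ec, -w_ic],
                  [w_er, -w_ir, w_ec, -w_ic],
                  [w_ec, -w_ic, w_er, -w_ir],
                  [w_ec, -w_ic, w_er, -w_ir]])
    e = np.array([1.0, 1.0, 0.0, 0.0])  # step to both units of column 1
    return np.linalg.solve(np.eye(4) - W, e)[2]

facilitation = dxE2_di1(1.5, 1.0, 0.2, 0.1)  # w_IC < w_EC: positive
competition = dxE2_di1(1.5, 1.0, 0.2, 0.3)   # w_IC > w_EC: negative
```

For these symmetric weights the sensitivity evaluates to (1/(1 + *w*_{IR} + *w*_{IC} − *w*_{ER} − *w*_{EC}) − 1/(1 + *w*_{IR} − *w*_{IC} − *w*_{ER} + *w*_{EC}))/2; when both denominators are positive, as stability requires, this is negative precisely when *w*_{IC} > *w*_{EC}, consistent with the analysis of section A.2.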

### 2.6. Three-Column Ring Analytical Model.

The two-column model neglects the effect of network interactions that might be mediated by additional columns. For example, competition between two distant points in a columnar system could be mediated by a third column placed at an intermediate location. We explored this possibility by designing networks containing three columns. The first such network had three columns arranged in a ring (see Figure 2C). The connections in the model are homogeneous, such that every column is equivalent. Competition in this network is sought between two of the three columns.

This model is analyzed in detail in section A.3. We found, just as for the two-column model described above, that competition can occur only when the direct inhibitory coupling between the two columns is stronger than the direct excitatory coupling. The third column cannot provide a sufficient indirect inhibitory contribution to mediate competition. We call this the direct coupling effect: the interaction between two columns is primarily determined by direct excitatory and inhibitory coupling.

However, since the three columns in the model examined here were arranged in a ring, it is possible that the direct excitatory and inhibitory connections between the two columns that should compete were unrealistically strong. We therefore examined another three-column model with the columns arranged in a line rather than a continuous ring.

### 2.7. Three-Column Chain Analytical Model.

The direct connections between two distant columns in cortex may be weak; certainly two proximal columns are expected to have stronger coupling than two distant columns. We examined a more general form of the three-column network, where three columns are arranged in a linear chain (see Figure 2D). Competition was sought between the columns at the two ends of the chain (edge columns). As for the previous model, analysis of this network indicated whether competition between two distant columns in cortex (represented by the edge columns) could be driven by the activity of an intermediate column (represented by the central column). The principal difference from the previous model was in the structure of the connections between the two edge columns. These columns shared symmetric mutual coupling weights (*w*_{EC2} and *w*_{IC2}), which were not constrained to be equal to the weights between the central and edge columns (*w*_{EC1} and *w*_{IC1}). The three-column chain model therefore approximated the physical arrangement of three equally spaced columns, such that the two edge columns were further apart and therefore more weakly connected.

This model is analyzed in detail in section A.4. Surprisingly, despite the potentially weaker coupling between the edge columns, the central column was still not able to drive competition between them. This appears unintuitive, but is caused by the assumption of homogeneous local connections between neighboring columns. For one edge column to indirectly inhibit the other, it must first activate the central column. This implies that the excitatory coupling from edge to center columns should be stronger than the inhibitory coupling. Likewise, since connections in cortex are assumed to be homogeneous, the connections from the center to both edge columns are then also dominated by excitation. This implies that driving an edge column will recruit both excitation and inhibition in the central column, but that driving the central column will also activate the edge columns. It is therefore not possible to indirectly activate the central column by driving an edge column and have a net suppressive effect on the opposite edge column.
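The argument above can be illustrated numerically. Under our simplifying assumptions (unit gains, zero thresholds, external input delivered to both units of a column), the two units of each column receive identical inputs and carry identical fixed-point perturbations, so each column reduces to a single effective unit with net couplings *n* = *w*_{E} − *w*_{I}. The edge-to-edge sensitivity is then an entry of (I − N)^{−1}, in which the two-synapse path through the central column appears as *n*_{1}·*n*_{1}, a square that can never be negative; only direct, inhibition-dominated edge-to-edge coupling can yield competition. The values below are illustrative:

```python
import numpy as np

def edge_sensitivity(n_r, n_1, n_2):
    """d(activity of edge column 3) / d(input to edge column 1) for the
    three-column chain, using net weights n = w_E - w_I per connection:
    n_r recurrent, n_1 centre-to-edge, n_2 edge-to-edge."""
    N = np.array([[n_r, n_1, n_2],
                  [n_1, n_r, n_1],
                  [n_2, n_1, n_r]])
    return np.linalg.solve(np.eye(3) - N, [1.0, 0.0, 0.0])[2]

# Excitation-dominated coupling to the centre cannot produce competition
# between the edges, however strong (within the stable range):
mediated = edge_sensitivity(n_r=0.0, n_1=0.4, n_2=0.0)
# Neither can inhibition-dominated coupling to the centre (the path
# gain is squared, so its sign is unchanged):
mediated_inh = edge_sensitivity(n_r=0.0, n_1=-0.4, n_2=0.0)
# Direct inhibition-dominated edge-to-edge coupling can:
direct = edge_sensitivity(n_r=0.0, n_1=0.1, n_2=-0.2)
```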

We also examined the conditions required for competition between an edge column and the center column. In this configuration, the two end columns could be positioned close together in cortical space with the central column equidistant (but far) from both end columns. Coupling between the edge columns could be arranged to be dominated by inhibition, with excitatory coupling between edge and center columns, which one might assume would lead to indirect competition between edge and center columns. However, for the indirect competition to outweigh direct excitation, direct inhibition between edge columns would have to be so strong that it would in fact lead to complete suppression of one edge column, thereby eliminating the effect. Thus the direct coupling effect applies also to this configuration; coupling between center and edge columns must be dominated by inhibition for competition to be present between them.

Once again, we must conclude from this analytical model that for competition to occur between two columns, we need consider only the direct column-to-column coupling, which must be dominated by inhibition.

## 3. Simulation Models

The simple analytical models we have described had only a few units and directly modeled at most three columns. The constraints for stability and competition were remarkably similar from the simplest to the most complex analytical model, implicating the direct excitatory and inhibitory coupling over multisynaptic network interactions. But how predictive are these simple models for a larger-scale 1D or 2D simulation composed of many columns, and with realistic spatial profiles of connectivity? The models discussed so far treated a cortical column as an isolated entity; the interactions between several columns were divorced from the remainder of a cortical area. We would like to understand how competition is mediated across a homogeneous cortical surface. We would also like to address the possibility that the summed effect of inhibition from many columns across a larger model might succeed in driving competition where a single intermediate column cannot.

To answer these questions, we simulated linear and two-dimensional models composed of columns with the same structure as the basic analytical column (see Figure 2A). In place of simple point-to-point connectivity, we introduced spatial profiles of synaptic connections based on gaussian fields (see Figures 4A and 4B) with synaptic parameters estimated from the experimental literature (see appendix B).
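A sketch of this construction follows; the σ values, pitch, and total synaptic strengths below are placeholders rather than the fitted estimates of appendix B:

```python
import numpy as np

def gaussian_weights(n_columns, sigma, total_strength, pitch=1.0):
    """Return an (n, n) matrix of distance-dependent coupling weights,
    with entry (i, j) the projection strength from column j to column i."""
    positions = np.arange(n_columns) * pitch
    dist = np.abs(positions[:, None] - positions[None, :])
    w = np.exp(-dist**2 / (2 * sigma**2))
    # Normalise each column's outgoing weights to the total strength.
    return total_strength * w / w.sum(axis=0, keepdims=True)

W_exc = gaussian_weights(360, sigma=5.0, total_strength=1.2)
W_inh = gaussian_weights(360, sigma=2.0, total_strength=1.8)
```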

### 3.1. Presence of Competition in Simulated Networks.

Here *w*_{Eji} and *w*_{Iji} are the excitatory and inhibitory projections from point *i* to point *j*, respectively. We find that the stability criteria given for our networks hold regardless of the spatial pattern of a stimulus. In other words, local columnar inhibitory feedback is able to stabilize local excitatory activity, even in the presence of wide-ranging excitatory input to a column in the model.

We examined the presence and absence of competition in these linear models by injecting a point excitatory stimulus into a single column of a quiescent network with stable, nonoscillatory dynamics (see section 2). Once the network reached the stable fixed point, we measured the net current arriving at each column in the network, provoked by the point stimulus passing through the entire network. Two locations are in competition if providing a positive input current to a source column results in a net suppressive effect on a target column, indicated by a net negative current arriving at the target column. Since the coupling patterns of our networks are homogeneous and symmetric, the effect of injecting a point excitatory stimulus is identical between any two locations on the network when using either location as the source. Therefore, locations across the network for which the effect of a point stimulus is to provide a net negative input current are in mutual competition with the stimulated column.
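The measurement procedure can be sketched as follows for a network combining lateral excitation with short-range inhibition; all parameter values are illustrative assumptions rather than the fitted values of Table 3:

```python
import numpy as np

def gaussian_weights(n, sigma, strength):
    d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    w = np.exp(-d**2 / (2 * sigma**2))
    return strength * w / w.sum(axis=0, keepdims=True)

def net_lateral_current(n=200, sigma_e=6.0, sigma_i=2.0,
                        g_e=0.9, g_i=1.5, src=100, dt=0.05, steps=4000):
    """Inject a point stimulus at column `src`, relax to the fixed point,
    and return the net recurrent current arriving at every column."""
    W_e = gaussian_weights(n, sigma_e, g_e)  # lateral excitation
    W_i = gaussian_weights(n, sigma_i, g_i)  # short-range inhibition
    i_ext = np.zeros(n)
    i_ext[src] = 1.0                         # point stimulus
    x_e = np.zeros(n)                        # one E and one I unit/column
    x_i = np.zeros(n)
    for _ in range(steps):                   # Euler relaxation
        drive = (W_e @ np.maximum(x_e, 0)
                 - W_i @ np.maximum(x_i, 0) + i_ext)
        x_e += dt * (-x_e + drive)
        x_i += dt * (-x_i + drive)
    # Net current contributed by the network itself (stimulus excluded);
    # negative entries mark columns in competition with the source.
    return W_e @ np.maximum(x_e, 0) - W_i @ np.maximum(x_i, 0)

net = net_lateral_current()
```

For these parameters, columns immediately neighbouring the stimulus receive net negative current (competition), while more distant columns receive weak net excitation; the competitive region is confined to the immediate neighbourhood of the stimulus.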

We designed linear networks with spatial profiles of lateral connectivity, combining lateral excitation with either local or lateral inhibition (see Figures 4A and 4C). Each network consisted of 360 columns spaced at a regular pitch. The spatial range of lateral excitation was identical for both models shown in Figure 4; the spatial range of inhibition was narrower than that of excitation for the network with local inhibition (see Figure 4A) and broader for the network with lateral inhibition. Total synaptic strength for each neuron was estimated to be realistic for cat visual cortex (see Table 3); synaptic coupling between columns was determined by the mean-field estimate under the assumption of gaussian connectivity profiles, normalized to the total estimated synaptic strength. Injecting excitatory input currents into single columns of these models produced regions across the networks that received net excitatory and inhibitory currents at steady state through the combined interactions of many columns of the networks.

We found that under realistic spatial profiles of lateral excitation and short-range inhibition (see Figures 4A and 4B), and under a Mexican hat arrangement with lateral inhibition (see Figures 4C and 4D), the direct coupling effect predicted a central region of competition that matched the simulation results to within the spatial resolution of the simulation. However, in the case of lateral excitation, a region of competition mediated by multicolumnar interactions emerged (asterisks and inset in Figure 4B). This competition occurred because a column activated by lateral excitation distant from the point stimulus can suppress activity locally through short-range inhibitory connections. However, since the gain of single synaptic connections is low, an effect relying on three or more synapses must also be comparatively weak. Under the realistic parameters simulated here, the scale of the multisynaptic effect was at least four orders of magnitude weaker than that produced by the direct coupling effect.

We performed equivalent experiments in two-dimensional networks with symmetric gaussian profiles of lateral excitation and inhibition, with other parameters identical to the one-dimensional models. The overall patterns of competition and facilitation were qualitatively the same as for the one-dimensional linear networks (not shown).

### 3.2. Accuracy of Analytical Predictions.

The direct coupling effect predicted competition for the particular weight parameters simulated in Figure 4. To determine how well the two-column analytical predictions hold for an arbitrary homogeneous model, we directly compared the numerical predictions between a linear model and our two-column model configured with identical coupling strengths. We simulated 2500 linear models with gaussian profiles of excitatory and inhibitory coupling (such as those shown in Figure 4), built with random and independent excitatory and inhibitory spatial ranges and total synaptic strengths as given in Table 3. Each model was composed of 400 columns (400 excitatory and 400 inhibitory units). We injected current into 50 pairs of columns in each model, taken in turn and spanning a range of spatial separations, and numerically computed the resulting activation fixed point. We then injected a step current into one column in the pair and numerically computed the partial derivative to measure the presence and strength of competition between the pair of columns, as for the analytical models described above (see section 2.4). We then reduced the linear model to a two-column configuration by removing all weights except those within and between the units in the pair of driven columns. The derivative was again computed numerically for the two-column model. Cases where the sign of the predicted and measured strengths of competition did not match indicated weight configurations where the analytical predictions did not hold.

Figure 5 shows the comparison between the strength of competition predicted under the two-column model and the strength of competition measured in the line model. The derivatives computed for the two-column model showed impressive predictive power for the line model: most prediction and measurement pairs lay close to a 45 degree line passing through the origin. A small gain factor difference between predicted and measured facilitation was apparent, due to the effect of recurrent amplification through network interactions in the line model. A very small proportion of simulated line models exhibited competition when the two-column model predicted facilitation, and vice versa (highlighted points in Figure 5). However, all mismatches between two-column and line model results occurred close to the origin, where interactions between the two tested columns were very weak.
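The reduction step can be sketched with an illustrative ring model using one effective unit per column (all parameters are assumptions, not the sampled values described in the text): all weights are removed except those within and between the pair of tested columns, and the steady-state sensitivities are compared.

```python
import numpy as np

def ring_profile(n_cols, sigma, total_strength):
    d = np.minimum(np.arange(n_cols), n_cols - np.arange(n_cols))
    w = np.exp(-d**2 / (2.0 * sigma**2))
    return total_strength * w / w.sum()

n = 360
kernel = ring_profile(n, 30.0, 0.9) - ring_profile(n, 5.0, 1.4)  # net coupling
W = np.stack([np.roll(kernel, k) for k in range(n)])
M = np.linalg.inv(np.eye(n) - W)      # steady-state sensitivity of the full model

def compare(c1, c2):
    """Sign of the steady-state interaction d x*[c1] / d i[c2] in the full
    model, versus the reduction keeping only the direct weights within and
    between columns c1 and c2 (all other weights removed)."""
    full = M[c1, c2]
    W_red = np.array([[W[c1, c1], W[c1, c2]],
                      [W[c2, c1], W[c2, c2]]])
    reduced = np.linalg.inv(np.eye(2) - W_red)[0, 1]
    return np.sign(full), np.sign(reduced)

# Adjacent columns: short-range inhibition dominates the direct coupling,
# and the reduced model predicts the sign of the full interaction
s_full, s_reduced = compare(0, 1)
```

Sign mismatches between `s_full` and `s_reduced`, when they occur at all, arise where the direct coupling is near zero and multicolumnar terms dominate.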

## 4. Discussion

We explored the possibility of competition between columns in simple models for columnar cortex that allow the relationship between competition and the spatial profiles of excitation and inhibition to be examined directly. Networks composed of up to three columns were analytically tractable and could be solved exactly. In this way we obtained closed-form constraints on the model parameters that permit competition to exist between two columns, which we found to involve only the direct lateral coupling between the columns. In a columnar model with homogeneous connectivity, the direct inhibitory coupling between two columns must be stronger than the direct excitatory coupling to permit competition to emerge.
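This condition can be checked numerically under a simplifying assumption of our own: each column's inhibitory unit exactly mirrors its excitatory unit at the fixed point, so disynaptic inhibition collapses onto the direct weights. All values are illustrative.

```python
import numpy as np

def two_column_derivative(w_ER, w_IR, w_EC, w_IC):
    """Steady-state sensitivity dE1*/di2 for a two-column model in which
    the inhibitory fixed point is assumed to satisfy x_I = x_E (a
    simplifying assumption, not the full model in the text).
    Fixed-point equations then read:
        E1 = w_ER*E1 + w_EC*E2 - w_IR*E1 - w_IC*E2 + i1   (and symmetrically)
    """
    alpha = 1.0 - w_ER + w_IR          # effective self-coupling
    beta = w_EC - w_IC                 # effective direct cross-coupling
    # [alpha, -beta; -beta, alpha] @ [E1, E2] = [i1, i2]
    A = np.array([[alpha, -beta], [-beta, alpha]])
    sens = np.linalg.inv(A)
    return sens[0, 1]                  # equals beta / (alpha**2 - beta**2)

# Lateral inhibition dominates the direct coupling -> competition
assert two_column_derivative(0.5, 0.3, w_EC=0.1, w_IC=0.4) < 0
# Lateral excitation dominates -> facilitation
assert two_column_derivative(0.5, 0.3, w_EC=0.4, w_IC=0.1) > 0
```

Under this assumption the sensitivity is beta / (alpha^2 - beta^2), so for a stable network (alpha > |beta|) its sign is the sign of w_EC - w_IC: competition requires w_IC > w_EC, regardless of the time constants.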

In the analyses described here, we found that our toy analytical models provided a great deal of insight into the behavior of larger systems that are not tractable for analysis. In particular, we found that the conditions for stability and competition are remarkably insensitive to the size of the analyzed model and continue to apply even in the context of increasingly complex network interactions (see also Landsman et al., 2012). Surprisingly, we found that the presence and strength of competition or cooperation between two columns was primarily determined by the direct excitatory and inhibitory coupling between those columns. We observed very slight deviations from our analytical expectations in one- and two-dimensional models. However, the deviations due to multicolumnar network interactions were considerably weaker than the direct coupling effects predicted by our analytical models. We therefore expect that in a biologically realistic network, or in cortex itself, the first-order direct coupling effects are likely to remain, while the small deviations from these effects are unlikely to be a significant factor in the face of the many noisy phenomena that influence a biological network.

We found the constraint relating inhibitory and excitatory coupling to be independent of the time constants and thresholds of excitatory and inhibitory elements in a network. However, positive excitatory thresholds introduce a subtractive influence on the fixed point of a network. This can introduce the appearance of competition if the internal state of the network is not accessible and the output firing rate gains *g*_{1} and *g*_{2} are instead used to evaluate the presence of competition. If the two columns are driven with unequal inputs, a subtractive threshold will result in the gains *g*_{1} and *g*_{2} being unequal, even if the corresponding partial derivatives are equal. The difference in gains does not indicate the presence of competition through recurrent network interactions in this case, and the ratio *g*_{1}/*g*_{2} will converge to 1 as the overall strength of input increases. Illusory competition can also occur if the inputs to the network are appropriately structured. For example, Mexican hat–shaped input can induce lateral cooperative and competitive interactions in a network without lateral inhibition (Linsker, 1986).

Increasing the length of the inhibitory time constant can lead to oscillatory dynamics (Wilson & Cowan, 1973; Hahnloser, 1998a; Tang & Tan, 2005; Landsman et al., 2012). This does not change whether the fixed points of the network express competition between columns, but can cause the network to oscillate around the fixed point. In this case, the fixed point will not be informative of the dynamics of the network and may not accurately reflect the relationship between the activity of two columns.

### 4.1. Implications for Cortical Models.

Our results show that the possibility and lateral extent of disynaptic competition in cortical field models with homogeneous, nonspecific connectivity is accurately predicted by the direct difference between the spatial profiles of excitation and inhibition emerging from a point. The predictions for lateral excitation and inhibition architectures are illustrated in Figure 6. Classical lateral-inhibition architectures produce an annulus of competition surrounding a core of facilitation, depending on the relative strengths of the excitatory and inhibitory components (see Figures 6A–6C). This mechanism has been used via lateral-inhibition neighborhood functions in developmental models of cortical areas to provide local spatial grouping of function and medium-range decorrelation of function, and to therefore reproduce some of the form of functional maps in visual cortex (von der Malsburg, 1973; Swindale, 1982; Grabska-Barwińska & von der Malsburg, 2008; Antolík & Bednar, 2011; Plebe, 2012). The same mechanism can be used to describe pattern formation during ongoing activity in columnar cortex (Ernst, Pawelzik, Sahar-Pikielny, & Tsodyks, 2001; Pinto & Ermentrout, 2001; Blumenfeld, Bibitchkov, & Tsodyks, 2006; Baker & Cowan, 2009).
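The direct coupling profile underlying these predictions is simply the difference of the excitatory and inhibitory spatial profiles emerging from a point. A sketch with illustrative strengths and ranges (our assumptions, not measured values) reproduces both regimes:

```python
import numpy as np

def direct_coupling(d, g_E, sigma_E, g_I, sigma_I):
    """Net direct (monosynaptic) coupling at cortical distance d:
    normalized gaussian excitation minus normalized gaussian inhibition."""
    exc = g_E * np.exp(-d**2 / (2 * sigma_E**2)) / (np.sqrt(2 * np.pi) * sigma_E)
    inh = g_I * np.exp(-d**2 / (2 * sigma_I**2)) / (np.sqrt(2 * np.pi) * sigma_I)
    return exc - inh

d = np.linspace(0, 200, 2001)   # cortical distance (arbitrary units, illustrative)

# Classical Mexican hat: narrow excitation, broad inhibition
mexican_hat = direct_coupling(d, g_E=1.0, sigma_E=25.0, g_I=0.8, sigma_I=75.0)

# Measured-like architecture: broad excitation, strong short-range inhibition
realistic = direct_coupling(d, g_E=1.0, sigma_E=75.0, g_I=1.2, sigma_I=25.0)

# Mexican hat: facilitatory core surrounded by an annulus of competition;
# realistic profile: competitive core narrower than the inhibitory range,
# surrounded by facilitation
```

Evaluating the sign of these profiles as a function of distance gives exactly the two regimes of Figure 6: a competitive annulus for the Mexican hat, and a narrow competitive core for the realistic arrangement.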

Broadly tuned or untuned inhibitory feedback has also been used in abstract competitive models to explain the intracortical emergence of sharp orientation tuning in primary visual cortex (Douglas et al., 1994; Ben-Yishai et al., 1995; Somers et al., 1995; Li, 1998, 2002). If we are to interpret these models as applying to columnar visual cortex (e.g., cat, tree shrew, ferret, monkey), where orientation is smoothly mapped to space across the surface of area 17, then these models require competition over long distances across the cortical surface.

In contrast, the extent of competition in lateral-excitation models is expected to be even narrower than the range of local inhibition (see Figure 6F). Note that this is true in spite of the presence of widespread disynaptic inhibition in the models. Since the measured cortical architecture appears to be of this type, our results raise serious questions for all cortical models that rely on lateral or global inhibition.

Our results do not mathematically prohibit competition between columns in cortex. The deviations from the direct coupling effect in the models we examined were extremely weak, but it is certainly possible to hard-wire a model with arbitrary and asymmetric connections between columns to provide multicolumnar competition. Our models examine the expectation for homogeneous and symmetric cortical networks, reflecting the minimal assumption of opportunistic connectivity between neurons. Our results show that the baseline expectation for competition in cortex can be estimated from the direct coupling between points in cortex. Searching for competition in cortex must therefore be a search for deviations from nonspecific, homogeneous, and symmetric connectivity.

Accordingly, we considered whether effective lateral inhibitory profiles (and thereby lateral competition) might be obtained in a network with lateral excitatory projections and only local inhibitory projections, through specificity in where on the axonal and dendritic trees synaptic connections are formed. For example, synapses on an inhibitory axonal arbor that are distal to the soma of the source neuron might be biased to contact the distal dendritic segments of their targets (see Figures 7A–7C). This effectively widens the spatial range of inhibition without requiring long-range inhibitory projections, and under the direct coupling effect therefore permits lateral competition to occur (see Figure 7C). The opposite mode of synapse location specificity would also support lateral competition (see Figures 7D–7F). This hypothesis is consistent with the assumptions made for our analytical models, with the known spatial ranges of excitatory and inhibitory axonal projections, and with the absence of neuron class projection bias described in the literature (Kisvárday et al., 1986).

The question of dendritic location specificity is difficult to tackle experimentally, and so it has been only sparsely examined. In the mammalian hippocampus, both long-range and local projections are laminar-specific, which, owing to the highly ordered radial arrangement of pyramidal and granule cell dendrites, implies that individual pathways are highly selective for particular dendritic (and somatic) domains (Blackstad, 1956, 1958; Ribak & Seress, 1983; Soriano & Frotscher, 1989; Han, Buhl, Lörinczi, & Somogyi, 1993; Deller, Martinez, Nitsch, & Frotscher, 1996). Lamination is also a striking feature of the neocortex, and there is some evidence that afferent projections to cortex are also laminar-specific. Petreanu and colleagues investigated whether individual pathways targeting rodent barrel cortex, arising from other cortical areas and subcortical structures, formed synapses on specific dendritic segments of excitatory neurons (Petreanu, Mao, Sternson, & Svoboda, 2009). They found that long-range projections to neurons in layers 2, 3, and 5 targeted specific dendritic domains ranging in depth from basal to apical dendrites. In contrast, local excitatory projections from layers 2 and 3 to neurons in layer 5 did not show a preference for a particular dendritic location. However, the results in hippocampus and barrel cortex do not measure preference for lateral dendritic location of the form we discussed in Figure 7, but only for vertical dendritic location within a cortical column. In principle, the experimental technique of Petreanu and colleagues could be applied to explore lateral dendritic specificity, but this remains an unexplored hypothesis.

### 4.2. Intracolumnar Competition.

Our results indicate that while lateral competition is difficult to justify in columnar cortical architectures, competition could nevertheless occur between neighboring cortical columns over short distances (see Figures 4A, 4B, and 6D, 6F). Within a single column, the machinery required for competition—recurrent excitatory and inhibitory connections—is readily available without making unreasonable assumptions about the cortical architecture. Indeed, responses of neighboring neurons in cat visual cortex are highly decorrelated, over and above what is expected from differences in their respective receptive fields (Yen, Baker, & Gray, 2007; Tolhurst, Smyth, & Thompson, 2009; Ecker et al., 2010; Martin & Schröder, 2013). This surprising lack of correlation between neurons with similar orientation preference and similar retinotopic location could occur through local competition between neurons within a cortical column. Decorrelation of neurons that receive similar inputs would increase the information coding capacity of single neurons and populations in cortex (Shamir & Sompolinsky, 2004; Averbeck, Latham, & Pouget, 2006).

Some existing models for learning receptive field properties in cortex are defined without an explicit mapping to cortical space, but are nevertheless compatible with the concept of strong competition within a column of visual cortex (Olshausen & Field, 1996; Bell & Sejnowski, 1997; Perrinet, 2004; Rehn & Sommer, 2007). These models seek to learn maximally sparse cortical representations by providing negative feedback between neurons with similar receptive fields. Neurons in strongest competition would therefore represent similar locations and preferred orientations in visual space, and consequently map to similar locations in cortical space.

Recent work exploring competition and information processing in non-columnar (mouse visual) cortex (Muir, Molina-Luna, Helmchen, & Kampa, 2014), competition and learning within local populations (Jug, Cook, & Steger, 2012), and the dynamics of cortical columns with local inhibition (Landsman et al., 2012) shows that local excitatory connectivity can provide a rich repertoire of complex dynamics and competitive behavior for information processing in cortex.

## Appendix A: Detailed Analysis

### A.1. Analytical System Definition.

The networks analyzed here are composed of linear-threshold units, with dynamics

**T** d**x**/dt = −**x** + [**W**(**a** ⊙ **x**) + **i** − **v**]^{+}.    (A.1)

Here *x*_{n} is the activation of unit *n*; τ_{n} is the time constant of unit *n*; **W** is the matrix composed of the individual weights *w*_{ij} of the network; **a** is the vector of activation gains of the network; **v** is the vector of activation thresholds; and *i*_{n} is the current injected into unit *n*.

In this notation, [*x*]^{+} is the linear-threshold transfer function given by [*x*]^{+} = max(*x*, 0), and **a** ⊙ **b** denotes the element-wise product of the vectors **a** and **b**. The activation gains can be absorbed into the weights arising from each unit without loss of generality; for further analysis, we take **a** = **1** and omit the vector **a** from equation A.1. All parameters except the weights **W** are constrained to be non-negative. The definition of all parameters is given in Table 2.

| Symbol | Definition |
|---|---|
| τ_{n} | Time constant for unit *n* |
| **T** | Matrix composed of all network time constants |
| *x*_{n} | Activation value of unit *n* |
| **x** | Vector composed of unit activations *x*_{n} |
| *a*_{n} | Activation gain of unit *n* (slope of the linear-threshold transfer function) |
| **a** | Vector composed of unit gain factors |
| *v*_{n} | Activation threshold of unit *n* |
| **v** | Vector composed of unit activation thresholds |
| *w*_{ij} | Synaptic weight from unit *j* to unit *i* |
| **W** | Matrix composed of all network weights *w*_{ij} |
| **W**^{+} | Matrix composed of network weights, with rows and columns corresponding to inactive units set to zero, that is, the weight matrix corresponding to the active network partition |
| *i*_{n} | External instantaneous input current injected into unit *n* |
| **J**^{+} | Jacobian of the system for the active network partition |
| Part[*p*] | Nomenclature for referring to a particular partition *p* of the network, where *p* is a Boolean vector indicating which columns of the network are active in the partition |
| Λ^{*p*} | Set of eigenvalues of the system Jacobian **J**, in partition *p* |
| *N* | Number of units in the network |
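The linear-threshold dynamics defined above can be simulated directly. The following is a minimal Euler-integration sketch of the **a** = **1** form; the particular weights and time constants are illustrative assumptions:

```python
import numpy as np

def simulate(W, tau, i_ext, v, dt=1e-3, t_max=2.0):
    """Euler integration of the linear-threshold network
        tau_n dx_n/dt = -x_n + [sum_j w_nj x_j + i_n - v_n]^+
    (gains absorbed into W, as in the text)."""
    x = np.zeros(W.shape[0])
    for _ in range(int(round(t_max / dt))):
        drive = W @ x + i_ext - v
        x += dt / tau * (-x + np.maximum(drive, 0.0))
    return x

# Single E-I pair: recurrent excitation stabilized by feedback inhibition
W = np.array([[0.5, -1.0],      # E <- E, E <- I
              [1.0,  0.0]])     # I <- E
tau = np.array([0.01, 0.01])    # 10 ms time constants (illustrative)
x_ss = simulate(W, tau, i_ext=np.array([1.0, 0.0]), v=np.zeros(2))
# Fixed point: x_E = 0.5 x_E - x_I + 1 and x_I = x_E  =>  x_E = x_I = 2/3
```

The simulated steady state converges to the analytical fixed point of the active partition, which for these weights is *x*_{E} = *x*_{I} = 2/3.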


Here **I** is the identity matrix; **a** ⊘ **b** denotes element-wise division of the matrices **a** and **b**; **W**^{+} is the network weight matrix, with rows and columns corresponding to inactive units set to zero; and **T** is the square matrix composed of all unit time constants. A partition is stable in the bounded-input bounded-output (BIBO) sense when the eigenvalues of **J**^{+} have no positive real components. Note that the full system can have a mixture of stable and unstable partitions and that the system can be globally stable if all unstable partitions result in a transition to stable partitions (Hahnloser, 1998a). Partitions that contain large eigenvalues with complex components have oscillatory dynamics, which lead to either stable or unstable spirals depending on the magnitude of the real component of the eigenvalues.

### A.2. Stability and Behavior of Two Columns.

The activations of the excitatory and inhibitory units in each column are denoted *x*_{En} and *x*_{In}, respectively, where *n* is the column number. The time constants for the system are defined by the class of each unit, with a class time constant τ_{E} for the excitatory units and another, τ_{I}, for the inhibitory units. Activation thresholds are similarly defined by class. The pair of units in a column receive a common input *i*_{n}. The system weights are as shown in Figure 2B. In this work, partitions are denoted by superscripts indicating which columns of the network are active. For example, Λ^{11} denotes the set of eigenvalues in the network partition when columns 1 and 2 have nonzero activity. The network partition itself is denoted Part[11]. Sets of equations that apply to a given partition are grouped with a vertical line.

#### A.2.1. Simplifying Substitutions.

#### A.2.2. Zero Thresholds; Equal Time Constants.

In many of the conditions for stability that follow, the constraint *w*_{ER} < 1 + *w*_{IR} (or similar) appears often. With all other weights set to zero and the excitatory gain equal to 1, a value of *w*_{ER} = 1 implies that if the excitatory unit has a net activity of *r*, then the recurrent excitatory input current supplied back to the same excitatory unit will also be *r*. In other words, the open-loop gain of the recurrent excitatory connection is unitary. If *w*_{ER} > 1, implying that the open-loop gain of the recurrent excitatory connection is greater than unitary, it is easy to see that the activity of the excitatory unit will grow without bound (in the absence of any network stability mechanism such as recurrent inhibition, or single-unit stability mechanism such as a saturating transfer function). If *w*_{ER} < 1, the open-loop gain of the recurrent excitatory connection is less than 1, implying that for an activity of *r*, the recurrent excitatory input will be less than *r*.

Note that we generally ignore the partition where all columns are switched off (Part[00], or Part[000] for a three-column network). This partition is never unstable (under the reasonable assumption of bounded weights), never oscillatory, cannot exhibit competitive behavior, and is guaranteed to transition to another partition for inputs greater than the excitatory threshold.
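The open-loop gain argument of section A.2.2 can be made concrete with a one-unit example: at the fixed point *x* = *w*_{ER} *x* + *i*, so *x** = *i*/(1 − *w*_{ER}), which diverges as *w*_{ER} approaches 1 (illustrative values below):

```python
def recurrent_fixed_point(w_ER, i):
    """Fixed point of a single self-exciting linear unit: x = w_ER * x + i.
    For open-loop gain w_ER >= 1 no bounded fixed point exists."""
    if w_ER >= 1.0:
        raise ValueError("open-loop gain >= 1: activity grows without bound")
    return i / (1.0 - w_ER)

assert recurrent_fixed_point(0.5, 1.0) == 2.0    # gain below 1: bounded
assert recurrent_fixed_point(0.75, 1.0) == 4.0   # approaching instability
```

The fixed-point activity grows hyperbolically in *w*_{ER}, which is why recurrent inhibition (or a saturating transfer function) is needed once the open-loop gain reaches unity.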

The first factor in equation A.11 is required for the network to be globally stable. The second factor, however, speaks directly to the possibility of lateral competition in this simple network, since it reduces to *w*_{IC} > *w*_{EC}. The direct inhibitory connection between the two columns, *w*_{IC}, must be stronger than the corresponding excitatory connection, *w*_{EC}. This is a strong result, since the derivatives of the fixed points (see equation A.9) do not depend on the respective time constants of inhibition and excitation. For any combination of τ_{E} and τ_{I}, differential amplification can occur only if the network is dominated by lateral inhibition.

#### A.2.3. Hard Winner-Take-All Behavior.

#### A.2.4. Gain of the Winning Column.

#### A.2.5. Unequal Time Constants.

The time constants of excitation and inhibition have no effect on the fixed points of the two-column network. However, the fixed points are a useful description of the network activity only to the extent that they help to predict the response of the system to a given input. Depending on the network parameters, the fixed points can be exponentially unstable (if the system has unbounded behavior) or can provide a focus around which the network activity oscillates.

In equation A.15, *a* = 1 + *w*_{ER}(*w*_{IR} − 1) + *w*_{IR}. The constraint in equation A.15 requires that τ_{I} is longer than τ_{E}, since the factor in the first term is always larger than 1 as long as *w*_{ER} < 1 + *w*_{IR} (a general constraint for stability similar to that given in equation A.7). If *w*_{ER} = 1, then the relationship constraining τ_{E} and τ_{I} is given by a simpler form, which likewise constrains τ_{I} to be longer than τ_{E} for oscillatory dynamics.

Unfortunately, the simplifying substitutions of *r* and *s* (see equation A.5) do not help here. The system is oscillatory when either of *r* or *s* becomes negative. The parameter constraints obtained by expanding these inequalities have a similar form to equation A.15, but are long and complicated and are not included here. Nevertheless, for the two-column network as for the single column, τ_{I} must be longer than τ_{E} for oscillatory dynamics to be present.

#### A.2.6. Nonzero Thresholds.

The fixed points in equation A.18 show that the response of the network contains a component that depends on the thresholds of excitation and inhibition but not on the input, and a separate component that depends on the input but not on the thresholds. Therefore, thresholds for excitation and inhibition that are identical between the two columns can modify the fixed points only in a manner independent of the input to the network. This implies that the partial derivatives in both partitions are independent of the activation thresholds, and in fact they have the same form as in equation A.9. Setting a nonzero threshold for either excitation or inhibition therefore has no effect on the existence of competition between columns.

#### A.2.7. Memory State with Nonzero Thresholds.

Winner-take-all networks can support the existence of a memory state, in which activity persists in the absence of external input (Rutishauser & Douglas, 2009). The stability and configuration of this memory state can be explored by examining the steady-state network activity equations, with the input terms set to zero. For the two-column network presented here, equation A.18 reveals that in Part[11] the common-mode term *f* will completely determine the network response. If *f* is positive, a stable memory state will exist; however, this memory state is identical for both columns, and so the activity of both columns will become equal. If *f* is negative, the memory state in Part[11] is unstable, and one or both columns will become inactive.

For the memory state to operate in a competitive switchable mode, where the activity in the network can be nudged from one column to the other, the two-column model must be able to operate in a hard-WTA regime in the absence of input. This is unrelated to the condition in section A.2.3, which applies for nonzero input. For the memory state to be stable, the steady-state solutions given in equation A.18 for Part[10] must be positive for the winning column (assumed to be column 1) and negative for the losing column (column 2).

### A.3. Stability and Behavior of a Three-Column Ring.

Competition between columns 1 and 2 again requires *w*_{IC} > *w*_{EC}. Perhaps surprisingly, the third column cannot mediate competition between columns 1 and 2 by providing disynaptic inhibition. For competition to occur, the direct inhibitory coupling between columns 1 and 2 must be stronger than the direct excitatory coupling.

#### A.3.1. Oscillatory Behavior.

The three-column ring has oscillatory dynamics if any of the roots *r*_{1} through *r*_{3} from the system eigenvalues (see equation A.20) are negative. The full parameter bounds are not included here, but oscillatory dynamics are again possible only if the inhibitory time constant τ_{I} is longer than the excitatory time constant τ_{E}.

### A.4. Stability and Behavior of a Three-Column Chain.


The eigenvalues for Part[111] above are given only for a simplifying case, as the general case is overly complex. The eigenvalues for Part[101] are identical to those for Part[110], but depend on the weights *w*_{EC2} and *w*_{IC2} in place of *w*_{EC1} and *w*_{IC1}, respectively.

We find that once again, for competition to occur between columns 1 and 3, the direct inhibitory coupling between those columns must be stronger than the direct excitatory coupling: *w*_{IC2} > *w*_{EC2}.

Similarly, coupling between columns 1 and 2 must be dominated by inhibition for competition to occur between those columns. A coupling regime that supports competition in Part[111] requires the network dynamics in that partition to be unstable, leading to a transition to another partition where one edge column is inactive, thus removing the possible effect of indirect competition mediated by that column.

## Appendix B: Parameters for the Simulation Models

| No. | Value | Formula | Estimate | Units | References |
|---|---|---|---|---|---|
| 1 | Prop. of excitatory neurons | | 0.82 | (Proportion) | Gabbott & Somogyi, 1986; Martin & Whitteridge, 1984 |
| 2 | Input to pyramidal cell (total) | | 7000 | Synapses | Binzegger et al., 2004 |
| 3 | (exc. synapses) | | 5740 | Synapses | |
| 4 | (inh. synapses) | | 1260 | Synapses | |
| 5 | (exc. from other L2/3 pyr.) | | 3500 | Synapses | Binzegger et al., 2004 |
| 6 | Input to basket (inh.) cell (total) | | 4000 | Synapses | Binzegger et al., 2004 |
| 7 | (exc. synapses) | | 3280 | Synapses | |
| 8 | (inh. synapses) | | 720 | Synapses | |
| 9 | Synapses per L2/3 pyr. cell axon | | 5000 | Synapses | Binzegger et al., 2004 |
| 10 | Synapses per basket cell axon | | 4200 | Synapses | Binzegger et al., 2004 |
| 11 | L2/3 pyr. local/total boutons | | 0.5 | (Proportion) | Binzegger et al., 2007 |
| 12 | Average spontaneous firing rate | | 7.56 | Hz | Noda, Freeman Jr., Gies, & Creutzfeldt, 1971 |
| 13 | Exc. spikes per pC input | | 0.066 | spikes/pC | Ahmed, Anderson, Douglas, Martin, & Whitteridge, 1998 |
| 14 | Inh. spikes per pC input | | 0.310 | spikes/pC | Nowak et al., 2003 |
| 15 | Exc. PSP charge | | 0.1 | pC/spike | Binzegger et al., 2009 |
| 16 | Inh. PSP charge (basket) | | 0.365 | pC/spike | Binzegger et al., 2009 |
| 17 | Syn. release probability | | 0.1 | (Probability) | Binzegger et al., 2009 |
| 18 | Exc. synapse strength per syn. | | 0.01 | pC/spike/syn. | |
| 19 | Inh. synapse strength per syn. | | 0.0365 | pC/spike/syn. | |
| 20 | Exc. gain multiplier per syn. | | | pC/pC/syn. | |
| 21 | Inh. gain multiplier | | | pC/pC/syn. | |
| 22 | Inh. syn. strength delta (est.) | R21/R20 | 17.14 | (Proportion) | |
| 23 | Inh. syn. strength delta | | 10.00 | (Proportion) | Binzegger et al., 2009 |
| | *Spontaneous input* | | | | |
| 24 | Exc. spikes into L2/3 pyr. cell | | 4339 | Hz | |
| 25 | Inh. spikes into L2/3 pyr. cell | | 953 | Hz | |
| 26 | Exc. spikes into basket cell | | 2480 | Hz | |
| 27 | Inh. spikes into basket cell | | 544 | Hz | |
| | *Estimated effective lumped output weights* | | | | |
| 28 | L2/3 pyr. cell | | 2.71 | pC/pC | |
| 29 | Basket cell (delta) | | 4.99 | pC/pC | |
| 30 | Basket cell (delta est.) | | 8.55 | pC/pC | |


Note: exc: excitatory; inh: inhibitory; prop: proportion; pyr: pyramidal; syn: synapses.

## Acknowledgments

We gratefully acknowledge Tom Binzegger, Kevan Martin and Rodney Douglas for providing the data in Figure 1 (Binzegger et al., 2007). We thank Rodney Douglas for spurring this work on in its early stages. We also thank the participants of the Winner-Take-All and Neural Computation work groups at the Capo Caccia meeting (http://capocaccia.ethz.ch), who provided a stimulating environment for discussion of this work and on cortical computation in general. This work was funded by a John Crampton Travelling Fellowship to D.R.M., by the European Commission (FP6-2005-015803 DAISY), by the Velux Stiftung, and by CSN fellowships to D.R.M.

## References

*X*- and *Y*-type thalamic afferents. I. Arborization patterns and quantitative distribution of postsynaptic elements.

*X*- and *Y*-type thalamic afferents. II. Identification of postsynaptic targets by GABA immunocytochemistry and Golgi impregnation.

## Author notes

D.R.M. is now at Biozentrum, University of Basel, Basel, Switzerland.