Pattern forming mechanisms of color vision

Abstract While our understanding of the way single neurons process chromatic stimuli in the early visual pathway has advanced significantly in recent years, we do not yet know how these cells interact to form stable representations of hue. Drawing on physiological studies, we offer a dynamical model of how the primary visual cortex tunes for color, hinging on intracortical interactions and emergent network effects. After detailing the evolution of network activity through analytical and numerical approaches, we discuss the effects of the model's cortical parameters on the selectivity of the tuning curves. In particular, we explore the role of the model's thresholding nonlinearity in enhancing hue selectivity by expanding the region of stability, allowing for the precise encoding of chromatic stimuli in early vision. Finally, in the absence of a stimulus, the model is capable of explaining hallucinatory color perception via a Turing-like mechanism of biological pattern formation.


INTRODUCTION
Our experience of color begins in the early visual pathway, where, from the moment light strikes the retina, cone-specific neuronal responses set off the mechanisms by which the photons' chromatic information is converted to the hues we ultimately see. While color vision scientists agree that the single-cell processing of chromatic stimuli occurs along the two independent cone-opponent L-M and S-(L+M) pathways (Conway, Eskew, Martin, & Stockman, 2018; Kaiser & Boynton, 1996), there is as yet no consensus as to how and where the divergent signals are synthesized to encode hue. To complicate matters, cone-opponency, observed in electrophysiological recordings of single neurons (Shapley & Hawken, 2011), is often confounded with hue-opponency, a phenomenon first theorized by Ewald Hering in the 19th century and later mapped out in clinical studies by Jameson and Hurvich (De Valois, Cottaris, Elfar, Mahon, & Wilson, 2000; Jameson & Hurvich, 1955; Shevell & Martin, 2017).
Best depicted in the Derrington-Krauskopf-Lennie (DKL) stimulus space (Figure 1), cone-opponency predicts that neurons tuned to either the L-M or S-(L+M) pathway will not respond to light whose wavelengths isolate the other (Derrington, Krauskopf, & Lennie, 1984). It is tempting to equate these null responses to the four unique hues of color-opponent theory, in which unique blue, for example, is observed when the "redness" and "greenness" of a perceived color exactly cancel. But the wavelengths of the unique hues specified by perceptual studies (Jameson & Hurvich, 1955) only roughly match the wavelengths isolating either cone-opponent pathway (Wool et al., 2015; Wuerger, Atkinson, & Cropper, 2005; Xiao, 2014), and, more fundamentally, we do not yet understand the mechanisms behind the processing that the analogy implies (Mollon & Jordan, 1997; Stoughton & Conway, 2008; Valberg, 2001). That is, how do we get from the single neurons' chromatic responses to our perception of color?
Several authors have addressed these questions through combinatorial models that parameterize the weights of the L, M, and S cones contributing to successive stages of processing (De Valois & De Valois, 1993; Gegenfurtner & Ennis, 2015; Judd, 1949; Mehrani et al., 2020; Stockman & Brainard, 2010). Though differing in their assumptions of modularity, the theories share a mechanistic framework for the transition of single-cell receptive field properties (Brown, 2014). Starting with cells in the retina and lateral geniculate nucleus (LGN) known to be tuned broadly to the cone-opponent axes, these proposed mechanisms build up to cells in various cortical areas more narrowly tuned to divergent (and debated) chromatic directions in DKL space. While parsimonious, this formalism comes at the cost of tuning the cone weights arbitrarily, disregarding specific properties of real neurons' receptive fields (Eskew, 2009; Kaiser & Boynton, 1996; Stockman & Brainard, 2010). Furthermore, the linear combinatorial mechanism is not, on its own, able to account for the variety of color cells observed in the visual cortex (Garg et al., 2019; Johnson, Hawken, & Shapley, 2001; Shapley & Hawken, 2011). In addition to the forward flow of chromatic information through the successive stages of processing, the encoding of color reflects the neuronal dynamics within each stage. Modelers agree that the next forays into a mechanistic theory of color vision should consider these intracortical circuits, but disagree about where such interactions first become important (De Valois & De Valois, 1993; Hanazawa, Komatsu, & Murakami, 2000; Liu et al., 2020; Wachtler, Sejnowski, & Albright, 2003).

Figure 1. The DKL space maps chromatic stimuli onto a circle with two "cardinal" axes representing the L-M and S-(L+M) pathways. The excitatory or inhibitory effect of a stimulus on cone-opponent cells tuned to either pathway can be thought of as a projection of its location in DKL space onto the relevant axis. Stimuli isolating the two pathways correspond roughly to wavelengths associated with the red, green, blue, and yellow unique hues of color-opponent theory, leading to the common, but mistaken, conflation of chromatic stimulus and color perception.

Stimulus space: A geometrical construct in which chromatic stimuli are represented by the relative cone responses they yield.

Color-opponent theory: Theory of color appearance that postulates that the four unique hues (red, green, blue, and yellow) are perceived antagonistically. That is, there is no such thing as a reddish green or a bluish yellow.

Cone-opponency: Receptive field property of neurons in the early visual pathway, by which chromatic stimuli are processed through the comparison of the relative L, M, and S cone responses.
Electrophysiological studies of macaque visual cortex have shed some light on this question, showing that the processing of individual hues previously associated with higher level mechanisms has its origins in the primary visual cortex (V1) (Garg et al., 2019; Gegenfurtner, 2003; Hanazawa et al., 2000; Li et al., 2022; Wachtler et al., 2003; Xiao, 2014; Xiao, Casti, Xiao, & Kaplan, 2007). These experiments have identified the emergence of neurons in V1 tuned to the gamut of hues in DKL space, as well as the role of processing nonlinearities in determining their tuning curves (De Valois et al., 2000; Hanazawa et al., 2000; Lennie et al., 1990; Wachtler et al., 2003). Puzzlingly, these cells mainly inhabit the so-called CO "blobs," patchy regions rich in cytochrome oxidase that display a sensitivity to stimuli modulating either of the cone-opponent axes rather than the full set of hues (Landisman & Ts'o, 2002b; Li et al., 2022; Livingstone & Hubel, 1984; Salzmann, Bartels, Logothetis, & Schüz, 2012). Some have speculated that this colocalization stems from a mixing of cell populations encoding the two cardinal pathways (Li et al., 2022; Xiao, 2014), while others indicate a distinct population of hue-sensitive neurons in the "interblob" regions, more conclusively associated with orientation tuning (Garg et al., 2019; Landisman & Ts'o, 2002a). As a whole, however, these studies point to the need for a population theory of chromatic processing remarkably early in the visual pathway.
In this article, we present a model of color processing in which intracortical neuronal dynamics within V1 serve as the substrate for hue perception. Drawing on the canonical Wilson-Cowan neural field equations and the ring model of orientation tuning, we show that this population approach allows us to account for cells responsive to the full range of DKL directions without the need to fine-tune input parameters (Ben-Yishai, Bar-Or, & Sompolinsky, 1995; Burstein, 2022; Hansel & Sompolinsky, 1998; Wilson & Cowan, 1972, 1973). The thresholding we employ bears in mind the input-response nonlinearities of previous combinatorial models, but zooms out of the single-cell, feedforward interpretation of input as the stimulus-driven LGN afferents to individual neurons. Rather, we model input as the total synaptic current into a population of cells, taking into account both the cone-opponent LGN afferents as well as the hue-dependent connectivity between distinct neuronal populations. The resulting demarcation between the cone-opponent and the hue-selective mechanisms in the same population of cells points to the importance of V1 in the transition from chromatic stimulus to color perception. To characterize this role, we study the effects of the model's connectivity parameters and processing nonlinearities on the narrowness and stability of the hue tuning curves. In the final part of the paper, we show that the model is able to explain color responses in the absence of LGN input, evoking color hallucinations via a Turing-like mechanism of spontaneous pattern formation in DKL space.

Cardinal pathways: In DKL space, the two orthogonal axes representing stimuli isolating the L-M and S-(L+M) cone-opponent pathways.

Wilson-Cowan neural field equations: Coupled set of partial differential equations describing the network dynamics of excitatory and inhibitory neural populations.

MODEL
In light of the patchy distribution of color-sensitive cells reported in Landisman and Ts'o (2002b), Li et al. (2022), Livingstone and Hubel (1984), and Salzmann et al. (2012), we model the color map of V1 as a set of neuronal networks, each encoding the chromaticity of its corresponding region of the visual field. This organization brings to mind the hypercolumnar structure of orientation preference within V1 (Hubel & Wiesel, 1974), which, on the basis of its feature-based connectivity properties, allows for the representation of network activity as a function of a localized feature space. Here, we assume a mean hue-dependent activity a(θ, t), where θ represents a direction in the DKL stimulus space, distinguishing a strictly physiological conception of "hue" from the hues categorizing color perception, as explained above. In drawing this distinction, and in agreement with Wool et al. (2015) and Li et al. (2022), we give no special status to V1 cells tuned to the DKL directions associated with the unique hues of color-opponent theory, while simultaneously emphasizing the cone-opponent nature of feedforward afferents from the LGN.
The resulting activity a(θ, t) of a network of hue-preferring cells, expressed as a firing rate in units of spikes/second, is dominated by the membrane properties of its constituent cells, whose potential variations occur on the order of the membrane time constant τ₀, taken to be 10 msec (Ben-Yishai et al., 1995; Carandini & Ringach, 1997; Izhikevich, 2010). In the vein of previous neural mean field models of feature detection (Bressloff & Cowan, 2002, 2003b; Bressloff, Cowan, Golubitsky, Thomas, & Wiener, 2001; Dayan, Abbott, & Labahn, 2001; Ermentrout, 1998; Gutkin, Pinto, & Ermentrout, 2003), and in close analogy to the ring model of orientation tuning (Ben-Yishai et al., 1995; Hansel & Sompolinsky, 1998), we let a(θ, t) evolve according to the single-population firing-rate formulation of the Wilson-Cowan equations:

$$\tau_0 \frac{\partial a(\theta, t)}{\partial t} = -a(\theta, t) + g(h(\theta, t)), \qquad (1)$$

where h(θ, t), the synaptic input, takes into account both excitatory and inhibitory afferents into a population of cells preferring hue θ, and g(h) is an activation function, as described below.
To analyze the relationships between feedforward and recurrent processing and to distinguish between their respective effects on a(θ, t), we write h(θ, t) as a sum of the stimulus-driven synaptic input from the LGN and the intracortical input resulting from the hue-dependent network connectivity within V1:

$$h(\theta, t) = h_{lgn}(\theta) + h_{ctx}(\theta, t). \qquad (2)$$

We express the input as the average effect of the net synaptic current on the membrane potential of a cell, following the conventions of Ermentrout (1998) and Carandini and Ringach (1997). Thus, h(θ, t) has units of mV and can take on both positive and negative values, chosen here so that a(θ, t) typically ranges from 0 to 60 spikes/sec, consistent with electrophysiological experiments penetrating individual color-responsive cells (Conway, 2001; Johnson et al., 2001; Landisman & Ts'o, 2002a; Wachtler et al., 2003).
The input is converted to a firing rate according to the nonlinear activation function

$$g(h) = \beta\,(h - T)\,H(h - T), \qquad (3)$$

where H(x) is the Heaviside step function defined as H(x) = 1 for x > 0 and zero for x ≤ 0. Note that in the context of machine learning, this form of activation is also known as the rectified linear unit function, or ReLU for short. By constraining the network activity to levels below 60 spikes/sec, we ignore the effects of neuronal saturation commonly implemented in models of g(h) (Ben-Yishai et al., 1995; Ermentrout, 1998). Here, T is the threshold potential of a neuron, below which the synaptic input has no effect on the mean firing rate of the network. Interestingly, as a processing feature, this thresholding nonlinearity has been speculated to account for the chromatic responses of individual neurons in V1 (Hanazawa et al., 2000). The amplification of these responses, and thus the mean network response, is modulated by β, the neural gain measured in spikes · sec⁻¹/mV. We assume that β is determined by far-ranging internal and external influences, from attentional mechanisms to hallucinogenic input (Ferguson & Cardin, 2020; Michaiel, Parker, & Niell, 2019).

Spontaneous pattern formation: A system's ability to self-generate new symmetries in the absence of external input.

Activation function: A function mapping the afferent input into a population of neurons (expressed as a current or membrane potential) to the population's firing rate or probability of firing.
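As a concrete sketch, the activation of Equation 3 amounts to a scaled rectified-linear unit; the parameter values below are illustrative, not fitted:

```python
import numpy as np

def g(h, beta=1.0, T=0.0):
    """Activation of Equation 3: g(h) = beta * (h - T) * H(h - T).

    h may be a scalar or an array of synaptic inputs (mV); the result
    is a firing rate in spikes/sec. beta is the neural gain and T the
    threshold potential.
    """
    return beta * np.maximum(h - T, 0.0)
```

For inputs below T the output vanishes, and above T it grows linearly with slope β, exactly the behavior described in the text.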

Feedforward Input
To parameterize the input, prior work has relied on the direct relationship between cortical feature preferences and properties of the visual stimulus (Ben-Yishai et al., 1995; Bressloff & Cowan, 2003b). Cells in the cortex labeled, for instance, by their spatial frequency preferences can be mapped directly onto a visual space parameterized by the same variable. Thus, the activity of each neuronal population is no longer labeled purely by its position on the cortical sheet, but also by its preferred stimulus in an analogous feature space.
The corresponding network topology may be modeled on the cortical histology, such as the orientation map of Bosking, Zhang, Schofield, and Fitzpatrick (1997) or the spatial frequency maps addressed in Bressloff and Cowan (2002), Bressloff and Cowan (2003a), and Bressloff and Cowan (2003b). Conversely, it may be based entirely on functional considerations, as, for instance, in the orientation tuning model of Sompolinsky et al. (Ben-Yishai et al., 1995; Hansel & Sompolinsky, 1998), also known as the "ring model," which posits a topology based on the experimentally motivated assumption that populations with similar orientation preferences are maximally connected (Ben-Yishai et al., 1995) and on the argument that the important features of such a connectivity are captured by its first-order Fourier components (Hansel & Sompolinsky, 1998).
Our model deviates in this regard by emphasizing that the stimulus's chromatic information is first discretized along the two cone-opponent pathways. We incorporate this aspect of early processing by projecting the stimulus's DKL space position θ_lgn onto the two cardinal axes:

$$l = \cos\theta_{lgn}, \qquad s = \sin\theta_{lgn}. \qquad (4)$$

The magnitudes of l and s are thus taken to represent the normalized strengths of the L-M and S-(L+M) cone-opponent signals, respectively. The feedforward input is then given by

$$h_{lgn}(\theta) = c\,(l\cos\theta + s\sin\theta), \qquad (5)$$

where c is the signal strength, or contrast, expressed as the mean coarse-grained postsynaptic membrane potential (in units of mV) of the target hue population generated by the presynaptic LGN neurons (Carandini & Ferster, 2000). Formulated in this way, the input captures the colocalization of cone-opponency and hue selectivity in the activity of V1 cells as observed in Li et al. (2022) and Xiao et al. (2007). The hue tuning networks, parameterized by θ, are not only responsive to the individual cone-opponent stimulus signals, l and s, but also implement the combinatorial mechanisms by which they are first mixed (De Valois et al., 2000). Substituting the expressions for l and s into Equation 5, we obtain

$$h_{lgn}(\theta) = c\cos(\theta - \theta_{lgn}). \qquad (6)$$

With this form, we point out the similarity of our combinatorial scheme to that of Mehrani et al. (2020), in which the input from cone-opponent V2 cells into hue tuning V4 cells is weighted as a function of the difference in their preferred hue angles. Most evidently, we differ from this model by first combining the cone-opponent signals in V1 rather than V4, in accordance with the above-mentioned studies. But beyond pointing to V1 as the origin of mixing, these experiments indicate that the combinatorial feedforward scheme is not sufficient to account for the variability of neuronal hue preferences. Li et al. (2022) showed, for instance, that the contribution of signals isolating the S-(L+M) pathway is too small to explain the shifting of hue preferences away from the L-M axis by purely combinatorial means.
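The projection scheme just described can be sketched directly; here θ_lgn denotes the stimulus angle in DKL space, and the equivalence of the projected and combined forms serves as a numerical check:

```python
import numpy as np

def h_lgn(theta, theta_lgn, c):
    """Feedforward input: the stimulus at angle theta_lgn is first
    discretized into the cone-opponent signals l and s, then recombined
    by the population preferring hue theta."""
    l, s = np.cos(theta_lgn), np.sin(theta_lgn)   # projection onto the cardinal axes
    return c * (l * np.cos(theta) + s * np.sin(theta))
```

By the angle-difference identity this equals c · cos(θ − θ_lgn), so the feedforward drive is maximal for the population whose preferred hue matches the stimulus.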
As put forward by Shapley and Hawken (2011), Wachtler et al. (2003), and Lehky and Sejnowski (1999), a more complete understanding of neuronal hue encoding within V1 requires us to consider the nonlinear population dynamics therein.

Recurrent Interactions
We begin by characterizing the connectivity of the target hue tuning populations with a translation-invariant cortical connectivity function w(|x − x₀|), such that the interactions between neurons in a single CO blob (length scale ∼0.5 mm) depend only on the cortical distance between them (Bullmore & Sporns, 2012; Salzmann et al., 2012). The network's connectivity comprises the interactions of both its excitatory and inhibitory populations,

$$w(|x - x_0|) = w_{exc}(|x - x_0|) + w_{inh}(|x - x_0|), \qquad (7)$$

each of which we model as a sum of an isotropic and a distance-dependent term:

$$w_{exc}(|x - x_0|) = E_0 + E_1\cos(|x - x_0|), \qquad w_{inh}(|x - x_0|) = -I_0 - I_1\cos(|x - x_0|). \qquad (8)$$

We set E₀ ≥ E₁ > 0 and I₀ ≥ I₁ > 0 so that w_exc and w_inh are purely excitatory and inhibitory, respectively, in accordance with Dale's law (Ben-Yishai et al., 1995; Dayan et al., 2001).
Next, we map the weighting function onto hue space, drawing from the hue tuning microarchitecture revealed by the imaging studies of Liu et al. (2020) and Xiao et al. (2007). These studies point to a linear relationship between distance and hue angle difference, which minimizes the wiring length of cells tuned to similar hues (Liu et al., 2020). The hue-preferring cells inhabit the so-called "color regions," defined as such for their activation by red-green grating stimuli (Liu et al., 2020). These regions predominantly overlap with the V1 CO blobs (Landisman & Ts'o, 2002b; Li et al., 2022) and are responsive to the full range of hues, much like the patchy distribution of orientation maps within the V1 hypercolumns. Thus, in a similar manner to the local feature processing models of Bressloff and Cowan (2003b) and Ben-Yishai et al. (1995), we model the CO blob as a single color-processing unit consisting of N neurons labeled by the continuous hue preference variable θ ∈ [−π, π] (Bressloff & Cowan, 2003b). Figure 2 shows the distribution of hue-responsive neurons within a typical color region (Figure 2A) as well as a more coarse-grained demarcation of peak activity within several of these regions (Figure 2B). To describe the spatial organization of their hue preference data, Xiao et al. (2007) and Liu et al. (2020) applied a linear fit to the cortical distance between two cell populations as a function of the difference Δθ ≡ |θ − θ₀| in their preferred hue stimuli in DKL space. Note that this implies a discontinuity between θ = 0 and θ = 2π, allowing for the 2π periodicity of the hue preference label. Liu et al. (2020) report that the linear fit was able to capture the micro-organization of 42% of their tested hue maps, and a regression performed by Xiao et al. (2007) on an individual hue map gave a squared correlation coefficient of R² = 0.91.
In agreement with these findings, we let |x − x₀| = |θ − θ₀|, absorbing the regression parameters into the connectivity strength values E₀, E₁, I₀, and I₁ in Equation 8. Substituting this change of variables and setting J₀ ≡ E₀ − I₀ and J₁ ≡ E₁ − I₁, we obtain

$$w(\theta - \theta_0) = J_0 + J_1\cos(\theta - \theta_0). \qquad (9)$$

As detailed in Figure 3, for J₁ > 0, this functional form captures the local excitation and lateral inhibition connectivity ansatz typically assumed in neural field models as an analogy to diffusion-driven pattern formation (Amari, 1977; Bressloff, 2003; Hoyle, 2006; Kim, Rouault, Druckmann, & Jayaraman, 2017; Turing, 1952). Notably, neurons in close proximity in both cortical and hue space maximally excite each other, and those separated by Δθ = π maximally inhibit each other, evoking the hue-opponency of perception on a cellular level. We emphasize, however, that this choice of metric is guided by our physiological definition of hue and does not associate a perceived color difference to measurements in hue space.
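A minimal sketch of the cosine connectivity of Equation 9, with illustrative (not fitted) values of J₀ and J₁ chosen so that nearby hues excite and opposite hues inhibit one another:

```python
import numpy as np

def w(dtheta, J0=-0.1, J1=0.3):
    """Connectivity of Equation 9: isotropic term J0 plus anisotropic
    modulation J1 * cos(dtheta). J0 and J1 here are illustrative values,
    not the paper's fitted parameters."""
    return J0 + J1 * np.cos(dtheta)
```

With J₁ > 0 the function is maximal at Δθ = 0 and minimal at Δθ = π, the local-excitation/lateral-inhibition profile described in the text.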
Here, it is also important to distinguish between the connectivity function and the center-surround receptive fields of single- and double-opponent color cells (Shapley & Hawken, 2011). While the structures of both can be approximated by the same functional form, the resemblance is superficial: the former characterizes the interactions between different neuronal populations, while the latter is a property of single cells, often adapted for computer vision algorithms (Somers, Nelson, & Sur, 1995; Turner, Schwartz, & Rieke, 2018).
Finally, we weigh the influence of the presynaptic cells by convolving the connectivity function with the network activity, arriving at the cortical input to the target hue population at time t:

$$h_{ctx}(\theta, t) = \int_{-\pi}^{\pi} w(\theta - \theta_0)\, a(\theta_0, t)\, d\theta_0. \qquad (10)$$

The recurrent input is thus a continuous function of θ, derived from the population-level interactions. As put forward by the above-mentioned imaging studies, these interactions are colocalized with the cone-opponent feedforward input, h_lgn, within the same CO blob regions of V1. Collectively, our formulation of h(θ, t) implements the mixing rules posited by these experiments, without requiring us to arbitrarily fine-tune the relative weights of the afferent signals.
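Numerically, the convolution of Equation 10 over a discretized hue domain reduces to a weighted matrix product; a sketch using simple Riemann quadrature and the illustrative J values from above:

```python
import numpy as np

def h_ctx(a, theta, J0=-0.1, J1=0.3):
    """Recurrent input of Equation 10, approximated on a uniform hue grid.

    a: activity sampled at the angles in theta. Returns the convolution
    of the connectivity w(theta - theta0) with a, via a Riemann sum.
    J0, J1 are illustrative values.
    """
    dtheta = theta[1] - theta[0]                  # uniform grid spacing
    W = J0 + J1 * np.cos(theta[:, None] - theta[None, :])
    return (W * a[None, :]).sum(axis=1) * dtheta
```

As a sanity check, a uniform activity profile yields the constant 2πJ₀ · a, and a pure cosine profile is scaled by πJ₁, anticipating the eigenvalues derived in the next section.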

Evolution of Network Activity
We start by observing that, by virtue of the invariance of w(θ − θ₀) under translations of θ, the convolution operator is diagonalized by the complex Fourier modes

$$\hat{e}_\mu(\theta) = \frac{1}{\sqrt{2\pi}}\, e^{i\mu\theta}, \qquad (11)$$

with μ ∈ ℤ and ê_μ normalized on [−π, π]:

$$\int_{-\pi}^{\pi} w(\theta - \theta_0)\, \hat{e}_\mu(\theta_0)\, d\theta_0 = \lambda_\mu\, \hat{e}_\mu(\theta). \qquad (12)$$

To calculate the eigenvalues λ_μ of the corresponding linear transformations, we make the change of variables θ − θ₀ = ϕ, so that the left-hand side of Equation 12 can be rewritten as

$$\int_{-\pi}^{\pi} w(\phi)\, \hat{e}_\mu(\theta - \phi)\, d\phi = \hat{e}_\mu(\theta) \int_{-\pi}^{\pi} w(\phi)\, e^{-i\mu\phi}\, d\phi. \qquad (13)$$

The eigenvalues are thus:

$$\lambda_\mu = \int_{-\pi}^{\pi} w(\phi)\, e^{-i\mu\phi}\, d\phi. \qquad (14)$$

Next, we assume a(θ, t) is separable in t and θ and bounded on [−π, π] so that we may expand it in the eigenbasis of the convolution operator as:

$$a(\theta, t) = \sum_{\mu=-\infty}^{\infty} c_\mu(t)\, \hat{e}_\mu(\theta). \qquad (15)$$

Substituting the expansion into Equation 10, we have for Equation 2

$$h(\theta, t) = h_{lgn}(\theta) + \sum_{\mu} c_\mu(t) \int_{-\pi}^{\pi} w(\theta - \theta_0)\, \hat{e}_\mu(\theta_0)\, d\theta_0, \qquad (16)$$

where w(θ − θ₀) is our choice for the connectivity function (Equation 9) and h_lgn(θ) is defined as in Equation 6. Evaluating the integrals, we obtain

$$h(\theta, t) = h_{lgn}(\theta) + \lambda_0\, c_0(t)\, \hat{e}_0(\theta) + \lambda_1\, c_1(t)\, \hat{e}_1(\theta) + \lambda_{-1}\, c_{-1}(t)\, \hat{e}_{-1}(\theta), \qquad (17)$$

with λ₀ = 2πJ₀ and λ₁ = λ₋₁ = πJ₁. Note here that only the zeroth and first-order complex Fourier components remain.
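The eigenvalues λ₀ = 2πJ₀ and λ±1 = πJ₁, and the vanishing of all higher modes, can be checked against Equation 14 by discrete quadrature (J values illustrative):

```python
import numpy as np

J0, J1 = -0.1, 0.3
phi = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
dphi = phi[1] - phi[0]
w = J0 + J1 * np.cos(phi)   # connectivity of Equation 9

def eigval(mu):
    """lambda_mu of Equation 14, by quadrature on a uniform grid."""
    return np.sum(w * np.exp(-1j * mu * phi)) * dphi
```

Because w contains only the zeroth and first Fourier harmonics, every λ_μ with |μ| ≥ 2 vanishes, which is why only three modes survive in the input expansion.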
Substituting the expansion Equation 15 and the explicit form of the activation function Equation 3 into Equation 1 yields:

$$\tau_0 \sum_{\mu} \dot{c}_\mu(t)\, \hat{e}_\mu(\theta) = -\sum_{\mu} c_\mu(t)\, \hat{e}_\mu(\theta) + \beta\, [h(\theta, t) - T]\, H(h(\theta, t) - T). \qquad (18)$$

In the absence of the nonlinearity, each of the eigenmodes ê_μ(θ) would evolve independently of the others, and a complete analysis of the time-dependent system would seek to solve a set of equations for c_μ(t) (see Methods: Linear Solution). However, in our setup, the thresholding introduces a coupling of these coefficients, as the critical hue angles δ₁ and δ₂, at which the input is cut off, are determined by the combined c_μ(t) at each point in time.
While an analytical solution to this system is in most cases intractable, it is nonetheless informative to break down the rate equation into a coupled system of equations for the evolution of the coefficients c_μ(t). Taking the inner product of Equation 18 with ê_ν and using ⟨ê_ν|ê_μ⟩ = δ_μν, we obtain:

$$\tau_0\, \dot{c}_\nu(t) = -c_\nu(t) + \beta \left\langle \hat{e}_\nu \,\middle|\, [h(\theta, t) - T]\, H(h(\theta, t) - T) \right\rangle, \qquad (19)$$

where the Heaviside restricts the domain of the inner product to [δ₁(t), δ₂(t)]. The time dependence of the cutoff angles reflects the evolution of the input profile, which requires that the thresholding be carried out continuously throughout the duration of the dynamics.
To determine δ₁ and δ₂, we reformulate the Heaviside as a function of θ. Given that the input h(θ, t) is a real-valued function, c₀ ∈ ℝ and c₁ = c̄₋₁. For mathematical convenience, we then rewrite Equation 17 in terms of c₀, Re(c₋₁) ≡ cᴿ₋₁, and Im(c₋₁) ≡ cᴵ₋₁ as

$$h(\theta, t) = q_0(t) + \left[c\cos\theta_{lgn} + \frac{2\lambda_1}{\sqrt{2\pi}}\, c^R_{-1}(t)\right]\cos\theta + \left[c\sin\theta_{lgn} + \frac{2\lambda_1}{\sqrt{2\pi}}\, c^I_{-1}(t)\right]\sin\theta, \qquad (20)$$

with q₀(t) ≡ λ₀c₀(t)/√(2π). Setting

$$c_h\cos\gamma = c\cos\theta_{lgn} + \frac{2\lambda_1}{\sqrt{2\pi}}\, c^R_{-1}, \qquad -c_h\sin\gamma = c\sin\theta_{lgn} + \frac{2\lambda_1}{\sqrt{2\pi}}\, c^I_{-1}, \qquad (21)$$

we arrive at the compact form

$$h(\theta, t) = q_0(t) + c_h(t)\cos(\theta + \gamma(t)). \qquad (22)$$

The Heaviside can then be expressed as

$$H(h(\theta, t) - T) = H(\cos(\theta + \gamma) - \alpha), \qquad (23)$$

where α ≡ (T − q₀)/c_h, and the time arguments are suppressed for simplicity. In this formulation, the Heaviside sets the limits of integration in Equation 19 as the angles θ = δ₁, δ₂ where α intersects with cos(θ + γ), as shown in Figure 4.

Figure 4. δ₁ and δ₂ are the angles corresponding to the intersection of α (in gray) and cos(θ + γ) (in black). Here, c = 1, β = 1, T = −1, and θ_lgn = π/8. J₀ and J₁ are as in Figure 3.
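Given the compact cosine form of the input, the cutoff angles follow in closed form from α and γ; a sketch, where the handling of the edge cases is our reading of the geometry in Figure 4:

```python
import numpy as np

def cutoff_angles(q0, ch, gamma, T):
    """Angles delta_1, delta_2 where cos(theta + gamma) crosses
    alpha = (T - q0)/ch, i.e., the limits of the suprathreshold region.
    Assumes ch > 0."""
    alpha = (T - q0) / ch
    if alpha >= 1.0:         # input everywhere below threshold: no activity
        return None
    if alpha <= -1.0:        # input everywhere above threshold
        return -np.pi, np.pi
    half = np.arccos(alpha)  # half-width of the suprathreshold arc
    return -gamma - half, -gamma + half
```

The suprathreshold arc is centered on θ = −γ, which is also where the input, and hence the activity profile, peaks.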
With this reformulation, the system of equations for the evolution of the coupled c_ν (Equation 19) takes the more explicit form:

$$\tau_0\, \dot{c}_\nu(t) = -c_\nu(t) + \frac{\beta}{\sqrt{2\pi}} \int_{\delta_1(t)}^{\delta_2(t)} e^{-i\nu\phi}\, [h(\phi, t) - T]\, d\phi. \qquad (24)$$

Note that, for all c_ν, the integrand of Equation 24 is a function of q₀(t), c_h(t), and γ(t) and therefore, implicitly, only of the coefficients c₀(t), c₋₁(t), and c₁(t). Thus, the dynamics are determined in full by the evolution of c_{|ν|≤1}(t):

$$\tau_0\, \dot{c}_\nu(t) = -c_\nu(t) + \frac{\beta}{\sqrt{2\pi}} \int_{\delta_1(t)}^{\delta_2(t)} e^{-i\nu\phi}\, [h(\phi, t) - T]\, d\phi, \qquad |\nu| \le 1, \qquad (25)$$

with h(ϕ, t) as in Equation 22. Separating Equation 25 into its real and imaginary parts, and noting that a real-valued activity profile a(θ, t) requires c₀ ∈ ℝ and c₁ = c̄₋₁, reduces the system to a set of equations for c₀(t), cᴿ₋₁(t), and cᴵ₋₁(t).
Evaluating the integrals, we obtain:

$$\tau_0\, \dot{c}_0 = -c_0 + \frac{\beta}{\sqrt{2\pi}} \Big[ (q_0 - T)(\delta_2 - \delta_1) + c_h\big(\sin(\delta_2 + \gamma) - \sin(\delta_1 + \gamma)\big) \Big],$$

$$\tau_0\, \dot{c}^R_{-1} = -c^R_{-1} + \frac{\beta}{\sqrt{2\pi}} \Big[ (q_0 - T)(\sin\delta_2 - \sin\delta_1) + \frac{c_h}{2}\Big( (\delta_2 - \delta_1)\cos\gamma + \tfrac{1}{2}\big(\sin(2\delta_2 + \gamma) - \sin(2\delta_1 + \gamma)\big) \Big) \Big],$$

$$\tau_0\, \dot{c}^I_{-1} = -c^I_{-1} + \frac{\beta}{\sqrt{2\pi}} \Big[ (q_0 - T)(\cos\delta_1 - \cos\delta_2) + \frac{c_h}{2}\Big( \tfrac{1}{2}\big(\cos(2\delta_1 + \gamma) - \cos(2\delta_2 + \gamma)\big) - (\delta_2 - \delta_1)\sin\gamma \Big) \Big], \qquad (26)$$

where the time arguments of q₀, c_h, γ, δ₁, and δ₂ are suppressed for clarity.
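Since evaluated integrals of this kind are easy to get wrong, the bracketed terms can be checked against direct quadrature of the integrand in Equation 25; the helper below encodes our transcription of the closed forms:

```python
import numpy as np

def closed_form(q0, ch, gamma, T, d1, d2):
    """Bracketed integrals of the coefficient system, for (c_0, Re c_-1,
    Im c_-1); our transcription of the evaluated antiderivatives."""
    A = q0 - T
    I0 = A * (d2 - d1) + ch * (np.sin(d2 + gamma) - np.sin(d1 + gamma))
    IR = A * (np.sin(d2) - np.sin(d1)) + 0.5 * ch * (
        (d2 - d1) * np.cos(gamma)
        + 0.5 * (np.sin(2 * d2 + gamma) - np.sin(2 * d1 + gamma)))
    II = A * (np.cos(d1) - np.cos(d2)) + 0.5 * ch * (
        0.5 * (np.cos(2 * d1 + gamma) - np.cos(2 * d2 + gamma))
        - (d2 - d1) * np.sin(gamma))
    return I0, IR, II
```

Comparing each closed form against a fine trapezoidal quadrature of the same integrand confirms the antiderivatives term by term.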
Written in this form, the system provides a representation of the time evolution of a(θ, t) in terms of the coupled evolution of the coefficients c_{|ν|≤1}. It is important to note that these equations are nonlinear due to the implicit Heaviside in our determination of δ₁(t) and δ₂(t). While our reformulation of the right-hand side of Equation 24 allows for the explicit representation of the coupling of the c_ν via the nonlinearity, it is also this coupling that renders an analytical solution for the trajectories intractable. Thus, to describe the behavior of the time-dependent solution, we turn next to a numerical analysis of the system's phase portrait, that is, to an exploration of the features and stability of the system's emergent steady states.

Steady-State Solution
We approach the solution to Equation 1 with a Forward Euler method, propagating the activity from a random array of spontaneous initial values between 0 and 0.2 spikes/sec to its steady-state value. Within each time step (typically chosen to be 1 msec), we coarse-grain the network into n = 501 populations with hue preferences separated evenly across the DKL angle domain [−π, π]. The choice of an odd n allows us to numerically integrate Equation 10 using the Composite Simpson's Rule, whereupon we rectify {h(θ, t) − T} and evaluate the right-hand side of Equation 1. Below, we use the term tuning curve only in reference to the emergent steady-state activity profiles. Figure 5 shows an example of a hue tuning curve obtained with this method. Note that the peak of the tuning curve is located at the LGN hue input angle θ_lgn, which is equivalent to the steady-state value of −γ in Equation 26 (see Methods: Evolution of Peak Angle). Furthermore, the steady-state solution requires da_∞(θ)/dt = 0, so that Equation 1 becomes

$$a_\infty(\theta) = g\big(h_{lgn}(\theta) + h_{ctx,\infty}(\theta)\big). \qquad (27)$$

Thus, the shape of the activity profile at the steady state is equivalent to the net cortical input, cut off by g at δ₁ ≡ θ⋆_c1 and δ₂ ≡ θ⋆_c2. Here, θ⋆_c1 and θ⋆_c2 are the critical cutoff angles for the steady-state activity profile, beyond which a_∞(θ) would take on negative values.
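The numerical scheme just described can be sketched end to end; all parameter values below are illustrative stand-ins, not the paper's fitted values:

```python
import numpy as np

def simulate(theta_lgn=0.0, c=1.0, beta=1.0, T=-1.0, J0=-0.1, J1=0.3,
             tau0=0.01, dt=0.001, steps=2000, n=501, seed=0):
    """Forward Euler integration of Equation 1 on n hue populations.

    The recurrent input (Equation 10) is integrated with composite
    Simpson weights (odd n gives an even number of subintervals), the
    net input is rectified at T, and the activity is stepped by dt/tau0.
    Parameter values are illustrative.
    """
    theta = np.linspace(-np.pi, np.pi, n)
    step = theta[1] - theta[0]
    wts = np.ones(n)                       # composite Simpson weights
    wts[1:-1:2], wts[2:-1:2] = 4.0, 2.0
    wts *= step / 3.0
    W = J0 + J1 * np.cos(theta[:, None] - theta[None, :])
    h_lgn = c * np.cos(theta - theta_lgn)  # feedforward input, Equation 6
    a = np.random.default_rng(seed).uniform(0.0, 0.2, n)
    for _ in range(steps):
        h = h_lgn + W @ (wts * a)          # net input, Equation 2
        a += (dt / tau0) * (-a + beta * np.maximum(h - T, 0.0))
    return theta, a
```

With these values the activity settles into a single bump whose peak sits at θ_lgn, consistent with the steady-state profiles of Figure 5.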
We emphasize that the values of the cortical parameters J 0 , J 1 , c, T, and β are bounded by the physiological properties of V1. Varying these parameters in the subsequent analysis is therefore an investigation of their relative effects on hue processing, and we are not fine-tuning their weights to obtain specific hue tuning curves.
Here, we explore a range of values for the cortical and stimulus parameters under the constraint that the network activity remains between 0 and 60 spikes/sec, as motivated above. We further restrict J₁ > 0 and J₀ < 0 to elicit the local excitation and global inhibition connectivity ansatz of previous neural field models. Our main aim is to graphically characterize the relative effects of the parameters on the width, Δ_c = θ⋆_c2 − θ⋆_c1, and peak height, a_∞(θ_lgn), of the network tuning curves. Together, these two properties reflect the network selectivity and emergent signal strength, respectively. Note that these effects are robust to small additive white noise and may also be gleaned from the net input, expressed as in Equation 20 and evaluated at the steady-state values of the coefficients.

It is also important to note here the difference between a network tuning curve and a single-neuron tuning curve. The former is a coarse-grained representation of the CO blob response, with the horizontal axis representing the gamut of hue preferences within a single network. A relatively large tuning width would therefore indicate considerable responses from a wide range of hue tuning cells and poor network selectivity. The single-neuron tuning curve, on the other hand, is an electrophysiological recording of an individual cell's response to a set of hue stimuli, with the horizontal axis representing the range of stimulus hue angles used in the experiment. The peak location of the single-neuron tuning curve would therefore indicate the hue preference of the individual neuron, while the width would represent its selectivity for that specific hue. Thus, though the two types of tuning curves are labeled and shaped similarly, the latter is only useful to characterize our network's constituent neurons and not the emergent properties of the population as a whole (Bressloff & Cowan, 2003b).
Roles of the stimulus strength and cortical threshold. We begin by considering the role of the stimulus signal strength c on the hue tuning width and peak height. Figure 6 shows typical tuning curves for two values of J₁. We find that the stimulus strength has a quickly saturating effect on Δ_c for all J₁ > 0, which is more pronounced at lower values of c as J₁ → 0. Above saturation, the main contribution of the chromatic signal is to increase the network response, that is, to increase a_∞(θ_lgn).
We also note that the trend reverses about T = 0: increasing c has no effect on the tuning width at T = 0 and has a widening effect for T > 0. Figure 7 illustrates this reversal with four tuning curves of matched parameters and varying values of T. The coupling of c and T must be considered because some neural field models (see Amari, 1977; Carandini & Ringach, 1997; Dayan et al., 2001) take T = 0 for mathematical simplicity. Indeed, we might expect that there is no more physiological significance to choosing a threshold potential of T = 0 mV than any other value, beyond its magnitude relative to h(θ, t). However, the independence of c and Δ_c at T = 0 and the significance of the relative signs of c and T elsewhere suggest quite the opposite. The effect of the chromatic input on tuning the network hue selectivity weakens not only once the anisotropic strength parameter, J₁, is large enough to predominate, but also as T → 0.
The coupling of c and T is equally significant to the effects of T on the tuning curve properties. Figure 8 shows that below a certain value, T primarily modulates a_∞(θ_lgn). However, for comparable magnitudes of the stimulus strength and threshold, |c| ∼ |T|, we see a transition in which T also begins to sharpen the tuning curve and continues to do so until the threshold surpasses h(θ, t) for all θ (i.e., for δ⋆₁ = δ⋆₂ = 0). Accordingly, for higher stimulus strengths, the thresholding nonlinearity plays a greater role in modulating the network selectivity at lower T values and over a wider range of them.
Roles of the cortical weights. The anisotropic connectivity strength J_1 exhibits relationships to the tuning curve properties similar to those of c. That is, for T < 0, a_∞(θ) grows and Δc narrows with increasing J_1 (see Figure 9A). The trend with respect to Δc reverses for T > 0 (Figure 9B), whereas the trend with respect to a_∞(θ) remains unaffected.
These similarities are a mark of the competition between the external input and the cortical parameters in driving the network selectivity and reflect the fact that both parameters modulate the anisotropic terms of the model. This means that the role of J 1 in driving network selectivity becomes more significant with decreasing stimulus strength (see Figure 10). However, a large external input does not suppress the contribution of J 1 to the overall network activity. That is, increasing J 1 results in raising a ∞ (θ ), regardless of the strength of the stimulus. Similarly, a  relatively large value of J 1 does not restrict the growth of the network response with increasing stimulus strength. Thus, the anisotropic tuning introduced by the external input and the recurrent interactions act cooperatively to raise the network's response to the stimulus hue, and competitively to tune its selectivity.
In contrast, J_0 acts cooperatively with the external stimulus to sharpen the curves. As shown in Figure 11, the tuning curves narrow with decreasing values of J_0, that is, with an increase in the relative strength of global inhibition to global excitation, a trend which is conserved across stimulus strengths. Furthermore, there is no trend reversal at T = 0. Rather, for much of the parameter space, J_0 acts with the thresholding to sharpen the tuning curves, as illustrated in Figure 12. This could be expected from the fact that at each point throughout the dynamics, both T and J_0 act isotropically on all hue preferences, lowering or raising the input for all contributing neurons. However, this commonality also means that for |T| ≫ |c| (where the effect of T on Δc saturates, as explained above), the thresholding suppresses the role of J_0, analogous to the competition between c and J_1. Finally, Figures 11 and 12 also show that increasing the global inhibition reduces the value of a_∞(θ) for all c and T.

We thus conclude that the emergent hue tuning curves in V1 are both inherited from the LGN and built on the recurrent interactions. The competition between J_1 and c points to a continuum of regimes in which either h_lgn or h_ctx dominates. However, in all regimes, J_0 works cooperatively with c to narrow the curves, and all the parameters work together to raise the network response. Likewise, the competition between J_0 and T (both cortical parameters) is modulated by the value of c, and the location of the peak is always completely determined by the LGN signal, regardless of the relative magnitudes of the cortical and stimulus strength parameters (see Methods: Evolution of Peak Angle).
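The sharpening role of the uniform weight J_0 can be illustrated with the same assumed threshold-linear ring form used throughout this section (again a sketch with illustrative parameter values, not the paper's exact configuration):

```python
import numpy as np

def steady_state(J0, J1=0.1, c=1.0, T=0.0, beta=1.0, N=128, dt=0.1, steps=4000):
    """Assumed threshold-linear ring model, integrated to (near) steady state."""
    theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
    a = np.zeros(N)
    K = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) * (2 * np.pi / N)
    for _ in range(steps):
        h = c * np.cos(theta) + K @ a
        a += dt * (-a + beta * np.maximum(h - T, 0.0))
    return theta, a

def half_width(theta, a):
    return (a > 0.5 * a.max()).sum() * (2 * np.pi / len(theta))

theta, a_excit = steady_state(J0=+0.10)  # weaker relative inhibition
_, a_inhib = steady_state(J0=-0.10)      # stronger relative inhibition
```

Lowering J_0 (stronger global inhibition) both narrows the curve and reduces the peak response, consistent with Figures 11 and 12.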
Comparisons with the orientation tuning ring model. Finally, we seek to compare the emergent properties of the hue tuning model with those of the orientation tuning ring model (Ben-Yishai et al., 1995; Hansel & Sompolinsky, 1998). This leads us to separate the analysis into two regions: one corresponding to the analytical regime with J_0 < 1/(2πβ) and J_1 < 1/(πβ), and the other to the extended regime with J_1 ≥ 1/(πβ) and J_0 constrained as described in the section Stability Analysis. As detailed in Methods: Linear Solution, the former regime defines the (J_0, J_1) parameter space wherein the model permits a closed-form stable solution for cases in which the input into all cells is above threshold. By contrast, the dynamics in the latter, extended regime always implement thresholding and thus do not permit the linear closed-form solution. For comparison purposes, note that these parameter regimes are analogous to the orientation model's homogeneous and marginal regimes, respectively, labels which refer to the system's responses to unoriented stimuli.
An important difference between our two models is our choice to assume modularity for the color vision pathway. As described above, there is no consensus as to when and how the various visual features are separated along the visual pathway. That is, we do not yet understand how the brain recognizes the extent to which an activated color- and orientation-preferring neuron is signaling for a stimulus's color or orientation. And moreover, we do not know at which point of the visual pathway the differentiation becomes important. We have therefore chosen to emphasize the unoriented color selective cells localized in the CO blob regions of V1, though the model is intended to describe the color-processing pathway broadly, for any color-preferring neurons regardless of other feature tuning capabilities. Thus, the choice of modularity is not to reject the possibility of joint feature processing, but rather to parse out the color mechanism for a separate analysis. Furthermore, it is in keeping with perceptual studies which indicate that the red-green and blue-yellow color-opponent systems are only responsive to color stimuli and not to broadband, white light (Stockman & Brainard, 2010). The difference between our two models thus comes to our choice to consider the purely chromatic component of the input afferent from the LGN, whereas the orientation model incorporates external inputs with varying degrees of anisotropy, set by an anisotropy parameter ε.
The differing assumptions underlying the formulation of h(θ, t) have important implications for the subsequent parameter analyses adopted by our two models. In the orientation tuning model, the authors detail the pronounced shift in the relative roles of the cortical and stimulus parameters in narrowing the tuning curve. In this setup, for ε → 0.5, an increase in c widens the tuning curve, whereas for ε → 0, the tuning curve selectivity is completely determined by the cortical parameters. The latter scenario constrains the value of the analogous anisotropic cortical parameter, J_2, to the marginal regime.
In contrast, our model does not apportion separate regions of the parameter space to external and recurrent mechanisms. Rather, in both the analytical and extended regimes, the roles of c and J 1 exist on a spectrum, where the effect of each parameter is suppressed by larger values of the other. Of course, this suppression is more stark in the extended regime because it covers larger values of J 1 . In this sense, our color model draws a similar conclusion to that of the orientation model: when the anisotropic tuning provided by the recurrent interactions is large, the tuning from the stimulus is negligible, and vice versa. However, we emphasize that the transition is not sharp and that c does have an effect on the tuning curve selectivity in the extended regime (see Figure 6B), as does J 1 in the analytical regime.
In this regard, the two models are more consistent in their interpretations of J_0's contribution to the selectivity of the tuning curves. That is, in the two regimes of each model, the inhibition acts cooperatively with the thresholding to sharpen the tuning curves. Here again, the orientation model makes a distinction between the marginal phase (i.e., ε = 0 and J_2 in the marginal regime), wherein the tuning curve width is completely determined by the cortical anisotropy, and all other cases, where the isotropic inhibition and stimulus come into play. For these cases, the authors argue, J_0 does not act alone to narrow the curve: though J_0 may sharpen the tuning curves, it is the anisotropy from the input or cortical interactions which acts as the source of the orientation selectivity. Although our color model's tuning mechanism, too, requires a source of anisotropy, we have emphasized above that there is no single source of hue selectivity. When J_1 is small, in either regime, both the stimulus and the uniform inhibition are significant to the hue tuning mechanism.
Ultimately, the orientation model sets up a dichotomy between two specific regions of parameter space. In the nonmarginal case, c is the primary player in the tuning mechanism, and in the marginal case, this role belongs to J 2 . The uniform inhibition is thus given a secondary "sharpening" role. By contrast, in choosing a fully anisotropic h lgn , our color model does not encompass an analogous marginal phase with an always dominating J 1 . Rather, even for large J 1 , the uniform inhibition is at least equally important to the modulation of the tuning width. In fact, as we have shown above, for larger values of c, J 0 is more effective than J 1 in narrowing the tuning curves, for both the analytical and extended regimes.
We thus stress that the two regimes, though analogous to those of the orientation model, do not constitute a division in the hue processing mechanism. Rather, we define the boundary between the analytical and extended regimes solely by whether or not the linear case exists. It is therefore determined by the values of J_0 and J_1 for which the linear solution applies, given that the values of c, T, and β keep the input above threshold throughout the dynamics (Methods: Linear Solution). We note that for each combination of J_0 and J_1 within the analytical regime there exists also a nonlinear case, in which h(θ, t) is cut off by the thresholding nonlinearity and, thereby, the linear solution does not apply. Our definition differs from that of the orientation model, which demarcates the boundary between the homogeneous and marginal phases based on the emergent steady-state tuning curves alone. For more on this approach, see the discussion of the broad and narrow profiles in Hansel and Sompolinsky (1998). As we will show next, the boundary is integral to the corresponding stability analysis of the steady-state tuning curves.

Stability Analysis
To analyze the stability of the emergent tuning curves, we turn once more to our separable activity ansatz assumed in the eigenfunction decomposition of Equation 15. This means that we are faced again with a nonlinearity-induced coupling of the time-dependent coefficients and, consequently, the analytical intractability of the associated stability analysis. We therefore set up the Jacobian matrix for a numerical analysis of the local stability.
We begin by adding a small perturbation of the eigenmode form δa(θ, t) = Σ_μ D_μ(t) e^{iμθ} and substituting the resulting activity into Equation 1. The eigenmodes then evolve according to Equation 30 for the coefficients D_μ (see Methods: Linear Stability Analysis), where δ⋆_1 and δ⋆_2 are the critical cutoff angles of the steady-state solution, obtained numerically. We observe that the integrand of Equation 30 is a function of D_0, D^R_{−1}, and D^I_{−1} alone, and, as such, the stability of the steady-state tuning curve is completely determined by the stability of these first-order coefficients.
Evaluating the integrals for ν = 0 and ν = −1, and making use of the relation from Equation 21, we obtain a closed system of equations for the evolution of the characteristic coefficients. The entries of the corresponding Jacobian matrix consist of the bracketed prefactors, and may equally be obtained from the general system of equations for the global network dynamics as J_{ij} = ∂f_i/∂x_j, where f_1, f_2, and f_3 are the right-hand sides of the equations in Equation 26, (x_1, x_2, x_3) = (c_0, c^R_{−1}, c^I_{−1}), and the first-order partial derivatives are evaluated at the steady-state values of c_0, c^R_{−1}, and c^I_{−1}. The stability of the tuning curve is then determined by the eigenvalues of J.
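In practice, the Jacobian entries need not be derived by hand; they can be approximated by finite differences of the reduced system's right-hand sides. The sketch below illustrates the generic procedure on a stand-in three-dimensional system; the function f is a hypothetical placeholder, not the model's actual Equation 26.

```python
import numpy as np

def numerical_jacobian(f, x_star, eps=1e-6):
    """Central-difference Jacobian of f at the steady state x_star."""
    n = len(x_star)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x_star + dx) - f(x_star - dx)) / (2 * eps)
    return J

# Stand-in dynamics with a steady state at the origin (illustrative only):
# the three components play the roles of c_0, c^R_{-1}, c^I_{-1}.
def f(x):
    c0, cR, cI = x
    return np.array([-c0 + 0.5 * np.tanh(cR),
                     -cR - 0.3 * c0,
                     -cI + 0.1 * np.tanh(c0)])

J = numerical_jacobian(f, np.zeros(3))
eigvals = np.linalg.eigvals(J)
stable = np.all(eigvals.real < 0)  # local stability criterion
```

The same two steps, differentiate numerically at the fixed point, then inspect the eigenvalue real parts, apply directly to the model's reduced coefficient system.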
We note that the existence of a steady state is a function of the cortical strengths J 0 and J 1 . By fixing the values of β, c, θ , and T, we are left with a two-parameter family of differential equations, allowing us to analyze this dependence numerically in the associated ( J 0 , J 1 ) parameter space.
Carrying out a parameter sweep across this space, we find that the system features a bifurcation curve, below which the model permits steady-state solutions and above which no equilibrium exists. To determine stability within the former region, we compute J at the emergent steady-state tuning curves of various points in the parameter space. Solving the associated characteristic equations, we observe that the eigenvalues are always real and negative, and thus conclude that all emergent steady-state tuning curves are stable. Figure 13 shows the bifurcation diagrams for two families of equations, distinguished by their values for T. Most notable is the extended regime, which permits stable steady-state solutions beyond the boundary set by the linear case (Methods: Linear Solution). As this parameter regime is not accessible to the linear solution, the tuning curves in this regime are necessarily a product of the thresholding nonlinearity and are thus always cut off below |θ| = π. The thresholding nonlinearity therefore not only expands the region of stability, but also ensures that the tuning curves emerging within the extended regime are selective for hue. As we have seen, this expansion is pivotal when the external input is weak and the anisotropic cortical strength plays the larger role in narrowing the tuning curves. Furthermore, regardless of input strength, it allows for a larger overall network response, as the peak activity, a ∞ (θ ), grows with increasing J 1 . Finally, as we will see in the following section, in the absence of any stimulus (i.e., for c = 0), the extended regime features the spontaneous generation of stable tuning curves and may thus serve as the bedrock for color hallucinations.
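The parameter sweep itself can be sketched as a brute-force classification: integrate the thresholded ring dynamics at each (J_0, J_1) grid point and flag whether the activity settles or grows without bound. The model form, parameter values, and divergence criterion below are illustrative assumptions; only two sample points are shown.

```python
import numpy as np

def bounded_steady_state(J0, J1, beta=1.0, c=1.0, T=0.0, N=96, dt=0.1, steps=3000):
    """Integrate the assumed thresholded ring model; True if activity stays bounded."""
    theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
    a = np.zeros(N)
    K = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) * (2 * np.pi / N)
    for _ in range(steps):
        h = c * np.cos(theta) + K @ a
        a += dt * (-a + beta * np.maximum(h - T, 0.0))
        if not np.isfinite(a).all() or a.max() > 1e6:
            return False  # runaway growth: above the bifurcation curve
    return True

# One point inside the analytical regime (J0 < 1/(2*pi*beta)), and one with
# the uniform excitation too strong for any equilibrium to exist.
inside = bounded_steady_state(J0=0.05, J1=0.10)
outside = bounded_steady_state(J0=0.30, J1=0.10)
```

Sweeping such a classifier over a grid of (J_0, J_1) values traces out the bifurcation curve numerically, as in Figure 13.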
However, looking back at Figure 13, perhaps most striking is the horizontal portion of the bifurcation curve at J_0 = 1/(2πβ) for J_1 < 1/(πβ), which sets the same stability conditions on J_0 and J_1 as in the linear case. This is despite the fact that many of the points in the analytical regimes of the two featured families correspond to solutions that implement thresholding, signifying that the analytical regime is not an exclusively linear one.
The key to understanding the shape of this region lies in noticing that the bifurcation diagram does not change for varying values of c, T, and θ, as shown in Figure 13 for the two values of T. The stability conditions on J_0 and J_1 are thus uniquely determined by β alone. Furthermore, for the general diagram (i.e., with β fixed and c, T unfixed), each point of the analytical regime permits linear solutions, in addition to the ones that implement thresholding. Accordingly, the uniqueness of the bifurcation diagram implies that at each point of the analytical (J_0, J_1) subspace, the stability of the latter, nonlinear solutions is equivalent to that of the linear solutions. This means that the boundary at J_0 = 1/(2πβ) set by the linear case (Methods: Linear Solution) applies to the full, nonlinear model as well.

Bifurcation curve: A curve in parameter space defining a transition in the system's dynamics and stability.

A Turing Mechanism for Color Hallucinations
Biological Turing patterns. Underpinning our hue tuning model is the mathematics of reaction-diffusion systems, for which, in particular, Alan Turing's treatment of biological pattern formation offers many valuable insights (Turing, 1952). The general Turing mechanism assumes a system of two interacting chemicals, whose local reaction and long-range diffusion properties govern the dynamics of their relative concentrations. In the original framework these chemicals are termed "morphogens" to evoke their form-producing capabilities within a developing embryo, whose anatomical structure emerges as a result of their underlying concentration dynamics. This, for instance, may be attributed to the morphogens' catalysis of organ formation in different parts of the developing organism.
Most analogous to our model is the formulation which distributes the morphogens across a continuous ring of tissue, parameterized by the cellular position θ. Assuming that the system never deviates far from the underlying homogeneous steady state, the two dynamical state equations for their concentrations, X and Y, take the linear form

∂X/∂t = aX + bY + D_X ∂²X/∂θ²,
∂Y/∂t = cX + dY + D_Y ∂²Y/∂θ²,  (34)

where a, b, c, and d represent the chemical reaction rates, and D_X and D_Y are the diffusion rates of X and Y, respectively. Here, we set a, c > 0, so that increasing the concentration of X activates the production of both X and Y, and b, d < 0 so that Y has an inhibitory effect on the production of both chemicals (Hoyle, 2006).

Reaction-diffusion system: Mathematical model of two or more interacting substances consisting of local dynamics (via the reaction terms) and spatial dynamics (via the diffusion terms).

Figure 13: The black dashed line is the bifurcation curve, above which the tuning curves grow without bound. The overlaid symbols correspond to points tested in a parameter sweep over the extended regime. Notably, the parameter sweep produces the same bifurcation curve for both values of T. Here, we must note that for critical values of T, for which the input is not large enough to generate activity, the model permits the trivial a(θ) = 0 steady-state solution in both the analytical and extended regimes. This solution, however, is unstable to perturbations large enough to make the input cross the threshold. For more on bifurcation theory in the context of neural fields, see Gross (2021) and Hesse and Gross (2014). See also Hansel and Sompolinsky (1998) for an analogous "phase diagram" analysis of the orientation ring model.
In the absence of diffusion (i.e., with D_X = D_Y = 0), the system has a homogeneous steady-state solution, (X, Y) = 0, whose stability is determined by a Jacobian composed of the reaction rates, (a b; c d), and hence by the system's local chemical properties alone (Hoyle, 2006). Note that at this point the system is circularly symmetric with respect to interchanging any two cells on the ring.
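For a 2×2 reaction Jacobian, this local stability reduces to a negative trace and positive determinant. A quick numerical check, with illustrative rate values chosen to satisfy a, c > 0 and b, d < 0:

```python
import numpy as np

# Illustrative reaction rates: X self-activates (a > 0) and activates Y (c > 0);
# Y inhibits both (b, d < 0). The values themselves are assumptions.
a, b, c, d = 1.0, -2.0, 3.0, -2.0
J = np.array([[a, b],
              [c, d]])

eigvals = np.linalg.eigvals(J)
# Without diffusion, (X, Y) = 0 is stable iff all eigenvalues have negative
# real part, equivalently trace(J) < 0 and det(J) > 0.
stable = bool(np.all(eigvals.real < 0))
```

This is the baseline against which the diffusion terms act: the reaction rates alone keep the homogeneous state stable.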
Assuming the existence of a stable steady-state solution, and the corresponding requirements on the rate parameters a-d, we next set the diffusive terms D_X, D_Y > 0, taking the separable ansatz for the general solution:

(X, Y)(θ, t) = Σ_μ (A_μ, B_μ) e^{λ_μ t} e^{iμθ}.  (35)

Furthermore, we set D_X < D_Y to generate the local excitation and lateral inhibition of the morphogen concentrations (Hoyle, 2006; Murray, 2003), evoking the connectivity function ansatz of Equation 9. The underlying steady state then remains stable if the real parts of the eigenvalues λ_μ, obtained from the modified Jacobian, are negative. With the reaction rates fixed from the stability conditions above, these eigenvalues are functions of the diffusion parameters alone. Thus, the conditions for stability may be thought of in terms of a bifurcation diagram in the (D_X, D_Y) phase space, comparable to Figure 13.
From here, a set of additional conditions may be placed on D X and D Y so that the system undergoes a Turing bifurcation, wherein at least one λ μ becomes positive and the homogeneous steady state loses its stability. With the addition of a small random perturbation, the instability results in the growth of the corresponding eigenmodes e iμθ , such that, over time, Equation 35 is dominated by the eigenmodes with largest λ μ . These represent stationary waves whose wavelengths are set by the circumference of the ring (i.e., by the spatial properties of the medium) and whose growth is bounded by the higher order terms that had been initially ignored in the near-equilibrium formulation (Murray, 2003;Scholz, 2009;Turing, 1952). The underlying circular symmetry is thus broken and a spatial pattern is formed.
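The Turing bifurcation can be seen directly in the dispersion relation: for each mode μ, diffusion shifts the reaction Jacobian to [[a − D_X μ², b], [c, d − D_Y μ²]], and instability of a μ ≠ 0 mode alongside a stable μ = 0 mode signals pattern formation. A sketch with illustrative parameter values (a slowly diffusing activator, a fast inhibitor):

```python
import numpy as np

# Reaction rates as before (a, c > 0; b, d < 0), with D_X < D_Y.
# All numerical values are illustrative assumptions.
a, b, c, d = 1.0, -2.0, 3.0, -2.0
D_X, D_Y = 0.05, 10.0

def growth_rate(mu):
    """Largest real part among the eigenvalues lambda_mu of mode mu."""
    J_mu = np.array([[a - D_X * mu**2, b],
                     [c, d - D_Y * mu**2]])
    return np.linalg.eigvals(J_mu).real.max()

rates = [growth_rate(mu) for mu in range(6)]
# The homogeneous mode (mu = 0) decays, some intermediate mu grows, and very
# high mu is damped again by diffusion: the signature of a Turing instability.
```

The fastest-growing intermediate mode sets the wavelength of the emerging stationary wave, exactly as described in the text.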
In his seminal paper, Turing extrapolated this mechanism to explain various biological phenomena, such as the development of petals on a flower, spotted color patterns, and the growth of an embryo along various directions from an original spherical state. A hallmark of each of these examples is that there is no input into the system, so the emergent patterns reflect a mechanism of spontaneous symmetry breaking, onset by a perturbation of "some influences unspecified" (Turing, 1952). In light of this, we ask, can the visual cortex self-generate the perception of hue?
Spontaneous symmetry breaking and color hallucinations. To assess our model's ability to self-organize in the absence of visual input, we set c = 0 and seek to establish the presence of a Turing mechanism marked by the following three features:
1. A system comprised of local excitation and long-range inhibition.
2. Spontaneous symmetry breaking in the absence of input within a region of a parameter space defined by the relevant bifurcation parameter(s).
3. The emergence of patterns that are bounded by the system's nonlinearities.
As noted above, requiring D_X < D_Y in Equation 34 sets up the diffusion-driven activator-inhibitor dynamics governing the evolution of the morphogen concentrations across the ring of cells. With these assumptions, Turing's reaction-diffusion equations bear a strong resemblance to our one-population generalization of the excitatory and inhibitory color cell dynamics in the absence of LGN input:

∂a(θ, t)/∂t = −a(θ, t) + β[ ∫ (J_0 + J_1 cos(θ − θ′)) a(θ′, t) dθ′ − T ]_+,

where the local excitation and long-range inhibition are incorporated in the anisotropic interaction term J_1 cos(θ − θ′), and the reaction terms aX(θ, t), bY(θ, t), cX(θ, t), and dY(θ, t) find their neural analogue in the decay term −a(θ, t). Importantly, the notions of "local" and "long-range" here describe interactions in the DKL space, and not in the physical cortical space correlate to Turing's ring of tissue. Accordingly, we treat J_1 as the Turing bifurcation parameter and look for spontaneous color tuning beyond a bifurcation point J_1 = J_1^T. Additionally, we observe that the onset of pattern formation is determined by a critical value of T, so that the relevant parameter space for our exploration is (J_1, T) (Figure 14). This analysis is summarized in Figure 15.

We observe that within the analytical regime, the system generates a stable homogeneous steady-state solution a_∞(θ) = const ≥ 0 for all values of the parameters β, T, J_0, and J_1 (Figure 15A-B). As such, from the closed-form linear steady-state solution (Methods: Linear Solution), we obtain

a_∞(θ) = −βT / (1 − 2πβJ_0) for T ≤ 0, and a_∞(θ) = 0 for T > 0.

We further observe that beyond J_1 = 1/(πβ), a stable homogeneous steady-state solution remains at a_∞(θ) = 0 for T ≥ 0 (Figure 15C). However, at T = 0, this radial symmetry is broken, and the cortex generates spontaneous tuning curves with peak locations determined by the random initial conditions (Figure 15D-F). Thus, the system bifurcates when J_1 = 1/(πβ) and T = 0, permitting the onset of color hallucinations in a region defined by these two values (Figure 14).
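This symmetry breaking can be reproduced in simulation. The sketch below assumes the same threshold-linear ring form used earlier and takes a slightly negative threshold so that the resting state is weakly active; with J_1 above 1/(πβ), small random fluctuations are amplified into a hue-selective bump whose peak location depends on the initial conditions, while a subcritical J_1 lets them decay back to the homogeneous state. All parameter values are illustrative assumptions.

```python
import numpy as np

def spontaneous(J1, J0=-0.1, beta=1.0, T=-0.1, N=128, dt=0.05, steps=4000, seed=1):
    """Assumed ring dynamics with no LGN input (c = 0), from random fluctuations."""
    theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
    rng = np.random.default_rng(seed)
    a = 0.01 * rng.random(N)  # small random resting-state fluctuations
    K = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) * (2 * np.pi / N)
    for _ in range(steps):
        h = K @ a             # purely recurrent input: c = 0
        a += dt * (-a + beta * np.maximum(h - T, 0.0))
    return a

a_super = spontaneous(J1=0.5)  # J1 > 1/(pi*beta) ~ 0.318: symmetry broken
a_sub = spontaneous(J1=0.2)    # subcritical: homogeneous state persists
```

In the supercritical case the rectification cuts off the flanks of the growing mode, leaving a bounded, strongly modulated tuning curve; in the subcritical case the activity relaxes to a near-uniform profile.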
Note that the unimodal tuning curves predict stationary, single-hued phosphenes. Extension to other CO blob networks would therefore indicate a hallucination comprised of multiple phosphenes of varied hues, each determined by the local cortical activity at hallucinatory onset.
Bearing these predictions in mind, we point to a recent functional MRI study of blind patients experiencing visual hallucinations (Hahamy, Wilf, Rosin, Behrmann, & Malach, 2021). The study attributes these visions to the activation of the neural networks underlying normal vision, precipitated by the hyper-excitability of the cortex to spontaneous resting-state activity fluctuations when it is deprived of external input. This is suggestive of the required lowering of neuronal threshold at the onset of color hallucinations predicted here. Notably, a reduction in membrane potential threshold has also been attributed to the action of hallucinogens (De Vos, Mason, & Kuypers, 2021;Varley, Carhart-Harris, Roseman, Menon, & Stamatakis, 2020).
Finally, we note that the stability of the emergent tuning curves is determined by the bifurcation diagram of Figure 13. This means that, in addition to expanding the region of stability in the presence of chromatic stimuli, the model's nonlinearity allows for stable, spontaneous color hallucinations in their absence.
Having thus established a Turing-like mechanism for our model's self-organization, we end with an analogy to Turing's original diffusion-driven formulation. In his concluding example, Turing applies the mechanism to explain the growth of an embryo along various axes of its original spherical state. This growth is driven by diffusion, directed by the "disturbing influences," shaped by the system's chemical and physical properties, and bounded by the system's nonlinearities. It is all too clear to see the parallels with our hue tuning model, wherein a hallucination is driven by the anisotropy of the cortical interactions, its hue determined by the initial conditions, its selectivity shaped by the cortical parameters, and its stability ensured by the thresholding nonlinearity.

DISCUSSION
This paper presents a neural field model of color vision processing that reconceptualizes the link between chromatic stimuli and our perception of hue. It does so guided by the premise that the visual cortex initiates the mixing of the cardinal L-M and S-(L+M) pathways and thereby transforms the discrete cone-opponent signals to a continuous representation of chromatic information. Such mixing mechanisms have been implemented by previous combinatorial models of color processing, though through a largely feedforward approach or at the level of the single neuron.
Our theory bears in mind the mixing mechanism, but reframes the stage-wise combinatorial scheme as one based on the nonlinear population dynamics within the visual cortex. Accordingly, we propose a hue-based cortical connectivity, built upon the cortical hue map micro-architecture revealed by recent optical imaging studies of V1. By considering the intracortical network interactions, we have accounted for V1 cells responsive to the gamut of DKL directions without the need to fine-tune the cortical parameters. We do so without restricting to a particular category of V1 neuron, as both single-opponent and double-opponent, and altogether novel types of cells, have been suggested as the primary messengers of chromatic information. Rather, we zoom out from the individual neuron's receptive field to model the aggregate, population-level properties and, in particular, the stable representation of hue. We thereby propose that chromatic processing in the visual cortex is, in its essence, a self-organizing system of neuronal-activity pattern formation, capable of encoding chromatic information in the presence of visual stimuli and generating information in their absence.
Further, in assuming modularity for chromatic processing, we have not ruled out a mechanism for joint feature processing. Our choice to focus on the unoriented color cells within the CO blob regions allowed us to parse out the chromatic pathway for an independent analysis, but should not be interpreted as a claim about its functional independence. We leave open the question of the functional and anatomical separation of the various visual streams.
Equally unsettled is the question of how much S cone input contributes to the mixing of the cone-opponent channels, with some studies showing a relatively weak S cone input into the neurons of V1, compared to its L and M cone counterparts (Li et al., 2022;Xiao, 2014). The variations across these experiments may stem, in part, from differences in optical imaging and electrode penetration techniques, including the particulars of the chromatic stimulus used (Li et al., 2022;Liu et al., 2020;Polimeni, Granquist-Fraser, Wood, & Schwartz, 2005;Salzmann et al., 2012). On the whole, however, single-cell recordings have identified two main types of color-responsive regions: color patches that contain neurons tuned exclusively to stimuli modulating either of the cone-opponent pathways, and patches with neurons exhibiting a mixed sensitivity to a combination of the two (Landisman & Ts'o, 2002a;Li et al., 2022;Livingstone & Hubel, 1984). Further experiments on the connectivity between these regions, and among the single-and double-opponent color cell populations of which they consist, may point to added micro-architectures for the hue maps, along the lines of the geometric orientation models of Bressloff and Cowan (2003a) and Bressloff et al. (2001).
Finally, we emphasize that the mechanism we offer departs from previous combinatorial color models that predict hue sensation at the final stage of processing (De Valois & De Valois, 1993; Mehrani et al., 2020), as well as from neural field models that conflate cone- and color-opponency in their interpretations (Faugeras, Song, & Veltz, 2022; Smirnova, Chizhkova, & Chizhov, 2015; Song, Faugeras, & Veltz, 2019). The emergent hue tuning curves we have characterized are a network property reflective of the physiological neuronal responses, and should not be confounded with our perception of hue. A photon of wavelength 700 nm striking a retina is no more "red" than any other particle; color is a perceptual phenomenon not yet represented in these first stages of vision. By recognizing that the hue tuning mechanism of the visual cortex is an early stop in the visual pathway, we point to the need for further field theory approaches to our understanding of color perception.
where the constants K_ν are determined by the Fourier coefficients c_ν(0) of the initial activity a(θ, 0).
For J_0 < 1/(2πβ) and J_1 < 1/(πβ), the solution approaches the globally asymptotic stable steady-state tuning curve. We call the corresponding (J_0, J_1) parameter space the analytical regime.
Globally asymptotic stable steady state: A system's equilibrium solution which attracts all other solutions regardless of initial conditions.

Evolution of Peak Angle
We first assume that upon receiving a stimulus θ_0 at time t = 0, the network has a random spontaneous firing rate a(θ, 0). Using Equation 15, with c_0 ∈ ℝ and c_μ = c*_{−μ}, we expand the activity profile in terms of the initial values of the corresponding coefficients c_μ(0):

a(θ, 0) = c_0(0) + 2 Σ_{μ=1}^{∞} r_μ(0) cos(μθ − ϕ_μ(0)),

with tan(ϕ_μ) = c^I_{−μ}/c^R_{−μ} and r_μ² = (c^I_{−μ})² + (c^R_{−μ})², such that the ϕ_μ(0) are completely determined by the initial conditions. Thus, at t = 0 the activity profile is composed of an infinite sum of cosine functions, each peaked about a corresponding disparate angle ϕ_μ, and therefore has no discernible peak. To characterize the evolution of the network activity from these initial conditions to its hue tuning profile at t → ∞, we seek to obtain the steady-state values of ϕ_μ and the corresponding tuning curve peak inductively as follows.
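This decomposition can be checked numerically. The sketch below builds a random band-limited profile as a stand-in for a(θ, 0), extracts r_μ and ϕ_μ from the Fourier coefficients using tan(ϕ_μ) = c^I_{−μ}/c^R_{−μ}, and reconstructs the profile from the cosine sum implied by c_μ = c*_{−μ}; the grid size and test profile are illustrative assumptions.

```python
import numpy as np

N = 256
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
rng = np.random.default_rng(0)

# Random band-limited stand-in for a(theta, 0), built from modes mu = 0..5.
c0 = rng.standard_normal()
r = rng.random(5) + 0.1
phi = rng.uniform(-np.pi, np.pi, 5)
a0 = c0 + sum(2 * r[m] * np.cos((m + 1) * theta - phi[m]) for m in range(5))

def coeff(a, mu):
    """Fourier coefficient c_mu = (1/2pi) * int a(theta) e^{-i mu theta} dtheta."""
    return (a * np.exp(-1j * mu * theta)).sum() / N

# Reality of a(theta) gives c_mu = conj(c_{-mu}); recover r_mu and phi_mu
# from the negative-index coefficients, as in the text.
c_neg = np.array([coeff(a0, -(m + 1)) for m in range(5)])
r_rec = np.abs(c_neg)
phi_rec = np.arctan2(c_neg.imag, c_neg.real)

# Reconstruct a(theta, 0) = c_0 + 2 * sum_mu r_mu * cos(mu*theta - phi_mu).
recon = coeff(a0, 0).real + sum(
    2 * r_rec[m] * np.cos((m + 1) * theta - phi_rec[m]) for m in range(5))
```

For a band-limited profile the discrete coefficients are exact, so the amplitudes, phases, and full profile are recovered to machine precision.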
Let us first take μ = 1. As seen in Figure 4, we note that δ_1(t) and δ_2(t) are symmetric about γ(t), such that δ_2 + γ = 2π − (δ_1 + γ). Using this symmetry, we factor out cos(γ) and sin(γ), respectively, in the equations for c^R_{−1} and c^I_{−1} in Equation 26, with the time arguments suppressed. Letting F⋆ and γ⋆ denote the steady-state values of F and γ, respectively, we obtain the steady-state values c^R_{−1} = F⋆ cos(γ⋆) and c^I_{−1} = F⋆ sin(γ⋆). Thus, we have tan(γ⋆) = c^I_{−1}/c^R_{−1}. Similar calculations for the steady-state values of the higher order coefficients yield the general equations.