Recent work suggests that changing convolutional neural network (CNN) architecture by introducing a bottleneck in the second layer can yield changes in learned function. To understand this relationship fully requires a way of quantitatively comparing trained networks. The fields of electrophysiology and psychophysics have developed a wealth of methods for characterizing visual systems that permit such comparisons. Inspired by these methods, we propose an approach to obtaining spatial and color tuning curves for convolutional neurons that can be used to classify cells in terms of their spatial and color opponency. We perform these classifications for a range of CNNs with different depths and bottleneck widths. Our key finding is that networks with a bottleneck show a strong functional organization: almost all cells in the bottleneck layer become both spatially and color opponent, and cells in the layer following the bottleneck become nonopponent. The color tuning data can further be used to form a rich understanding of how a network encodes color. As a concrete demonstration, we show that shallower networks without a bottleneck learn a complex nonlinear color system, whereas deeper networks with tight bottlenecks learn a simple channel opponent code in the bottleneck layer. We develop a method of obtaining a hue sensitivity curve for a trained CNN that enables high-level insights that complement the low-level findings from the color tuning data. We go on to train a series of networks under different conditions to ascertain the robustness of the discussed results. Ultimately our methods and findings coalesce with prior art, strengthening our ability to interpret trained CNNs and furthering our understanding of the connection between architecture and learned representation. Trained models and code for all experiments are available at https://github.com/ecs-vlc/opponency.
1 Introduction
The tendency for learning machines to exhibit oriented-edge receptive fields, similar to those found in nature, has long been observed (Bell & Sejnowski, 1997; Krizhevsky, Sutskever, & Hinton, 2012; Lehky & Sejnowski, 1988; Lindsey, Ocko, Ganguli, & Deny, 2019; Olah et al., 2020a; Olshausen & Field, 1996; Shan, Zhang, & Cottrell, 2007; Wang, Cottrell, & Kanan, 2015). However, learning machines rarely exhibit the functional organization found in nature. In convolutional neural networks, we typically find oriented-edge receptive fields in early layers, rather than a progression from center-surround receptive fields to oriented-edge receptive fields as is common in biological vision (Hubel & Wiesel, 2004). In an important work, Lindsey et al. (2019) demonstrate that the addition of a bottleneck to a deep convolutional network can induce center-surround receptive fields, suggesting a causal link between anatomical constraints and the nature of learned visual processing. In order to refine our understanding of this causal relationship, we pursue an electrophysiological interpretation of convolutional networks that incorporates opponency and color tuning.
Cells with center-surround and oriented edge receptive fields are spatially opponent. From the classic work of Kuffler (1953), Hubel and Wiesel (1962, 2004), and others summarized in Troy and Shou (2002) and Martinez and Alonso (2003), these neurons form the building blocks of feature extraction in the primary visual cortex. Formally, a neuron that is excited by a particular stimulus and inhibited by another in the same stimulus space is said to be opponent to that space. For example, if a neuron is excited by a stimulus in some part of the visual field and inhibited in another, it is spatially opponent. Alternatively, if a neuron is excited by stimulus of a certain wavelength and inhibited by a stimulus of another, it is spectrally opponent. Spectral opponency, first hinted at by the complementary color system from Goethe (1840) and later detailed by Hering (1920), was observed and characterized at a cellular level only around 1960 (Daw, 1967; De Valois, Smith, Kitai, & Karoly, 1958; Naka & Rushton, 1966; Wagner, MacNichol, & Wolbarsht, 1960; Wiesel & Hubel, 1966). Combined, the theories of spatially opponent feature extraction in the visual cortex (Hubel & Wiesel, 1962, 2004; Kuffler, 1953; Troy & Shou, 2002), trichromacy (Helmholtz, 1852; Maxwell, 1860; Young, 1802), and spectral opponency (De Valois, Abramov, & Jacobs, 1966) constitute a deep understanding of the early layers of visual processing in nature.
The notional elegance of the above theories has served to motivate much of the progress made in computer vision, most notably including the development of multilayer (deep) convolutional neural networks (CNNs) (Bottou et al., 1994; Le Cun & Bengio, 1995; Le Cun et al., 1990) that are now so focal in our collective interests. Multilayer CNNs are learning models designed to mimic the functional properties, namely, spatial feature extraction and retinotopy, of the retina, lateral geniculate nucleus (LGN), and primary visual cortex. By virtue of the ease with which one can train such models, multilayer CNNs offer a unique opportunity to study the emergence of visual phenomena across the full gamut of constraints and conditions of interest. It is widely observed that trained convolutional neurons exhibit the same kinds of receptive fields as those found in nature and that the learned features become successively more abstract with depth (Krizhevsky et al., 2012; Olah et al., 2020a; Olah, Mordvintsev, & Schubert, 2017; Zeiler & Fergus, 2014). However, we do not typically see structural organization of these cell types. For example, edge and color information is confounded in the first layer of ZFNet (Zeiler & Fergus, 2014), with some color information also encoded in the second layer. Furthermore, as Lindsey et al. (2019) addressed, none of the convolutional neurons have center-surround receptive fields of the kind observed in retinal ganglion cells. Rafegas and Vanrell (2018) analyzed color selectivity in a deep CNN, finding cells that are excited by two groups of stimuli that are roughly opposite in hue. To classify these cells as opponent would additionally require an understanding of the stimuli that inhibit each cell.
There has been some exploration of the role of inhibition in deep CNNs (Olah et al., 2018), although we are not aware of any demonstration that learned convolutional cells are ever truly opponent in the sense that they are both inhibited below and excited above a baseline by some stimuli.
With the exception of recent developments in metalearning (Tan & Le, 2019; Zoph & Le, 2017), new convolutional architectures are typically designed with the aim of increasing either width (Zagoruyko & Komodakis, 2016) or depth (He, Zhang, Ren, & Sun, 2016; Szegedy et al., 2015) while preventing the vanishing gradient problem with auxiliary losses (Szegedy et al., 2015), skip connections (He et al., 2016), dense connections (Huang, Liu, Van Der Maaten, & Weinberger, 2017), or stochastic depth (Huang, Sun, Liu, Sedra, & Weinberger, 2016), to name a few. However, the finding by Lindsey et al. (2019) that network architecture can affect the fundamental type of function that is learned (rather than simply affecting capacity) suggests a new approach to both architecture design and interpretability. Specifically, if we can improve our understanding of the bias introduced by the network architecture, we may be able to design new architectures with specific goals in mind or better interpret the performance of preexisting ones.
Clearly, research in this space has the potential to affect our understanding of both deep learning and the neuroscience of vision. In order to realize this potential, large-scale studies are needed that properly establish the connections between the model architecture, the data space, and the kind of visual processing that is learned. Lindsey et al. (2019) mainly rely on qualitative assessment for the identification of center-surround and oriented edge receptive fields, but do propose some quantitative analyses such as the variance in gradient with respect to different inputs as a measure of the linearity of the neuron. The highly detailed analyses of Olah et al. (2020b) give a comprehensive understanding of the function of particular neurons or circuits in deep networks; however, each functional unit or group is currently identified manually. The procedure that Rafegas and Vanrell (2018) proposed could be automated but involves the costly process of determining the image patches that most excite each cell. The Brain-Score project from Schrimpf et al. (2018) is an attempt at providing an assessment of the similarity between a given network and various neural and behavioral recordings from primates. This is uninformative in the sense that it does not provide any information regarding precisely how the function of the network is similar to that of the primate visual system. The same could be said of the work of Gomez-Villa, Martin, Vazquez-Corral, and Bertalmio (2019), who find evidence that CNNs are susceptible to the same visual illusions as those that fool human observers.
In this letter, we develop a framework for automatically classifying convolutional cells in terms of their spatial and color opponency, based on electrophysiological definitions from the neuroscience literature. In addition, we propose a method of obtaining a hue sensitivity curve for a given network, inspired by similar methods in psychophysics. Combined, these approaches provide a descriptor of the functions learned by CNNs that offers rich insight into how they encode information. We apply our framework to a color variant of the model from Lindsey et al. (2019) and demonstrate that following the introduction of a bottleneck, different cell types tend to be organized according to their depth in the network, with no such organization found in networks without a bottleneck. We detail the relationship of data, architecture, and learned representation through a series of control experiments. In total, we have trained 2490 models over nine different settings, all of which have been made publicly available, alongside code for all of our experiments, via PyTorch-Hub at https://github.com/ecs-vlc/opponency.
2 The Physiology and Psychophysics of Early Color Vision
Since the advent of Holmgren's electrophysiology experiments in 1866, which first showed the flow of electrical current in the retina, vision scientists have sought to understand the cellular mechanisms that allow us to see. In a series of articles, Adrian and Matthews (1927a, 1927b, 1928) started to explore the electrical response of the retina to light. In later experiments, practitioners explored how single cells respond to different stimuli; for example, Hartline's (1938) early measurements of the response of single optic nerve fibers to illumination, Barlow's “fly detectors,” Lettvin, Maturana, McCulloch, and Pitts's (1959) “bug perceivers,” and Hubel and Wiesel's (1962) classic experiments in understanding receptive field structure. As a consequence of these experiments, a number of different observations and subsequent classifications of the behavioral characteristics of single cells and cell populations have been made. Although many of these classifications have been disproved or disputed, a number have stood the test of time. In particular, there is now a good shared understanding of how cells in the early parts of the visual systems of a range of primates respond to different spatial and spectral stimuli. This understanding covers the main pathway from the retina, through the lateral geniculate nucleus (LGN) and into early parts of the visual cortex (e.g., V1 and V2). In the following sections we highlight the key findings from previous physiological studies that directly relate to the work presented in this letter.
2.1 Spatial Opponency in Cells
Following Adrian and Matthews (1928), Hartline (1938, 1940) discovered evidence for different types of cellular response to stimuli, and in particular found that inhibitory interactions were sometimes revealed when multiple receptors were excited (Hartline, Wagner, & Macnichol, 1952). Kuffler (1953) and Barlow (1953) investigated this finding further and discovered cells with spatial receptive fields that are opponent to each other. These early results, obtained by presenting spots of light to different parts of the receptive field, showed an antagonism (opponency) between an inner center and outer surround. It is now widely accepted that such center-surround cells can be found in the retina and LGN (Hubel & Wiesel, 2004). In contrast, the majority of cells in V1 are orientation tuned (Livingstone & Hubel, 1984). One approach to analyzing this spatial selectivity involves the presentation of drifting high-contrast sinusoidal gratings (De Valois, Albrecht, & Thorell, 1982; Johnson, Hawken, & Shapley, 2001, 2008; Lennie, Krauskopf, & Sclar, 1990; Levick & Thibos, 1982; Zhao, Chen, Liu, & Cang, 2013). For example, one can characterize orientation selectivity through presentation of gratings with fixed frequency and contrast at a range of orientations (Johnson et al., 2008; Lennie et al., 1990; Levick & Thibos, 1982; Zhao et al., 2013). Similarly, a spatial frequency tuning curve can be obtained through the use of a fixed orientation and contrast (De Valois et al., 1982; Johnson et al., 2001). These analyses again grant a notion of spatial antagonism (spatial opponency here) in the cortex, where there exists a grating configuration that excites the cell and an opponent grating configuration that inhibits the cell (Shapley & Hawken, 2011).
Note that although atypical, presentation of grating stimuli has also been used to detect center-surround organization in the retina (Bilotta & Abramov, 1989), since these cells are highly tuned to spatial frequency but not orientation selective.
2.2 Color Vision and Color Opponency
With respect to color vision, the first major physiological finding relates to the discovery of two broad classes of cell that respond to color: those that exhibit opponent spectral sensitivity, and those (nonopponent) that do not. Experiments by De Valois et al. (1966) discovered spectrally opponent cells in the LGN of a trichromatic primate that are excited by particular single-wavelength stimuli and inhibited by others. Additionally, they discovered that, broadly speaking, the cells could be grouped into those that were excited by red and inhibited by green (and vice versa) and cells that were excited by blue and inhibited by yellow (and vice versa). Indeed, these cells would appear to align with Hering's unique hues (red, green, blue, and yellow) (Hering, 1920), which are unique in the sense that none of them can be viewed as a combination of the others. However, the experiments from Derrington, Krauskopf, and Lennie (1984) reveal that the cardinal axes of the chromatic response in the macaque LGN are not aligned to Hering's unique hues but to cone responses. The consequence of this finding is that spectrally opponent cells in early primate vision are best described as cone opponent. It has similarly been argued that so-called red/green opponency is better described as magenta/cyan and that these should be viewed as complementary colors rather than opponent (Pridmore, 2005, 2011). (For a more in-depth exposition of the contention between the physiological and psychophysical understanding of spectral opponency see Shevell & Martin, 2017.) Cells that are spectrally nonopponent have also been observed in primate LGN; these are cells that are not sensitive to specific wavelengths but respond to a broad range of wavelengths in the same way (either inhibitory or excitatory) (De Valois, Smith, Karoly, & Kitai, 1958; Jacobs, 1964).
In V1, it has been suggested that cells described as selective to orientation but not color by Livingstone and Hubel (1984) are in fact color opponent but with unbalanced cone inputs such that they respond to general changes in luminance (Johnson et al., 2001; Lennie et al., 1990).
More recently, techniques such as functional magnetic resonance imaging (fMRI) have been used to explore population coding of vision and color-related processes (Boynton, 2002; Engel, Zhang, & Wandell, 1997; Seymour, Williams, & Rich, 2015; Wade, Augath, Logothetis, & Wandell, 2008). In particular, studies have shown strong responses in V1 to stimuli that are preferred by spectrally opponent cells (Engel et al., 1997; Kleinschmidt, Lee, Requardt, & Frahm, 1996; Schluppeck & Engel, 2002). The work of Wade et al. (2008) validates that the early visual system of the macaque (where many of the single-cell measurements of color vision have been taken) correlates strongly with humans in terms of overall population responses to chromatic contrast; this is important to our work since we seek functional archetypes that are of general efficacy in visual intelligence. It is, however, worth noting that Wade et al. also show that in later areas of the visual pathway, the topographical organization of the macaque is fundamentally different.
Following De Valois et al.'s initial findings, there has been a realization that cells responsive to color could be further grouped into single opponent and double opponent cells. The defining characteristic of double opponent cells is that they respond strongly to color patterns but are nonresponsive or weakly responsive to full-field color stimuli (e.g., solid color across the receptive field, slow gradients, or low-frequency changes in color) (Shapley & Hawken, 2011). In the retina, double opponency presents as spectrally opponent cells with center-surround organization (Troy & Shou, 2002). In the primary visual cortex, there are both the spectrally opponent cells with oriented receptive fields mentioned above and nonoriented double opponent cells in the cytochrome oxidase rich blobs (Livingstone & Hubel, 1984). Note that one interpretation is that double opponent cells are both spatially and spectrally opponent.
2.3 Linearity of Retinal Ganglion Cell Response
There is a connection between anatomy and the relative presence of linear and nonlinear cells in the retina. For example, midget cells, which are well approximated by a linear model (Smith et al., 1992), are the most prevalent ganglion cell type in the human retina (Dacey, 1993). In contrast, the most prevalent ganglion cell type in the mouse retina is a nonlinear feature detector that is thought to act as an overhead predator detection mechanism (Zhang, Kim, Sanes, & Meister, 2012), not dissimilar to the previously noted fly detectors and bug perceivers. In their experiments with CNNs, Lindsey et al. (2019) suggest that the contrast between the anatomy of the primate and mouse visual systems can be considered in terms of network depth. The authors subsequently present evidence that the natural differences in function derive from these associated differences in visual system anatomy. In particular, deeper networks learn linear features in early layers, whereas shallower networks learn nonlinear features.
3 Opponency in Artificial Vision
The notion of a spatially opponent receptive field has a long history in computer vision. Notably, the Marr-Hildreth algorithm for edge detection (Marr & Hildreth, 1980) performs a Laplacian of gaussian (often approximated by a difference of gaussian (DoG)), which resembles the function performed by center-surround ganglion cells in the retina. Oriented-edge receptive fields were also modeled in early approaches to visual recognition. In particular, edge orientation histograms (Freeman & Roth, 1995; McConnell, 1986) and later histograms of oriented gradients (Dalal & Triggs, 2005) are similar in principle to a layer of neurons with oriented-edge receptive fields with different rotation, frequency, and phase. DoG and edge orientation assignment are also integral components of the well-known scale invariant feature transform (SIFT) descriptor (Lowe, 1999).
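To make the connection concrete, a difference-of-gaussians kernel of the kind used in the Marr-Hildreth approach can be written in a few lines. This is an illustrative sketch; the function name and parameter values are ours, not drawn from any of the cited works.

```python
import math

def dog_kernel(size=9, sigma_center=1.0, sigma_surround=2.0):
    """Difference-of-gaussians kernel: a narrow excitatory center minus a
    wider inhibitory surround, resembling a center-surround receptive field
    and approximating the Laplacian of gaussian used for edge detection."""
    def gauss(x, y, sigma):
        return math.exp(-(x * x + y * y) / (2 * sigma * sigma)) / (2 * math.pi * sigma * sigma)
    c = size // 2
    return [[gauss(x - c, y - c, sigma_center) - gauss(x - c, y - c, sigma_surround)
             for x in range(size)] for y in range(size)]

kernel = dog_kernel()
# The center is excitatory (positive) and the far surround inhibitory (negative).
assert kernel[4][4] > 0
assert kernel[0][4] < 0
```

Convolving an image with such a kernel and finding zero crossings of the response recovers edges, which is the essence of the Marr-Hildreth algorithm.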
In addition to approaches that directly model opponent receptive fields, several studies have shown emergent opponency in learning machines. For example, Lehky and Sejnowski (1988) found evidence for orientation selectivity in a neural network trained with backpropagation to determine the curvature of simple surfaces in procedurally generated images. Olshausen and Field (1996) demonstrated the emergence of basis functions that resemble oriented receptive fields when learning an efficient sparse linear code for a set of images. Similar results are presented by Bell and Sejnowski (1997), who show that a nonlinear infomax network, which performs independent component analysis (ICA), trained on images of natural scenes, produces sets of visual filters that show orientation and spatial selectivity. Lehky and Sejnowski (1999) use a four-layer neural network to map cone responses to a population of gaussian tuning curves in CIE color space and demonstrate color opponent neurons in the hidden layers. Karklin and Lewicki (2003) propose a hierarchical probabilistic approach to learning a nonlinear efficient code. The authors demonstrate the emergence of higher-order features such as object location, scale, and texture. Alternatively, Shan et al. (2007) introduced recursive ICA, where the outputs of a previous application of ICA are transformed such that it may be reapplied. The authors again demonstrate the emergence of these higher-order features when applying their model to natural images. Wang et al. (2015) use recursive ICA to automatically learn visual features that accord with those found in the early visual cortex. The authors subsequently model the object recognition pathway using gnostic fields (Kanan, 2013, 2014), a brain-inspired model of object categorization. Wang et al. (2015) demonstrate that the features in the first ICA layer, trained on natural images, are oriented edges with the color opponent characteristics typical of V1 neurons (dark-light, yellow-blue, red-green). The second-layer filters are sensitive to edges of different frequency and orientation, reminiscent of complex cells in V1. Cells that exhibit responses similar to simple and complex neurons in V1 can only be observed in the two ICA layers.
In this work we are primarily concerned with opponency in deep CNNs, for which some early approaches used variants of ICA to learn the filters (Le, Karpenko, Ngiam, & Ng, 2011). Modern CNNs are trained using the backpropagation algorithm, similar to the work of Lehky and Sejnowski (1988, 1999), such that the features learned are dependent on the objective function of the model. In addition, CNNs are typically constructed with many more layers of nonlinear feature extraction than the one or two layers used in ICA. As a result, CNNs permit a notion of functional organization: “what happens where” rather than just “what happens.” Due to the connections between CNNs and ICA, one might reasonably expect CNNs to exhibit emergent opponency. This is indeed the case, with multiple works pointing out that learned filters in early layers appear to be spatially and color selective (Krizhevsky et al., 2012; Lindsey et al., 2019; Olah et al., 2020a; Rafegas & Vanrell, 2018; Zeiler & Fergus, 2014).
Rafegas and Vanrell (2018) propose an automated measurement of the spectral selectivity of convolutional neurons. For their approach, the authors find image patches that maximally excite each neuron and construct an index with high values when these patches are consistent in color. The authors further suggest that a neuron is double opponent if it is selective to two distinct colors that are roughly opposite in hue. Note that these definitions of opponency are not direct correlates of the previously discussed definition. The key difference is that the electrophysiological definition requires an understanding of the stimuli that inhibit cells in addition to the stimuli that excite them. This is important since although cells that are excited by two colors may be projecting the input on an opponent axis, they may also just be activating for both colors indiscernibly. The double color selective neurons found by Rafegas and Vanrell (2018) are typically red-cyan, blue-yellow, and magenta-green. These do not closely reflect the opponent axes of the primate LGN. This is to be expected since cone opponency observed in nature translates to channel opponency in a convolutional model, and so we can reasonably expect the opponent axes to be aligned with extreme RGB values rather than cone responses or Hering's unique hues (although note that these are a subset of the RGB extrema).
4 Methods
In this section, we detail our methodology for classifying convolutional cells according to their spatial and color processing. Generalizing the discussed physiological definitions, to classify a cell as opponent, we require a set of stimuli, the ability to measure the response of the cell to each stimulus, and a measurement of the baseline response of the cell (in order to establish excitation and inhibition). The response of each neuron to the input is readily available in a deep network, and we define the baseline as the response of the cell to a black input (a matrix of zeros). If there exists a stimulus for which the cell is excited (responds above the baseline) and a stimulus for which the cell is inhibited (responds below the baseline), then the cell is opponent to the axis of variance of the stimulus set. We first describe the two stimulus sets that we will use for the classification of spatial and color opponency. We go on to discuss automatic classification of a cell as double opponent and how we can infer the specific type of an opponent cell. In addition, we introduce an approach for studying the hue sensitivity curve of a deep network, inspired by Bedford and Wyszecki (1958). The experiments in this section form our core results. We later perform a control study to determine how well these results extend to different settings.
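The classification rule above can be sketched directly from the definition: compare the cell's tuning curve to its baseline response. This is a minimal illustration, not the repository's implementation; the function name and the tolerance parameter are ours.

```python
def classify_cell(responses, baseline, tol=1e-6):
    """Classify a cell from its responses to a stimulus set.
    A cell is opponent if some stimulus excites it above baseline and
    another inhibits it below baseline; excitation or inhibition alone
    gives a non-opponent cell; neither gives an unresponsive cell."""
    excited = any(r > baseline + tol for r in responses)
    inhibited = any(r < baseline - tol for r in responses)
    if excited and inhibited:
        return "opponent"
    if excited or inhibited:
        return "non-opponent"
    return "unresponsive"

assert classify_cell([0.2, -0.3, 0.1], 0.0) == "opponent"
assert classify_cell([0.2, 0.5], 0.0) == "non-opponent"
assert classify_cell([0.0, 0.0], 0.0) == "unresponsive"
```

In practice, `responses` would be the activations of a single convolutional neuron to each stimulus in the set, and `baseline` its activation for an all-zeros (black) input.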
4.1 Spatial Opponency
Examples of grating patterns used as stimuli for the spatial opponency experiments. These samples have been generated using PsychoPy (Peirce et al., 2019), with different angles (), frequency of 4, and phase of 0.
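The caption describes stimuli generated with PsychoPy; as a rough stdlib stand-in, a sinusoidal grating parameterized by orientation, frequency, and phase can be generated as follows. The function and parameter names are ours, and the construction is a sketch rather than PsychoPy's.

```python
import math

def grating(size=32, theta=0.0, frequency=4, phase=0.0):
    """Sinusoidal grating in [-1, 1] with `frequency` cycles across the
    image at orientation `theta` (radians) and the given `phase`."""
    img = []
    for y in range(size):
        row = []
        for x in range(size):
            # Coordinate along the grating's axis of variation, in [0, 1).
            u = (x / size) * math.cos(theta) + (y / size) * math.sin(theta)
            row.append(math.sin(2 * math.pi * frequency * u + phase))
        img.append(row)
    return img

g = grating(theta=0.0)
# With theta = 0 the grating varies only along x, so all rows are identical.
assert g[0] == g[10]
```

Sweeping `theta`, `frequency`, and `phase` over a grid and recording each cell's response to every grating yields the spatial tuning curve used for classification.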
4.2 Color Opponency
To classify spectral opponency, De Valois et al. (1966) vary the stimuli according to wavelength. For our experiments, we propose using stimuli that vary in hue rather than wavelength. The reason for this is that the trained networks will expect an RGB input, and there is no exact mapping from wavelength to RGB. We could consider a more biologically valid color representation such as the cone response space used by Lehky and Sejnowski (1999) but opt for RGB as it is the standard practice in deep learning. We sample colors in the hue, saturation, lightness (HSL) color space for all integer hue values with saturation of 1.0 and lightness of 0.5. We then convert our stimuli to RGB before forwarding to the network and constructing the color tuning curve. We can perform classification by following the same process of comparing to the baseline as in the spatial setting. We use the terms hue opponency and color opponency interchangeably to refer to the different cell types found through this process.
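The hue sweep described above is straightforward to reproduce with Python's standard library, which provides HSL-to-RGB conversion (under the name HLS, with lightness and saturation swapped in the argument order). This is an illustrative sketch; the function name is ours.

```python
import colorsys

def hue_stimuli(n=360, saturation=1.0, lightness=0.5):
    """Full-field color stimuli sweeping all n hues in HSL, converted to
    RGB as the network expects. Note colorsys uses HLS argument order."""
    return [colorsys.hls_to_rgb(h / n, lightness, saturation) for h in range(n)]

stimuli = hue_stimuli()
assert stimuli[0] == (1.0, 0.0, 0.0)    # hue 0 is pure red
assert stimuli[120] == (0.0, 1.0, 0.0)  # hue 120 is pure green
assert stimuli[240] == (0.0, 0.0, 1.0)  # hue 240 is pure blue
```

Recording a cell's response to each of these stimuli, and comparing against the black-input baseline as before, gives the color tuning curve and the hue opponency classification.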
4.3 Double Opponency
As discussed, we can automatically classify a cell as double opponent if it is both color and spatially opponent. Our interests here lie in whether double opponent cells emerge in convolutional networks trained with a classification objective. Note that it has been observed that most spectrally opponent cells in macaque V1 are also orientation selective (Johnson et al., 2008), that is, they are double opponent. Unlike in the single-opponent cases, we do not define a notion of double nonopponency or double unresponsiveness (although such classifications could be made if required).
4.4 Excitatory and Inhibitory Colors
Using the color tuning curve, we can further determine the hue that most excites or inhibits each cell. Since cells are typically equipped with a nonlinear activation function, there may be a wide range of stimuli for which they produce the lowest response. As such, we use the preactivation output to infer the most inhibitory stimulus. This excitation and inhibition data will allow us to plot the distribution of colors to which cells in networks are tuned. Note that this distribution is insufficient to describe the type of opponency since it does not permit an understanding of whether there are distinct classes of opponent cell. For example, the distribution of excitation and inhibition does not distinguish between two groups of cells that are red/green opponent and blue/yellow opponent, respectively, or many groups of cells that are red/green opponent, green/blue opponent, blue/red opponent and so on. One option would be to apply a clustering technique to the most excitatory and inhibitory responses. However, this would introduce additional challenges through the need for appropriate algorithm and hyperparameter choice. Instead, we can additionally study the conditional distribution of maximal excitation, given maximal inhibition by some colors in a chosen range. We suggest evaluation of these conditional distributions for the following hue ranges: red (), yellow (), green (), cyan (), blue (), and magenta (). By enabling direct assessment of the inhibition/excitation pairs, this will give a much deeper understanding of the kinds of opponency present in the networks being analyzed.
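Extracting the most excitatory and most inhibitory hues amounts to an argmax over the activation curve and an argmin over the preactivation curve, as motivated above. A minimal sketch, with names of our choosing:

```python
def peak_hues(activation_curve, preactivation_curve):
    """Return (most excitatory hue, most inhibitory hue) as indices into
    the tuning curves. The most inhibitory hue is taken from the
    preactivation curve, since a nonlinearity such as ReLU can flatten a
    wide range of stimuli to the same minimal response."""
    excite = max(range(len(activation_curve)), key=activation_curve.__getitem__)
    inhibit = min(range(len(preactivation_curve)), key=preactivation_curve.__getitem__)
    return excite, inhibit

import math
act = [max(0.0, math.sin(math.radians(h))) for h in range(360)]  # post-ReLU
pre = [math.sin(math.radians(h)) for h in range(360)]            # preactivation
assert peak_hues(act, pre) == (90, 270)
```

The conditional distributions described above then follow by binning cells according to which of the six hue ranges contains their most inhibitory hue and plotting the distribution of their most excitatory hues within each bin.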
4.5 Hue Sensitivity
In addition to the hue tuning curve, we can consider the hue sensitivity of a network. Specifically, we look to replicate the experiments of Bedford and Wyszecki (1958), who showed that the change needed to elicit a just-noticeable difference in hue to a human observer is a complex function of wavelength. Long, Yang, and Purves (2006) further suggest that the reason for this nonuniform spectral sensitivity derives from the statistics of natural scenes, showing that the curve predicted from a data set of natural images bears a strong resemblance to that obtained for a human observer. Another way to explain the discrimination curve is in terms of cone responses (Zhaoping, Geisler, & May, 2011). This is more direct since scene statistics can be seen as indirectly controlling wavelength discrimination through evolutionary modifications of cone properties. It is expected that such a sensitivity curve, though over hue rather than wavelength, will enable a more holistic view of color tuning.
To perform a similar experiment to Bedford and Wyszecki (1958), note that the just-noticeable difference is inversely related to the gradient of the perceived color with respect to wavelength, which can be seen as a form of sensitivity. By virtue of automatic differentiation, it is trivial to obtain the gradient of the activation in a layer of our network with respect to the RGB input. Since the conversion from HSL to RGB is piece-wise differentiable, we can further obtain the approximate gradient of the activation with respect to hue. Note that we use the hidden-layer activation of a network rather than a notion of perceived color, so it is unclear whether these results should reflect the biological data. Furthermore, in light of the above, one might expect that the predominant features of the sensitivity curve should derive from the relative responses of the RGB channels as a function of hue.
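The quantity of interest is the derivative of a response with respect to hue, taken through the piece-wise differentiable HSL-to-RGB conversion. The letter obtains this by automatic differentiation; as a dependency-free sketch we can approximate the same quantity with a central finite difference, where `network` stands in for any scalar layer response and is an assumption of this example.

```python
import colorsys

def hue_sensitivity(network, hue, eps=0.5):
    """Approximate d(response)/d(hue) at `hue` (degrees) by a central
    finite difference through the HSL-to-RGB conversion. `network` is
    any callable mapping an RGB triple to a scalar response."""
    def respond(h):
        return network(colorsys.hls_to_rgb((h % 360) / 360, 0.5, 1.0))
    return (respond(hue + eps) - respond(hue - eps)) / (2 * eps)

# A toy "cell" reading only the red channel: sensitivity is zero on the
# flat red plateau near hue 0 and nonzero where red ramps down (60-120).
red_cell = lambda rgb: rgb[0]
assert abs(hue_sensitivity(red_cell, 10.0)) < 1e-9
assert abs(hue_sensitivity(red_cell, 90.0)) > 0
```

For a real network one would replace the finite difference with autodiff and aggregate the gradient magnitude over all cells in a layer to obtain the layer's hue sensitivity curve.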
5 Results
We now present the results for our core experiments with Retina-Net models trained on color CIFAR-10. We later perform a control study and provide an in-depth discussion of the implications of these results; our aim in this section is merely to present the core findings of this work.
5.1 Retina-Net
(a) Schematic of the Retina-Net model from Lindsey et al. (2019). (b, c) CIFAR-10 test accuracy for the different combinations of retinal bottleneck and ventral depth explored in the experiments. Mean and standard error given over 10 trials.
As with Lindsey et al.'s work, the networks are trained to perform classification on the CIFAR-10 data set (Krizhevsky, 2009), the only difference being that our model expects RGB inputs rather than grayscale. The choice of an object categorization task is validated by previous studies showing a strong correlation between neural unit responses of CNNs trained on such a task and the neural activity observed in the primate visual stream (Cadena et al., 2017; Güçlü & van Gerven, 2015; Yamins et al., 2014). (For further discussion of these results, refer to Lindsey et al., 2019.) Note that there may be many other learning tasks that are biologically valid in the sense that they yield similar functional properties. For example, self-supervised learning through deep information maximization (Hjelm et al., 2018) and contrastive predictive coding (Hénaff et al., 2019) may present viable alternatives to the supervised object recognition used here.
We train models across the same range of hyperparameters as Lindsey et al. (2019): bottleneck width and ventral depth. Again following Lindsey et al. (2019), we perform 10 repeats, with error bars denoting the standard deviation in results across all repeats. Networks were trained for 20 epochs with the RMSProp optimizer and a fixed learning rate, with initial weights sampled via the Xavier method (Glorot & Bengio, 2010). We note that in order to replicate the results from Lindsey et al. (2019), we required additional regularization. Specifically, we use weight decay and data augmentation (random translations by a fraction of the image width/height, and random horizontal flipping). Figures 2b and 2c give the average terminal accuracy for models trained on grayscale and color images, respectively. The grayscale accuracy curves match those given in Lindsey et al. (2019). The accuracy for networks trained on color images is generally higher, particularly for networks with no ventral layers. We will discuss additional training settings that are variants of the above.
5.2 Characterizing Single Cells
To begin, we illustrate our framework for characterizing single cells. Figure 3 shows the first-order receptive field approximations, orientation tuning curves, and color tuning curves for four cells in the bottleneck layer of a network with and . Following Lindsey et al. (2019), the receptive field approximation is the gradient (obtained through backpropagation) of the output of a single convolutional filter in a single spatial position (that is, a single convolutional “neuron”) with respect to a blank input with a constant value of 0.01. This small, positive amount is required to ensure that each of the cells is in the linear region of the ReLU activation function (that is, the gradient is nonzero). The gradient image is then normalized and scaled so that it can be interpreted visually. Visually, cells 1, 3, and 4 appear to be grayscale edge filters, whereas cell 2 is red/blue or magenta/cyan center-surround. However, the limitation of this analysis is the noise in the approximation. For example, one could argue that cell 1 is center-surround with a dark center and a magenta surround. Assessments given for any of the cells will be similarly contentious. Furthermore, this representation permits no understanding of inhibition. For example, cell 2 may be better described as tuned to blue hues in the interval rather than center-surround opponent.
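The receptive field approximation can be illustrated with a toy example. This is a sketch under stated assumptions, not the paper's implementation: the "neuron" is a single 3x3 ReLU convolution cell with made-up weights, and a finite difference stands in for backpropagation. The small positive baseline input plays the same role as in the text, keeping the cell on the linear side of the ReLU so the gradient is nonzero.

```python
def relu(x):
    return max(0.0, x)

def neuron(image, weights):
    """Response of one convolutional "cell" at a single 3x3 patch."""
    s = sum(image[i][j] * weights[i][j] for i in range(3) for j in range(3))
    return relu(s)

def receptive_field(weights, baseline=0.01, eps=1e-6):
    """Finite-difference gradient of the response w.r.t. each input pixel.

    The blank input with a small constant value keeps the pre-activation
    positive, so the ReLU is locally linear and the gradient recovers the
    filter weights.
    """
    blank = [[baseline] * 3 for _ in range(3)]
    grad = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            blank[i][j] = baseline + eps
            up = neuron(blank, weights)
            blank[i][j] = baseline - eps
            down = neuron(blank, weights)
            blank[i][j] = baseline
            grad[i][j] = (up - down) / (2 * eps)
    return grad

# Example: a center-surround-like filter (hypothetical weights).
w = [[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]]
rf = receptive_field(w)
```

For this linear-region cell the gradient image equals the filter weights; in a trained multilayer network the same procedure (via autodiff) yields only a first-order approximation, which is exactly the noise limitation discussed above.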
To further characterize each cell, we employ our described approach. To characterize spatial opponency, in Figures 3b and 3c, we provide orientation tuning curves for the frequency and phase that elicit the weakest and strongest responses, respectively. If the cell responds above the baseline in one tuning curve and below in the other, or if either curve crosses the baseline, then the cell is spatially opponent. We can therefore say that by our definition, cells 1, 3, and 4 are spatially opponent. In contrast, cell 2 is merely spatially nonopponent, always responding above the baseline for any choice of rotation, frequency, and phase. In addition to classifying opponency, we can identify the orientation tuning of each cell by further study of the curves in Figure 3c. Figure 3d gives the color tuning curves for each cell. As hue is the only parameter to consider, classification here is simpler: the cell is hue opponent if the tuning curve crosses the baseline. Given this definition, we can say that cells 1, 3, and 4 are hue opponent, although the extent of inhibition is different in each case. Furthermore, for every cell, we can identify the range of hues to which it is tuned.
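The classification rule just described reduces to a small amount of logic. The following sketch encodes it directly (the tolerance and the flat-list tuning-curve format are assumptions, not the paper's exact code): a cell is opponent if its responses fall both above and below the baseline, nonopponent if it responds but never crosses, and unresponsive otherwise.

```python
def classify_cell(tuning_curve, baseline, tol=1e-6):
    """Classify one cell from its tuning-curve responses.

    For hue opponency, pass the single color tuning curve; for spatial
    opponency, pass the concatenation of the weakest- and
    strongest-response orientation tuning curves.
    """
    above = any(r > baseline + tol for r in tuning_curve)
    below = any(r < baseline - tol for r in tuning_curve)
    if above and below:
        return "opponent"
    if above or below:
        return "nonopponent"
    return "unresponsive"
```

Under this rule, a cell like cell 2 above, which always responds above the baseline, is labeled nonopponent, while cells 1, 3, and 4 cross the baseline and are labeled opponent.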
Characterization of the four cells in the second retinal layer of a network with and . (a) The receptive field approximation obtained from the gradient of the cell with respect to a blank image. (b, c) Orientation tuning curves for the frequency and phase combination that yielded the smallest and largest response, respectively. (d) Color tuning curve over the hue wheel. Cells 1, 3, and 4 are double opponent; cell 2 is nonopponent.
Following interpretation of the tuning curves, we can now state that cells 1, 3, and 4 are double opponent and cell 2 is nonopponent both spatially and with regard to hue. Furthermore, for each cell, we can state the orientation and hue to which it is tuned. For example, cell 2 is broadly excited by blue stimuli but with a distinct peak at a hue of around . Cell 2 is spatially tuned to lines oriented in the interval . Although it is true that this approach gives us a deeper understanding of each cell, the real value is in the fact that each of the above can trivially be automated over the whole cell population. We therefore transition away from studying single cells and instead consider the distributions of different cell types for the remainder of the letter.
5.3 Characterizing Cell Populations
For each result in this section, we automate cell classification following our described method and present the distribution of each cell type as a function of retinal bottleneck width and ventral depth. This allows us to understand the effect that these two architectural variables have on the kinds of cells that are learned and where they are found in the network. Note, however, that cells in deeper layers are expected to have a highly nonlinear response and thus may have receptive field properties that are quite different from the opponent cells observed in shallower layers. As such, observations regarding these deeper layers (Ventral 2 in particular) should be considered only in the context of our approach and may not generally apply to the broader understanding of opponency.
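With per-cell labels in hand, the population analysis reduces to counting labels per layer. A minimal sketch (the layer names and labels below are illustrative, not measured data):

```python
from collections import Counter

def cell_type_distribution(labels):
    """Fraction of each cell type among the labeled cells of one layer."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cell_type: n / total for cell_type, n in counts.items()}

# Hypothetical per-layer labels produced by the single-cell classifier.
layer_labels = {
    "Retina 2": ["opponent", "opponent", "nonopponent", "unresponsive"],
    "Ventral 1": ["nonopponent", "nonopponent", "opponent", "nonopponent"],
}
dists = {layer: cell_type_distribution(v) for layer, v in layer_labels.items()}
```

Repeating this over every trained model gives the distributions plotted as a function of bottleneck width and ventral depth in the figures that follow.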
5.3.1 Spatial Opponency
Distribution of spatially opponent, nonopponent, and unresponsive cells in different layers of our model as a function of bottleneck width, for a range of ventral depths. Functional organization emerges for networks with tight bottlenecks. The last convolutional layer (e.g., Retina 2 when depth is 0, Ventral 1 when depth is 1, and so on) exhibits a reduction in spatial opponency. The penultimate convolutional layer (Retina 2 when depth is 1, Ventral 1 when depth is 2, and so on) exhibits an increase.
5.3.2 Color Opponency
Curves showing how the distributions of the color opponent classes change for the second retinal and first two ventral layers as the bottleneck is increased, for a range of ventral depths, are given in Figure 5. As the bottleneck decreases, the second retina layer exhibits a strong increase in hue opponency, approaching its maximum for a bottleneck of one. Conversely, cells in the first ventral layer show a decrease in hue opponency over the same region. For all but the tightest bottlenecks, up to half of the cells are hue nonopponent. Hue nonopponent cells show almost the exact opposite pattern to hue opponent cells. The implication of this result is that networks with strong hue opponent representations in the bottleneck layer exhibit an increase in hue nonopponent cells in Ventral 1. Since this spike in opponency returns in Ventral 2, we speculate that Ventral 1 merely preserves the opponent code from Retina 2 for downstream processing and learns a set of filters that are tuned but nonopponent. This is inconsistent with the evidence that spatially tuned cells in primate V1 are also color opponent (Lennie et al., 1990; Johnson et al., 2001). However, it should be stressed that our model of the primary visual cortex and ventral stream is highly simplified. In particular, we do not explicitly model the LGN or subsequent projections to different layers of V1, and greater similarity may well be observed in such a case. Similar to the results for spatial opponency, there is a consistent reduction and spike in hue opponency in the last and penultimate convolutional layers, respectively. Averaged over bottleneck width, the number of hue opponent cells is generally lower than the number of spatially opponent cells.
5.3.3 Double Opponency
Figure 6 shows the distribution of double opponent cells as a function of bottleneck size and ventral depth, giving a similar picture to the spatial and hue opponency plots. The results suggest that the majority of hue opponent cells are also spatially opponent. This finding is in alignment with the observation that most hue opponent cells in the macaque V1 are also orientation selective (Johnson et al., 2008).
5.3.4 Types of Opponency
Distribution of color opponent, nonopponent, and unresponsive cells in different layers of our model as a function of bottleneck width, for a range of ventral depths. Functional organization again emerges for networks with tight bottlenecks. Furthermore, the last and penultimate convolutional layers exhibit a reduction and increase in color opponency, respectively. This echoes the spatial findings from Figure 4.
Distribution of double opponent cells in different layers of our model as a function of bottleneck width and ventral depth. Most spatially opponent cells are also color opponent, and so these distributions bear a strong similarity to those in Figures 4 and 5.
Distribution of excitatory and inhibitory hues for cells in different layers of networks with random weights and networks trained on RGB images. Maximal excitation and inhibition before training are naturally aligned to the hues that correspond to RGB values of 255 or 0. Trained networks show a preference for green and magenta. Some cells are highly nonlinear, maximally excited by orange/red and cyan/blue.
Conditional distribution of excitatory hues for cells that are most inhibited by red (), yellow (), green (), cyan (), blue (), and magenta () for Shallow () and Deep () networks with Narrow () and Wide () bottlenecks. Narrow networks learn a simple color system, with cells that are maximally excited/inhibited by extreme RGB values (dashed vertical lines). Deep networks show an increase in cells that are most excited by blue.
We can now observe that the primary opponent axis in our networks is green/magenta, with cells that are inhibited by red or magenta and excited by green being unique to the Wide/Shallow networks. In addition, we can say that the majority of hue opponent cells (that is, cells in the Narrow networks) are channel opponent. In the Wide networks, we find cells that are broadly excited by orange/red and cyan/blue. These cells persist in the first ventral layer and are not typically present in Narrow networks. This suggests that the Wide networks are responsible for the peaks in Figure 7. Cells that are excited by blue and inhibited by yellow, red, and green appear more prominently in the Deep networks, with particular prevalence in the Narrow Deep networks. In general, the range of excitatory and inhibitory hues is greater in the Wide networks, suggesting increased prevalence of complex, nonlinear cells. This mirrors the finding from Lindsey et al. (2019) that cells in this setting tend to have a nonlinear receptive field. Note that we have found that cells in the ventral layer (not included in the figure) are excited and inhibited by a much wider range of hues, particularly in the Narrow networks. This suggests that the bottleneck induces an efficient color code that enables cells in later layers to become attuned to highly specific hues. Recall that we observe an increase in the proportion of color-tuned but nonopponent cells in Ventral 1 in models with tight bottlenecks, corroborating this assertion.
5.3.5 Hue Sensitivity
Mean gradient of the sum of the bottleneck layer response with respect to hue for Shallow () and Deep () networks with Narrow () and Wide () bottlenecks. The shaded region indicates the standard error across the trained models. Discontinuities derive from the conversion from HSL to RGB. Sensitivity is an approximately linear function of hue for Narrow networks, and particularly in the Narrow Deep setting, again showing a simple color code in the bottleneck layer. Conversely, Wide Shallow networks exhibit a highly nonlinear sensitivity to hue.
6 Control Experiments
In this section, we perform a series of targeted experiments to assess how well our results extend to different settings. These experiments are intended to improve our understanding of the conditions under which the various forms of opponency emerge, supporting a comprehensive discussion.
6.1 Random Weights
Distribution of spatially and color opponent, nonopponent, and unresponsive cells in different layers of our models with gaussian weights (mean and variance from filters of the same depth in a reference pretrained model with and ) as a function of bottleneck width. Some opponency is explained by simple statistics of the filters. Functional organization emerges only as a result of training.
6.2 Grayscale
(a) Distribution of spatially opponent, nonopponent, and unresponsive cells in different layers of our model as a function of bottleneck width, for models trained with grayscale images showing that the known spatial opponency from Lindsey et al. (2019) is detected by our method. (b) Distribution of excitatory and inhibitory hues for cells in different layers of networks trained on images with distorted color (hue rotation of ). The most prevalent excitatory and inhibitory colors are aligned with the RGB extremes closest to a rotation of the peaks in Figure 7.
6.3 Distorted Color
To further explore the idea that the opponency in our networks derives from the statistics of the data, we trained a batch of models on images with distorted color. Specifically, we convert the images into HSV space and offset the hue channel by a fixed rotation, before converting back into RGB and forwarding to the network. Our interest here is not in whether opponency emerges, but in the effect this distortion has on it. Figure 11b shows the distribution of excitatory and inhibitory colors in networks trained with distorted inputs. Here, the most prevalent excitatory and inhibitory colors are aligned with the RGB extremes closest to a rotation of the peaks in Figure 7. This is consistent with our observation that the vast majority of color opponent neurons are channel opponent. In contrast, the additional excitation peak has been rotated by exactly the applied offset, from orange/red to green. This demonstrates that the cells that are excited by specific hues emerge as a result of the statistics of the data, not of the input color space.
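The distortion itself is straightforward to sketch with the standard-library `colorsys` module. This is a per-pixel illustration, not the paper's batched implementation, and the offset used in the example is a placeholder rather than the paper's elided rotation value.

```python
import colorsys

def rotate_hue(rgb, offset):
    """Rotate one pixel's hue in HSV space.

    rgb: channel floats in [0, 1]; offset: fraction of a full hue rotation.
    """
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + offset) % 1.0, s, v)

def distort_image(pixels, offset):
    """Apply the same hue rotation to every pixel of a flat pixel list."""
    return [rotate_hue(p, offset) for p in pixels]

# Example: rotating pure red by a third of the hue wheel gives pure green.
green = rotate_hue((1.0, 0.0, 0.0), 1.0 / 3.0)
```

Because saturation and value are untouched, the distortion permutes hues while preserving spatial structure, which is what isolates the effect of the input color statistics.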
6.4 CIELAB Space
(a) Distribution of spatially and color opponent, nonopponent, and unresponsive cells in different layers of models trained on images in LAB space as a function of bottleneck width, showing that functional organization is not unique to RGB. (b) Excitatory/inhibitory hues in LAB space for random and trained networks. Training increases prevalence of blue/green and excitation by orange/red and cyan/blue.
6.5 Street View House Numbers
Distribution of spatially and color opponent, nonopponent, and unresponsive cells in different layers of models trained on Street View House Numbers (SVHN) (Netzer et al., 2011) as a function of bottleneck width. Spatial opponency is present, with a similar distribution to the networks trained on CIFAR-10. Color opponency is generally lower, increasing only slightly for networks with narrow bottlenecks.
6.6 ImageNet
Distribution of spatially and color opponent, nonopponent, and unresponsive cells in different layers of models trained on ImageNet (Russakovsky et al., 2015) as a function of bottleneck width, showing how our findings transfer to a higher resolution setting. There is an increase in opponency for narrow bottlenecks, which decays rapidly. Emergent organization is observed only partially in the networks with the tightest bottlenecks.
6.7 Intel Scene Classification
Distribution of spatially and color opponent, nonopponent, and unresponsive cells in different layers of models trained on the Intel scene classification challenge data set (Intel, 2018) as a function of bottleneck width. With fewer classes (six in this case), the number of opponent cells is much higher. The distribution of opponent cells in Retina 2 bears a strong similarity to the results from CIFAR-10. This does not extend to the ventral layers, which have near-identical cell distributions.
6.8 Classifying Mosaics
Distribution of spatially and color opponent, nonopponent, and unresponsive cells in different layers of models trained on mosaic images as a function of bottleneck width with example mosaic images. These results show that when the spatial structure of the input is removed, some spatial opponency, particularly in Retina 2, is removed also. Color opponency is similarly affected, suggesting a complex dependence between spatial and color processing.
6.9 Shuffled Color Channels
Distribution of spatially and color opponent, nonopponent, and unresponsive cells in different layers of models trained on images with shuffled color channels as a function of bottleneck width with example shuffled images. When consistent color information is removed, most color opponency is also removed. Spatial opponency remains.
7 Discussion
Equipped with the results of our experiments, we now discuss the conclusions that can be drawn regarding spatial and color processing in convolutional neural networks. In addition, we suggest possible directions for future work building on these findings. Our primary finding is that the addition of a bottleneck in the Retina-Net model induces functional organization when trained on color (RGB) CIFAR-10. We have further shown that this finding generalizes to networks trained on images in the CIELAB color space. There is some evidence that this result differs when the networks are trained on other data sets, although the key finding, that structure emerges only with the tightest bottlenecks, remains. In the case of ImageNet, more experimentation with a model capable of fitting to the data would be required to understand this fully. Regarding network depth, our experiments have uncovered an increase in the number of opponent cells in the penultimate convolutional layer of the network and a corresponding decrease in the last convolutional layer. Our experiments with random networks demonstrate that all of the discussed opponency is learned and that most opponency is not a result of simple statistics of the weights.
In addition to these high-level observations, we have shown that an analysis based on approaches from neuroscience can yield a rich understanding of the function performed by a trained network. For example, we have shown that the deep Retina-Net model with a tight bottleneck learns a set of double opponent filters in the bottleneck layer, followed by a set of spatially and color-tuned but nonopponent filters in the first ventral layer, with opponency returning in the second ventral layer. Cells that are maximally excited by blue are a unique feature of these networks not present when the bottleneck is relaxed. Furthermore, these networks tend to learn linear, channel opponent neurons rather than neurons opponent to specific hues. We speculate that this is due to the increased need to learn an efficient color code in the tight bottleneck case.
The key implication of our core findings is that the model architecture can be the source of an inductive bias toward the number of opponent cells. While this finding alone may be of interest, whether it is of any practical significance depends on whether opponency is desirable. By virtue of the fact that opponent cells represent a more efficient encoding of the input, one might speculate that an increase in opponency could lead to increased generalization performance. This view is mildly supported by the plot in Figure 2c, where the networks with and obtained the highest accuracy. We further suggest that opponent cells may be of greater utility in applications such as transfer learning. Specifically, one can envisage a scenario where the prebottleneck weights are fixed and the postbottleneck weights are updated to fit a new data set. Before such a setting could be considered, our findings would need to be demonstrated on a much more capable network architecture that can obtain competitive performance on standard data sets. The finding that the penultimate layer exhibits a spike in opponency may provide insight into the efficacy of layer-wise training procedures such as deep cascade learning (Marquez, Hare, & Niranjan, 2018). Note that cascade learning has been found to work well with transfer learning (Du, Farrahi, & Niranjan, 2019). Based on the evidence presented in this work, one might speculate that cascade learning increases the number of opponent cells and that these cells perform well for the transfer learning task. That said, and as previously discussed, whether the opponent cells in later layers inherit the same properties as opponent cells in earlier layers remains to be determined.
We have also demonstrated a number of similarities between the learned representations of our networks and representations observed in nature. The large number of double opponent cells we find in the retina layer of networks with tight bottlenecks is consistent with what is known about cells in the retina and LGN (Hubel & Wiesel, 2004). There are some consistencies and some inconsistencies between the ventral layers of the model and what is known about spatial and color processing in the visual cortex. However, as discussed, it is not clear that the ventral convolutional architecture is a good analog of the structure of the visual cortex, so such comparisons should be treated with skepticism. Our finding that the type of opponency learned is aligned with extreme values in the input color space accords with the physiological finding that opponency in early stages of the visual pathway is aligned with cone responses (Shevell & Martin, 2017).
The consequence of these demonstrations is not to suggest that convolutional neurons and biological neurons are similar. Instead, we have shown that similarity in the data space, architecture, and problem setting can give rise to similarity in the emergent functional properties. In addition, we have demonstrated some settings in which opponency is either hindered or removed entirely. This kind of controlled experiment may enable the exploration of hypotheses relating to the neuroscience of vision. Specifically, through construction of a data set that mimics an environment or an architecture that mimics an anatomy, one might seek a better explanation of the differences in visual processing between species. This potential is hinted at by our experiments with SVHN, which show that networks trained on the digit recognition task have fewer color opponent cells.
In conclusion, our considerations here provide a strong mandate for future research across a range of interests. Work should be conducted to understand whether the presence of opponent cells promotes increased adversarial robustness. Such research will require the ability to apply our methods to state-of-the-art architectures in order to be of practical relevance. In particular, it remains to be seen whether the introduction of a bottleneck is enough to promote opponency in more complex architectures. Indeed, this may require more sophisticated approaches such as cascade learning. Additionally, future research should attempt to further explore the connection between the problem space and the nature of learned visual processing. For example, it could be possible to construct a model that permits a notion of learnable monochromacy or dichromacy. This would make it possible to better understand the connection between problem complexity and the need for color acuity. Finally, experimentation with networks trained on hyperspectral images, where a complete spectrum is collected for each pixel, may enable more finely grained comparison with physiological data.
References
Author notes
The authors made equal contributions to this letter.