## Abstract

We consider a model of basic inner retinal connectivity where bipolar and amacrine cells interconnect and both cell types project onto ganglion cells, modulating the output they send to the brain's visual areas. We derive an analytical formula for the spatiotemporal response of retinal ganglion cells to stimuli, taking into account the effects of amacrine cell inhibition. This analysis reveals two important functional parameters of the network: (1) the intensity of the interactions between bipolar and amacrine cells and (2) the characteristic timescale of their responses. Both parameters have a profound combined impact on the spatiotemporal features of retinal ganglion cells' responses to light.

The validity of the model is confirmed by faithfully reproducing pharmacogenetic experimental results obtained by stimulating excitatory DREADDs (Designer Receptors Exclusively Activated by Designer Drugs) expressed in subclasses of ganglion cells and amacrine cells, thereby modifying the inner retinal network's activity in response to visual stimuli in a complex, entangled manner. Our mathematical model allows us to explore and decipher these complex effects in a manner that would not be feasible experimentally and provides novel insights into retinal dynamics.

## 1 Introduction

Visual processing involves some of the most complex neural networks in the vertebrate central nervous system (Marr, 1982; Chalupa & Werner, 2004; Besharse & Bok, 2011; Daw, 2012). The retina is the entry point to our visual system. Located at the back of the eye, this thin neural tissue receives the light that the cornea and lens have captured from different parts of the visual scene, converts it into electrical signals, and, finally, transmits these signals to the brain visual areas. In particular, light follows a vertical excitatory pathway in the retina, from photoreceptors (PRs) to bipolar cells (BCs) and onward to retinal ganglion cells (RGCs), modulated laterally by inhibitory interneurons, horizontal cells (HCs) and amacrine cells (ACs; see Figure 1, left). RGCs serve as a bridge between the retina and the brain, conveying highly processed and integrated signals from the upstream retinal neurons to downstream visual processing cortical areas. Amazingly, the human brain can recreate images from interpreting parallel streams of information from about 1 million RGCs, the sole retinal output neurons. This ability is partially due to the astonishing functional, anatomical, and molecular diversity across the retinal layers. However, how the different cell classes interact to ultimately encode, via RGCs, a visual scene into spike trains (action potentials) deciphered by the brain remains largely a mystery.

RGCs are indeed embedded in a complex network, and their response is roughly driven by two "controllers": (1) the output of BCs, which includes both the intrinsic response properties of these cells and the actions of HCs and ACs on them (lateral connectivity), and (2) the direct input from ACs via chemical synapses or gap junctions, helping spike synchrony between neighboring RGCs (Masland, 2012b; Demb & Singer, 2015). As a consequence, the response of RGCs to visual stimuli depends not only on the physiological characteristics of the cells (Sanes & Masland, 2015), but also on the network they are embedded in and on the stimulus itself (Cessac, 2022). Previous studies have thus attempted to investigate how the inner retinal neurons are organized into parallel circuits across different cell types and converge onto RGCs (Wässle, 2004; Gollisch & Meister, 2010). This has been studied extensively at the level of bipolar cells, leading to a fairly good understanding of their function (Euler et al., 2014). Other studies have investigated the functional role of amacrine cell (AC) types in retinal processing (Asari & Meister, 2012; Franke & Baden, 2017; Diamond, 2017; Schröder et al., 2020), suggesting either specific functions such as direction selectivity (starburst ACs) or more general computations, like motion anticipation (Berry et al., 1999; Souihel & Cessac, 2021). A number of studies have also focused on how retinal interneurons contribute to the sensitivity of RGCs to activity patterns. For example, de Vries et al. (2011) studied the spatiotemporal effects of ACs on RGC outputs and proposed mechanisms related to predictive coding in the retina. Manu et al. (2022) and Ichinose et al. (2014) manipulated HCs and ACs, the former by injecting patterns of current and the latter by using pharmacology, to directly measure the effect on the retinal receptive field surround. Protti et al. (2014) studied the spatial organization of excitatory and inhibitory synaptic inputs onto ON and OFF RGCs in the primate retina using pharmacology. Nevertheless, the potential role of the BC-AC network in the RGCs' response, from both a theoretical and experimental perspective, has not been sufficiently explored yet. The scope of the study presented in this article is to take one step further in this direction.

Numerous models of the retina have been proposed, with different levels of biological detail and across multiple spatial and temporal scales. Phenomenological models have become quite popular due to their simplicity and efficiency in fitting experimental data and explaining observations, such as the linear-nonlinear model (Chichilnisky, 2001; Paninski, 2003; Simoncelli et al., 2004; Schwartz et al., 2006), the generalized linear models (Pillow et al., 2008), and the black-box convolutional neural networks of retinal function (McIntosh et al., 2017). However, these models are agnostic to the biophysical details that underlie the input-output transform of the visual system. On the other hand, mechanistic models exhibit a direct relationship between the underlying biophysics of neurons and the parameters of the model. Well-known examples are the Hodgkin-Huxley (Hodgkin & Huxley, 1952), the FitzHugh-Nagumo (FitzHugh, 1969; Nagumo et al., 1962), and the Morris-Lecar (Morris & Lecar, 1981) models, but they were mainly designed to describe spike generation, whereas most of the neurons in the retina (except RGCs) are not spiking. In addition, this type of model turns out to be cumbersome to analyze and simulate, especially when dealing with collective network dynamics.

The model presented in this article falls into both categories, and it is inspired by previous work (Berry et al., 1999; Chen et al., 2013; Souihel & Cessac, 2021). It is a multistage, phenomenological model, with simplifications regarding the complex retinal structure. Nevertheless, it aims to be relatively precise from a biological perspective by realistically reproducing RGCs' responses to light observed in experimental recordings. Our primary goal is to gain insights into the underlying biophysical processes giving rise to certain experimentally observed phenomena rather than proposing an exact model of the retina.

On theoretical grounds, we consider a network of BCs connected via ACs, both cell types being connected with chemical synapses to RGCs. Based on a mathematical study, we derive an analytical formula for the RGCs' receptive field (RF) that takes into account the lateral ACs' connectivity and shows how the response of RGCs to spatiotemporal stimuli is shaped. In particular, we emphasize the role of the average intensities of the BCs $\to$ ACs and ACs $\to$ BCs interactions and the characteristic timescales of these cells' responses. Varying these parameters alters the shape of the response to light, with potentially prominent effects such as a switch from monophasic to biphasic in the temporal RF.

We illustrate our predictions by analyzing experimental data obtained from the pharmacological action of excitatory DREADDs (designer receptors exclusively activated by designer drugs) in genetically modified mice. DREADDs are activated by "designer drugs" such as clozapine-n-oxide (CNO), resulting in an increase in free cytoplasmic calcium and a profound increase in excitability in the cells that express these DREADDs (Roth, 2016). We found DREADD expression in subsets of both RGCs and ACs in the inner nuclear layer (INL; Hilgen et al., 2022). In these conditions, CNO would act on both ACs and RGCs, providing a way to unravel the entangled effects of (1) direct excitation of DREADD-expressing RGCs and increased inhibitory input onto RGCs originating from DREADD-expressing ACs and (2) changes in the cells' response timescale (via a change in the membrane conductance), thereby providing an experimental setup to validate our theoretical predictions.

We propose a simplified model for the network of BCs-ACs-RGCs. Their concerted activity in response to a visual stimulus is described by a large-dimensional dynamical system whose mathematical study allows an explicit computation of the RF of BCs, ACs, and RGCs and, more generally, allows us to anticipate the effects resulting from light stimulation conjugated with the network activity due to ACs' lateral inhibition, in control conditions and in the presence of CNO. Computing the receptive fields of all cell types (especially RGCs) allows us to disentangle the concerted effect of ACs' lateral connectivity and CNO on the RF of RGCs and provides an excellent agreement with experimental data. We argue, on the basis of the model and analytical computations, that the network of BCs-ACs shapes the RF of RGCs via two main reduced dimensionless parameters, one characterizing the ratio in the timescales of ACs' and BCs' responses and the other characterizing the ratio between the synaptic weights BCs $\to$ ACs and ACs $\to$ BCs. This defines a two-dimensional map, called the "RFs' map." We show how the experimental data settle in this map, and we link the observed changes in the shape of the RF when applying CNO to a displacement in this map. Finally, we extrapolate our model predictions to situations where the RGCs' response could not be explained by the physiological properties of the cell but would involve the action of ACs.

## 2 Methods

### 2.1 The Retina Model

#### 2.1.1 Structure

We assimilate the retina to a superimposition of three layers, each one corresponding to a cell type (BCs, ACs, RGCs) and being a flat, two-dimensional square of edge length $L$ mm where spatial coordinates are noted $x,y$ (see Figure 1).

We consider a simplified form of connectivity, inspired by the real connectivity in the retina and illustrated in Figure 1. First, there are as many BCs as ACs and RGCs ($N$ cells per type, so that the total number of cells is $3N$). BCs are labeled with an index $i=1,\dots,N$, ACs with an index $j=1,\dots,N$, and RGCs with an index $k=1,\dots,N$. BCs are connected to ACs. We note $W_{B_iA_j} \le 0$ the weight of the inhibitory synapse from AC $j$ to BC $i$. We use the convention that $W_{B_iA_j}=0$ when AC $j$ is not connected to BC $i$. Likewise, we note $W_{A_jB_i} \ge 0$ the weight of the excitatory synapse from BC $i$ to AC $j$, $W_{G_kB_i} \ge 0$ the weight of the excitatory synapse from BC $i$ to RGC $k$, and $W_{G_kA_j} \le 0$ the weight of the inhibitory synapse from AC $j$ to RGC $k$. We note $W^{BA}$ the (square) matrix of connections from ACs to BCs, and so on. Note that the model could also accommodate connections within the same cell type (e.g., ACs to ACs or RGCs to RGCs via gap junctions), but this would considerably complicate the theoretical analysis.

#### 2.1.2 Visual Input

Each neuron has a receptive field (RF), a specific region of the visual field where a light stimulus will modify the activity of this neuron. The term *RF* is not limited to the spatial region; it is often extended to include the temporal structure within this region. The RF usually exhibits a center-surround organization shared by many cell types in the retina (Masland, 2012a). Each BC receives synaptic inputs from its upstream circuitry, a combination of dendritic excitatory inputs from photoreceptors (rods and cones) and inhibitory inputs from horizontal cells (Franke & Baden, 2017) occurring at the level of the outer plexiform layer (OPL), and this interaction is reflected in their RF.

*OPL input*:

The spatial part, $K^S$, is a classical difference of Gaussians. This spatiotemporal kernel is illustrated in Figure 2.
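For concreteness, such a difference-of-Gaussians kernel can be sketched numerically as follows; the widths $\sigma_c$, $\sigma_s$ and the surround weight $k$ are illustrative values, not the model's fitted parameters.

```python
import numpy as np

def dog_kernel(x, y, sigma_c=0.1, sigma_s=0.3, k=0.8):
    """Difference-of-Gaussians spatial kernel: excitatory center minus
    weighted inhibitory surround (widths in mm, illustrative values)."""
    r2 = x**2 + y**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - k * surround

# Sample the kernel on a square grid centered at the origin.
xs = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(xs, xs)
K = dog_kernel(X, Y)
```

The kernel is positive at the center and negative in an annular surround, reproducing the classical center-surround antagonism.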

#### 2.1.3 Voltage Dynamics

CNO binds to specific designed receptors, leading to an excitatory or inhibitory response depending on the receptors ($G_i$ or $G_q$; Roth, 2016; Urban & Roth, 2015). Here, we propose a simplification of the actual biological process for mathematical convenience. We model the CNO effect by a current of the form $-g_{CNO}^{T}\,(V - E_{CNO}^{T})$, where $V$ is the voltage of a cell sensitive to CNO and $T$ is the cell type (only ACs or RGCs, according to the experimental setup); $g_{CNO}^{T}$ is the conductance of channels sensitive to CNO, which is zero in the absence of CNO and increases with CNO concentration; and $E_{CNO}^{T}$ is the corresponding reversal potential, positive for excitatory CNO and negative for inhibitory CNO.
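In code, this CNO term is a one-line conductance current; the sketch below (with hypothetical numerical values) only illustrates the sign conventions stated above.

```python
def cno_current(v, g_cno, e_cno):
    """CNO-gated current -g_CNO^T (V - E_CNO^T) for a cell of type T (AC or RGC).
    g_cno is zero in the absence of CNO and increases with CNO concentration."""
    return -g_cno * (v - e_cno)
```

Without CNO ($g_{CNO}^T = 0$) the term vanishes; with an excitatory reversal potential above the cell's voltage, the current is depolarizing.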

In equation 2.3, $\tau_{B_i}$, $\tau_{A_j}$, and $\tau_{G_k}$ are the characteristic integration times of BC $i$, AC $j$, and RGC $k$. As discussed in the next section and in section A.2 (appendix), they depend on CNO in a complex, nonlinear way.

#### 2.1.4 Reduction of Parameters

The model depends on many parameters that constrain the dynamical evolution of the system (see equation 2.3). Although some of our results, like the explicit form for RFs (see equation 3.6), are quite general, it is easier for analytic derivations and for simulations to further simplify these parameters.

First, we consider that the synaptic weights from ACs to BCs are controlled by a unique parameter, $w_- > 0$: $W^{BA} = -w_-\,\Gamma^{BA}$ (inhibitory synapses), where $\Gamma^{BA}$ is an adjacency matrix ($\Gamma_{B_iA_j}=1$ if the $j$th AC connects to the $i$th BC, and $\Gamma_{B_iA_j}=0$ otherwise). Likewise, $W^{AB} = w_+\,\Gamma^{AB}$ with $w_+ > 0$ (excitatory synapses). We use a simple form of connectivity, where $\Gamma^{AB}=\Gamma^{BA}$ are nearest-neighbor adjacency matrices with null boundary conditions. The retina is regularly tiled by the different cell types, so this approximation is reasonable, although, in reality, one cell connects to more than four neighbors. Our "cells" must therefore be considered "effective" cells with "effective" interactions. In particular, our parameters $w_-, w_+$ aggregate many synaptic contacts. This simplification considerably reduces the number of parameters shaping the synaptic interactions and affords analytic computations (see appendix).
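As a minimal illustration, the reduced weight matrices can be built from a single adjacency pattern; here is a one-dimensional sketch (a chain instead of the full lattice, with hypothetical values of $N$, $w_-$, $w_+$):

```python
import numpy as np

N = 6                                       # illustrative network size
Gamma = np.eye(N, k=1) + np.eye(N, k=-1)    # Gamma[i, j] = 1 iff |i - j| = 1

w_minus, w_plus = 2.0, 1.5                  # hypothetical synaptic intensities
W_BA = -w_minus * Gamma                     # ACs -> BCs: inhibitory entries
W_AB = w_plus * Gamma                       # BCs -> ACs: excitatory entries
```

With null boundary conditions, the end cells of the chain have a single neighbor, which is the one-dimensional analogue of the lattice boundary described above.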

In this context, we can further understand the effect of CNO on the model. We assume, for simplicity, that the lattice is so large that we can neglect boundary effects, so that the characteristic times of BCs, ACs, and RGCs and their rest states, noted $V_B^*$, $V_A^*$, and $V_G^*$, are uniform in space (the general case is discussed in section A.2). We set $M_{GB} = \sum_{i=1}^{N} W_{G_kB_i}$ (resp. $M_{GA} = \sum_{j=1}^{N} W_{G_kA_j}$). Strictly speaking, these numbers depend on $k$ due to boundary conditions, but we ignore this dependence here.

On the web page referenced below, we also address the question of the variation of $\tau_B, \tau_A, V_B^*, V_A^*$ as $\zeta_A$ varies. We found that $\frac{\partial V_B^*}{\partial \zeta_A} = -\frac{2d\,\tau_A \tau_B\, w_-}{1 + 4d^2\,\tau_A \tau_B\, w_- w_+} \le 0$, meaning that BCs get more hyperpolarized when the excitatory effect of CNO on ACs increases, while $\frac{\partial V_A^*}{\partial \zeta_A} = \frac{\tau_A}{1 + 4d^2\,\tau_A \tau_B\, w_- w_+} \ge 0$ (ACs get more depolarized when $\zeta_A$ increases). This result is the one qualitatively expected, but note that our modeling is also quantitative. In strong contrast, the computation of $\frac{\partial \tau_B}{\partial \zeta_A}$ and $\frac{\partial \tau_A}{\partial \zeta_A}$ reveals that the sign of these derivatives depends on specific domains in the space of parameters $w_-, w_+, \theta_B$, with nonlinear frontiers. That is, even for this reduced model, the behavior of the characteristic times as CNO increases depends on network parameters in a nontrivial way. There are actually domains where these characteristic times increase with $\zeta_A$.

Thus, while a rapid argument would predict that the characteristic time of ACs should increase because the membrane conductance increases as $\zeta_A$ increases, the opposite can happen as well. This is because, as revealed by equation 2.8, the characteristic time $\tau_A$ depends on the rest states (see equation 2.9), which depend themselves on $\zeta_A$ via the characteristic times $\tau_A, \tau_B$. This results in a two-dimensional implicit nonlinear system studied at https://team.inria.fr/biovision/mathematicalderivationscno/.

As explained in section 1, our main goal is to confront this model with experiments and extract general statements from it. One major difficulty, however, is that we cannot fine-tune the parameter $\zeta_A$ (the CNO conductance) in experiments because it is not possible to establish a CNO dose-response curve of the changes in retinal activity. Therefore, even under modeling simplifications, it is not possible to fit the model to experiments with the nonlinear equations 2.8 and 2.9. When fitting experimental results, we computed the characteristic integration times of cells in control (CTL) and CNO conditions independently (see section 3.1.2). As we cannot access the rest states of BCs and ACs, we fixed the thresholds $\theta_A, \theta_B$ to zero. (Mathematically, $\theta_B$ should be strictly negative, but we can consider a very small absolute value.) Thus, the rest states, equation 2.9, are zero in the absence of CNO. Nevertheless, in experiments, one can observe a small residual polarization, which changes in the presence of CNO. That is why we added the polarization parameter $b_0$ appearing in equation 2.2, a drastic but necessary simplification of the entangled equations 2.9.

Most of the analysis that follows considers that no rectification takes place, even under light stimulation, so that we essentially consider a linear model. This is supported by Baccus et al. (2008), showing that the voltage of bipolar cells is essentially a linear function of the stimulus for white noise. For a more general analysis dealing with rectification, see the appendix as well as section 4.

#### 2.1.5 Experimental Setup

In our experiments, excitatory DREADDs (hM3Dq) were activated using CNO on RGCs and ACs co-expressing a certain gene (*Scnn1a* or *Grik4*), triggering a calcium release from organelles and thus increasing the intracellular concentration of free calcium. This resulted in membrane depolarization and higher neuronal excitability. Our experiments suggested that subclasses of ACs and RGCs could be simultaneously sensitive to CNO, but we did not observe any evidence of an effect on BCs.

Full experimental details can be found in our recent publication (Hilgen et al., 2022). All experimental procedures were approved by the ethics committee at Newcastle University and carried out in accordance with the guidelines of the U.K. Home Office, under the control of the Animals (Scientific Procedures) Act 1986. Recordings were performed on the BioCamX platform with high-density multielectrode array (HD-MEA) arena chips (3Brain GmbH, Lanquart, Switzerland), integrating 4096 square microelectrodes in a 2.67 $\times$ 2.67 mm$^2$ area, aligned in a square grid with 42 $\mu$m spacing. Light stimuli were projected onto the retina using an LED projector. Briefly, the projector irradiance was attenuated using neutral-density filters to mesopic light levels.

## 3 Results

In this section we present the theoretical and numerical results based on our retina model. We provide only the main conclusions of the mathematical derivations, which are presented in detail in the appendix.

### 3.1 Model Fitting of Ganglion Cell Receptive Fields Characterized from Experimental Data

RGC responses emanate from a dynamic balance of synaptic excitation and inhibition, originating from the interactions of BCs and ACs. We believe that such network connectivity gives rise to various response patterns and we show that our model can capture these joint effects by providing an analytic form of the RF of the cells. As we demonstrate, this computation provides an algorithmic way to fit the model parameters to the light responses recorded from mouse RGCs. One can then infer the possible behavior of ACs and BCs leading to this RGC response, *even if we do not measure them experimentally.*

#### 3.1.1 Mathematical Form of the RF of Retinal Cells

The results that follow hold for all cell types. Thus, we label cells with a generic index $\alpha = 1,\dots,3N$. BCs have an index $\alpha = 1,\dots,N$, ACs have an index $\alpha = N+1,\dots,2N$, RGCs have an index $\alpha = 2N+1,\dots,3N$, and we write $X_\alpha$ for the voltage of cell $\alpha$.

The time evolution of the dynamical system in equation 2.3 is controlled by a matrix, $L$, called the transport operator and explicitly written in section A.3. $L$ depends on the connectivity matrices $W^{BA}, W^{AB}, W^{GA}, W^{GB}$ and on all the parameters controlling the dynamics. The form of $L$ also depends on the set of rectified cells. In the following, we assume that cells are not rectified; that is, the hyperpolarized BCs do not reach the rectification threshold (the rectified case is discussed in section 4.5 and in the appendix). Consequently, the dynamical system, equation 2.3, is linear.

In this case, the eigenvalues $\lambda_\beta$, $\beta = 1,\dots,3N$, and the eigenvectors $P_\beta$ of $L$ characterize the evolution of the cells' voltages. We note $P$ the matrix transforming $L$ into diagonal form (the columns of $P$ are the eigenvectors $P_\beta$) and $P^{-1}$ its inverse.

*Stimulus drive.* The first term, $V_\alpha^{(drive)}(t)$, corresponds to equation 2.1 and is nonzero for BCs only. It corresponds to the BCs' response in the absence of the ACs' network.

*CNO effects.* The terms

*Stimulus-network interaction term.* In equation 3.1, the term of equation 3.4 characterizes the response of *all* cells to the stimulus. Especially, it tells us how the direct input, equation 2.1, to BCs is modulated by the concerted activity of BCs and ACs.

Mathematically, equation 3.4 can be interpreted as follows. The drive (index $\gamma = 1,\dots,N$) triggers the eigenmodes $\beta = 1,\dots,3N$ of $L$, with a weight proportional to $P^{-1}_{\beta\gamma}$. The mode $\beta$, in turn, acts on the voltage of cell $\alpha = 1,\dots,3N$ with a weight proportional to $P_{\alpha\beta}$. The time dependence and the effect of the drive are controlled by the integral $\int_{-\infty}^{t} e^{\lambda_\beta (t-s)}\, V_\gamma^{(drive)}(s)\, ds$. The eigenvalues $\lambda_\beta$ introduce *characteristic timescales* in the response: an exponential decay rate for the real part and a frequency of oscillations for the imaginary part, if any. The right eigenvectors (columns of $P$) and the left eigenvectors (rows of $P^{-1}$) introduce *characteristic space scales* in the response. The easiest case to figure this out is the nearest-neighbors connectivity considered in section A.3.5. Here, eigenvectors are labeled by an index $n$ that actually corresponds to a wave vector (see equation A.28), the inverse of a space scale. In this context, the eigenmode with $n=1$ has the largest space scale (the size of the lattice), whereas the eigenmode with $n=N$ corresponds to the minimum space scale (the distance between two nodes). Thus, equation 3.4 illustrates that the response to a spatiotemporal stimulus is a combination of multiple timescales and space scales, leading to interesting phenomena such as resonances, as commented below, or waves of anticipation, as studied in Souihel and Cessac (2021).
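This eigenmode decomposition can be checked numerically on a toy stable operator (random and low-dimensional, not the paper's actual transport operator $L$): direct integration of the driven linear system agrees with the mode-by-mode convolution formula.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
L_op = 0.2 * rng.normal(size=(n, n)) - np.eye(n)   # toy stable operator
lam, P = np.linalg.eig(L_op)
P_inv = np.linalg.inv(P)

dt, T = 1e-3, 2.0
ts = np.arange(0, T, dt)
gamma = 0
F = np.exp(-((ts - 1.5) ** 2) / 0.02)   # smooth drive applied to cell gamma only

# Direct route: forward-Euler integration of dX/dt = L X + e_gamma F(t), X(0) = 0.
X = np.zeros(n)
for k in range(len(ts)):
    inp = np.zeros(n)
    inp[gamma] = F[k]
    X = X + dt * (L_op @ X + inp)

# Eigenmode route: sum over beta of P[alpha, beta] * P_inv[beta, gamma]
# times the convolution integral of exp(lam_beta (T - s)) with F(s).
conv = np.array([np.sum(np.exp(l * (T - ts)) * F) * dt for l in lam])
X_modes = np.real(P @ (conv * P_inv[:, gamma]))
```

The two routes agree up to discretization error, mirroring how the drive on cell $\gamma$ is distributed over modes by $P^{-1}$ and recombined by $P$.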

*The receptive field of all cell types.* Introducing the function $e_\beta(t) \equiv e^{\lambda_\beta t} H(t)$, so that $\int_{-\infty}^{t} e^{\lambda_\beta (t-s)}\, V_\gamma^{(drive)}(s)\, ds \equiv \big[\, e_\beta \ast_t K_B^T \ast_t K_{B_\gamma}^S \ast_{x,y} S \,\big](t)$ (where $\ast_t$ and $\ast_{x,y}$ denote convolution in time and in space, respectively), and using the separated kernel form, equation 2.2, the response, equation 3.4, reads

*The receptive field of RGCs.* Henceforth, we refer to $K_\alpha(t)$ as $K_{G_\alpha}(t)$ to make explicit that we are dealing with RGCs. The RF of RGCs can often be written as a product of a space-dependent term and a time-dependent term (separability). In our case, this would correspond to writing $K_{G_\alpha}(x,y,t)$ in the form of a product $K_{G_\alpha}(x,y,t) = K_{G_\alpha}^{T}(t) \times K_{G_\alpha}^{S}(x,y)$, where

This separation is not strictly possible in equation 3.6 because the term $\sum_{\gamma=1}^{N} P^{-1}_{\beta\gamma}\,\varpi_{\beta\gamma}\, K_{B_\gamma}^{S}(x,y)$ depends on $\beta$. Thus, from this computation, the RF of RGCs is not expected to be separable in general. However, it is important to remark that the stimulus used in experiments for determining RFs, white noise, is uniform in probability in both the space and time domains. This induces an effective separation of the response that might not hold for more complex stimuli (e.g., moving objects). As the experimental RFs considered here have been obtained from white noise, we will assume separability from now on.
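Separability can be tested numerically: a sampled kernel $K(x,t)$ is separable exactly when its matrix of samples has rank 1, which a singular value decomposition detects. The kernels below are purely illustrative.

```python
import numpy as np

x = np.linspace(-1, 1, 50)
t = np.linspace(0, 0.5, 40)

# A separable kernel (single outer product) and a non-separable one
# (sum of two independent space-time modes).
K_sep = np.outer(np.exp(-x**2 / 0.1), np.exp(-10 * t))
K_nonsep = K_sep + 0.3 * np.outer(np.exp(-(x - 0.5)**2 / 0.1),
                                  np.exp(-30 * t) * np.sin(40 * t))

def effective_rank(K, tol=1e-8):
    """Number of singular values above tol relative to the largest one."""
    s = np.linalg.svd(K, compute_uv=False)
    return int(np.sum(s > tol * s[0]))
```

A separable kernel has effective rank 1; any extra mode, such as the one contributed by a $\beta$-dependent spatial sum, raises the rank.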

*The spatial part of the RGCs' RF.* Equation 3.8 appears as an overlap of the spatial RFs of BCs. In a naive overlap approximation, the spatial RFs of BCs would simply be summed with uniform weights. Here, however, the RF of each BC is weighted by the ACs' lateral connectivity via the term $P^{-1}_{\beta\gamma}\,\varpi_{\beta\gamma}$. In particular, the RF in equation 3.8 is not necessarily circular even if the BCs' RFs are, and the center of the RGC's RF is not necessarily at the barycenter of the connected BCs' RFs. This holds, for example, if the AC connectivity is not invariant by rotation.

*The temporal part of the RGCs' RF,* equation 3.7. As we consider monophasic temporal kernels $K_B^T$ of BCs with the form of equation 2.2, we have

$U_\beta$, and thereby $K_{G_\alpha}^{T}(t)$, the temporal part of the RGC receptive field, changes shape with the eigenvalues of $L$, which are themselves controlled by the model parameters. A striking effect arises when some eigenvalues become complex, leading to temporal oscillations of $U_\beta$. This remark is at the core of the analysis presented in section 3.2.2.
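A minimal sketch of this effect: a real eigenvalue contributes a monotone exponential decay (monophasic), while a complex eigenvalue contributes a damped oscillation whose sign changes (biphasic). The numerical values are arbitrary.

```python
import numpy as np

t = np.linspace(0, 1, 1000)

def mode_time_course(lam, t):
    """Real part of e^{lam t} H(t): one eigenmode's temporal contribution."""
    return np.real(np.exp(lam * t))

mono = mode_time_course(-8.0, t)        # real eigenvalue: monotone decay
bi = mode_time_course(-8.0 + 25j, t)    # complex eigenvalue: damped oscillation
```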

#### 3.1.2 Fitting the RFs of Ganglion Cells

Experimentally, RGCs' RFs were reconstructed from the spike-triggered average (STA) in response to shifted white noise (SWN). The SWN, introduced by Hilgen et al. (2017) and Pamplona et al. (2022), is a variant of white noise where the images of random black and white squares of the classical "white noise" stimulus have, in addition, random spatial shifts, improving the resolution of the STA. This allowed us to fit the spatial and temporal parts of the RF. Note, however, that it is difficult to obtain the surround of the spatial part. From the center part of the spatial RF, we fixed the parameter $\sigma_p$ in the gaussian pooling, equation 2.7. More extended results can be found in the thesis of E. Kartsaki (Kartsaki, 2022), where we display experimental spatial RFs (see, e.g., section 3.4.1 of the thesis).

We mainly focused on the temporal part of the RF. The temporal RF estimation resulted in time traces of duration 600 ms, sampled every $33/4=8.25$ ms. In order to assess the validity of the model, we fitted these time traces in CTL and CNO conditions. We have a database of 117 cells sensitive to CNO, exhibiting an increase or decrease in firing rate beyond a certain threshold.
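To illustrate the principle of such a fit (not the actual fitting pipeline, which adjusts the full parameter set nonlinearly), one can recover the weights of known eigenmode time courses from a synthetic trace sampled like the data, by linear least squares. The eigenvalues and weights below are hypothetical.

```python
import numpy as np

t = np.arange(0, 600, 8.25) / 1000.0    # 600 ms sampled every 8.25 ms, as in the data

# Mode time courses for one real eigenvalue and one complex-conjugate pair
# (decay rates in 1/s; the pair contributes cosine and sine columns).
modes = np.column_stack([
    np.exp(-5.0 * t),
    np.exp(-15.0 * t) * np.cos(40.0 * t),
    np.exp(-15.0 * t) * np.sin(40.0 * t),
])

true_w = np.array([0.3, 1.0, -0.6])     # hypothetical mode weights
trace = modes @ true_w                  # synthetic "experimental" temporal RF

w_fit, *_ = np.linalg.lstsq(modes, trace, rcond=None)
```

When the modes are known, the weights are recovered exactly; the hard part of the real fit is that the eigenvalues themselves depend nonlinearly on the network parameters.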

These reconstructed temporal RFs provide the linear response of an RGC to a spatially uniform flashed stimulus, mathematically corresponding to a Dirac distribution. As we wanted to compare our model's output, the RGC voltage $V_G$, to this experimental RF, computed from firing rates, we ignored the effect of nonlinearities and assumed that the experimental response is proportional to the RGC voltage. We considered a one-dimensional model (chain) with $N=60$ cells of each type, where the cells at the boundaries have a fixed zero voltage (zero boundary conditions), corresponding to the reference rest state. To reduce the effect of boundaries, we made the fit for the RGC at the center of the network.

We performed simulations of the model, equation 2.3, using a spatially uniform Dirac pulse as the stimulus and computed the cell responses using two modalities: simulation of the differential equation 2.3 (green traces labeled "Sim" in Figure 3) and the analytic computation, equation 3.6 (black traces labeled "Th" in Figure 3). We observe that these two traces are always identical, confirming the accuracy of the simulation scheme.

We recall that the parameters shaping this response are $A_0$, the intensity of the OPL input; $b_0$, controlling a residual depolarization/hyperpolarization observed in experimental responses; $w_+$, controlling the synaptic intensity from BCs to ACs; $w_-$, controlling the synaptic intensity from ACs to BCs; $w_{GB}$, controlling the synaptic intensity from BCs to RGCs; $w_{GA}$, controlling the synaptic intensity from ACs to RGCs; $\tau_B$, the characteristic membrane timescale of BCs; $\tau_A$, the characteristic membrane timescale of ACs; $\tau_{RF}$, the characteristic timescale of the OPL drive; and $\tau_G$, the characteristic membrane timescale of RGCs.

We note $\vec{\eta}$ the set of all the parameters shaping the response. $\vec{\eta}$ is therefore a point in a 10-dimensional space.

Although the experimental temporal RFs were quite diverse among cells, we were able to fit all of them with good accuracy (the final error was smaller than $1\%$). We rejected fits where some parameters became unrealistic (e.g., $\tau_A$ larger than 1 s or $w_- > 1$ kHz), about $4\%$ of the fits. An example of a fit is shown in Figure 3.

In this synthetic representation, the simulated responses of the OPL, BCs, and ACs connected to the RGC located at the center of the network appear in the top left panel of the figure. We also show the simulated response of the RGC versus the experimental temporal STA of this cell (bottom left). On the top right panel, we see the numerical spatiotemporal RF using a color map. Finally, at the bottom right, we show the power spectrum (modulus of the Fourier transform) of the temporal response. As developed below, this spectrum provides important information on the cell response.

RGCs’ RFs in the presence of CNO could be fit equally well with our model. All fitted cells, in CTL and CNO conditions, can be found on the web: https://gitlab.inria.fr/biovision/dreadds. The C-code used for simulations can also be found there.

### 3.2 Network Connectivity Shapes the Receptive Fields of Ganglion Cells

Throughout our analysis of the experimental data, we observed great variability in the effect of DREADD activation with CNO on the RGCs’ responses. In this section, we develop the consequences of our mathematical analysis in an attempt to explain this observed diversity of CNO effects on RF features. We propose here an explanation purely based on network effects. There are certainly other possible interpretations based on single cell characteristics such as nonlinear effects due to changes in conductance, discussed in section 4. The main advantages of our analysis are that it determines network effects on the RGCs’ RF, controlled by two main parameters, and that it predicts the response to more complex stimuli than full field flashes.

#### 3.2.1 Two Main Parameters Constrain the RF Shape of Ganglion Cells

The entangled feedback effects of ACs-BCs can be characterized by two dimensionless parameters. The first one, $r = \tau_A/\tau_B$, characterizes the ratio between the ACs' and BCs' membrane integration times. The second, $s = w_-/w_+$, characterizes the ratio between the ACs $\to$ BCs interaction intensity ($w_-$) and the BCs $\to$ ACs interaction intensity ($w_+$). Of course, the other parameters play an important role when fitting a specific RF. But what we argue here is that the shape of the RF and its space-time scaling essentially depend on the values of $r$ and $s$.

The theoretical explanation is that the RF of an RGC is given by formula 3.5, which is a cascade of convolutions involving the BC response to the stimulus (OPL input) and the network effects expressed in terms of the eigenvalues $\lambda_\beta$ and eigenvector components $P_{\alpha\beta}$ appearing in equation 3.6. As explained in section A.3.4, these eigenvalues and eigenvectors are essentially tuned by the two parameters $r$ and $s$. There is also a dependence on other parameters, discussed in section 3.2.3. Depending on the location in the $(r,s)$ plane, some eigenvalues are real and some are complex. All eigenvalues have a negative real part, ensuring the stability of the linear system. Imaginary parts of eigenvalues introduce oscillations in the response, whereas the real parts fix characteristic decay times. The RF formula 3.6, involving a sum of exponentials $e^{\lambda_\beta t}$, mixes these effects. As we considered a monophasic OPL response here, the temporal RF of RGCs is monophasic when all eigenvalues are real. In contrast, oscillations in this RF can appear when some eigenvalues are complex. The detailed shape of this RF depends on the period of oscillations, set by the imaginary parts of complex eigenvalues, and on the characteristic decay times, set by the real parts.
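The transition from real to complex eigenvalues can be seen on a minimal two-cell BC-AC loop (a drastic reduction of the $2N$-dimensional network; all numerical values are hypothetical):

```python
import numpy as np

def bc_ac_eigenvalues(tau_B, tau_A, w_minus, w_plus):
    """Eigenvalues of a minimal two-cell BC-AC feedback loop:
       dV_B/dt = -V_B / tau_B - w_minus * V_A
       dV_A/dt =  w_plus * V_B - V_A / tau_A"""
    J = np.array([[-1.0 / tau_B, -w_minus],
                  [w_plus, -1.0 / tau_A]])
    return np.linalg.eigvals(J)

weak = bc_ac_eigenvalues(0.05, 0.1, 1.0, 1.0)      # weak loop: real eigenvalues
strong = bc_ac_eigenvalues(0.05, 0.1, 50.0, 50.0)  # strong loop: complex pair
```

Increasing the loop gain drives the discriminant negative, producing a complex-conjugate pair while the real parts stay negative, so the response becomes oscillatory but remains stable.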

When moving in the $(r,s)$ plane, the $n$th eigenvalue switches from real to a member of a complex-conjugate pair when crossing a critical line, depending on $n$, whose equation, A.27, is given in section A.3.4. There are $2N$ eigenvalues associated with the BC-AC network, each one determining a critical line in the $(r,s)$ plane. The set of all these lines is what we call the skeleton. An example of this skeleton is shown in Figure 4, where we display only some of the critical lines. These lines delimit colored regions corresponding to the number of complex eigenvalues (see the color bar legend on the right of the figure).
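
The skeleton can be explored numerically with a minimal sketch of the BC-AC linear operator. The chain connectivity, the network size, and the parameter values below are assumptions of this illustration, not the fitted values of the paper:

```python
import numpy as np

def bc_ac_operator(N, tau_B, tau_A, w_plus, w_minus):
    """2N x 2N linear operator of the coupled BC-AC network.
    Sketch assumption: BC-AC pairs sit on a 1D chain and interact
    through the nearest-neighbour adjacency matrix Gamma."""
    Gamma = np.eye(N, k=1) + np.eye(N, k=-1)
    L = np.zeros((2 * N, 2 * N))
    L[:N, :N] = -np.eye(N) / tau_B      # BC leak
    L[:N, N:] = -w_minus * Gamma        # ACs -> BCs inhibition (w-)
    L[N:, :N] = w_plus * Gamma          # BCs -> ACs excitation (w+)
    L[N:, N:] = -np.eye(N) / tau_A      # AC leak
    return L

def n_complex_eigenvalues(L):
    """Number of eigenvalues with a nonzero imaginary part."""
    return int(np.sum(np.abs(np.linalg.eigvals(L).imag) > 1e-9))

# sweep (r, s) = (tau_A / tau_B, w- / w+) with tau_B = 30 ms, w+ = 8.5 Hz
tau_B, w_plus, N = 0.03, 8.5, 8
for r, s in [(0.2, 0.01), (3.0, 5.0)]:
    L = bc_ac_operator(N, tau_B, r * tau_B, w_plus, s * w_plus)
    assert np.all(np.linalg.eigvals(L).real < 0)   # always stable
```

For small $(r,s)$, all $2N$ eigenvalues are real; increasing $r$ and $s$ makes some of them cross their critical line and become complex, one conjugate pair per eigenvalue of $\Gamma$ that exceeds the threshold.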

#### 3.2.2 The RF’s Map

As we move in the $(r,s)$ plane, we notice the following. When $r$ and $s$ are small, the eigenvalues are real and the matrix $P_{\alpha\beta}$ is close to diagonal. In this case, the dominant pole contribution in equation 3.11 is the pole $\omega = i/\tau_{RF}$, corresponding to the OPL contribution; the power spectrum of equation 3.11 then has a single peak centered at $\omega = 0$, corresponding to a monophasic response. For larger values of $r,s$, some eigenvalues become complex, giving potential additional peaks in the power spectrum. We actually observed two effects, which can occur together. First, the central peak switches from $\omega = 0$ to a nonzero value. This corresponds to the appearance of an exponentially damped oscillation in the RF, giving a biphasic response. Second, secondary peaks may appear, leading to residual oscillations in addition to the main trend (monophasic or biphasic). This gives what we call a *polyphasic* response. Such residual oscillations were observed in our experiments and were relatively frequent (about $40\%$ of cells). There are, of course, other hypotheses explaining these residual oscillations, but here we support the hypothesis that they are generated by a network effect. An example is given in Figure 3, where we observe, at the bottom left, residual oscillations after the main biphasic response and, at the bottom right, the power spectrum with a main peak not centered at zero and a secondary peak corresponding to the residual oscillations. Note that this secondary peak is observed in the experimental data, but we failed to reproduce it in the fits. This is further explained in section 4.

This analysis leads us to broadly decompose the $(r,s)$ plane into three regions corresponding to the cells’ response phases: monophasic, biphasic, and polyphasic. One switches from one phase to another when some peaks in the power spectrum appear or disappear, driven by the spectrum of $L$.
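
The correspondence between eigenvalue structure and spectral peaks can be illustrated with a toy RF written as a sum of causal exponentials; the eigenvalues and coefficients below are illustrative, not fitted values:

```python
import numpy as np

def rf_power_spectrum(lambdas, coeffs, freqs):
    """Power spectrum of RF(t) = sum_b coeffs[b] * exp(lambdas[b] * t), t >= 0.
    Each causal exponential has Fourier transform coeffs[b] / (i*w - lambdas[b])."""
    iw = 2j * np.pi * freqs
    ft = sum(c / (iw - lam) for lam, c in zip(lambdas, coeffs))
    return np.abs(ft) ** 2

freqs = np.linspace(0.0, 20.0, 2001)

# real eigenvalues: single peak at zero frequency (monophasic response)
mono = rf_power_spectrum([-10.0, -30.0], [1.0, 0.5], freqs)

# weakly damped complex-conjugate pair: the peak moves off zero (biphasic);
# here RF(t) = -2 exp(-5 t) sin(2 pi 8 t)
lam = -5.0 + 2j * np.pi * 8.0
bi = rf_power_spectrum([lam, np.conj(lam)], [1j, -1j], freqs)
```

With real eigenvalues the spectrum peaks at $\omega = 0$ (monophasic); a weakly damped complex-conjugate pair moves the peak to a nonzero frequency (biphasic), and further pairs would add the secondary peaks characteristic of polyphasic responses.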

The corresponding phase diagram, obtained numerically, constitutes what we call the “RFs map,” shown in Figure 4 (top right). Four points, labeled A to D, are placed on this map, each representing a different response phase. For each point, we have plotted the RGC temporal RF, as computed with the model (bottom panels). A more general representation of what happens when moving along a specific pathway in this map can be found at https://team.inria.fr/biovision/cno_paper_supplementary/, where movies show how the network effects shape the RGCs’ RFs when $(r,s)$ vary. It is important to note that this map is a projection of a 10-dimensional space onto 2 dimensions. It has been obtained from the skeleton by fixing the other parameter values, based on the average values obtained from fits over the cell populations (see Figure 5).

Figure 4 illustrates how the BC-AC network shapes the RF of an RGC by a subtle balance between the ACs’ $\to$ BCs’ and BCs’ $\to$ ACs’ interactions (parameter $s$) and the timescales of their responses (parameter $r$). For simplicity, we consider here ON BCs, but the explanation also holds for OFF BCs.

The OPL drive (black line) induces a depolarization of BCs (red line) within a timescale of order $\tau_B$. This excites the connected ACs (blue line), with an intensity $w^+$. The excitation of ACs hyperpolarizes BCs with an intensity $w^-$ within a timescale of order $\tau_A$. RGCs receive a combination of excitatory and inhibitory inputs from their afferent circuit with respective weights $w_{GB}$ and $w_{GA}$.

In the monophasic region, ACs respond on the same timescale as BCs. One observes for RGCs a monophasic response (panel A), whose intensity depends on the ratio between excitation, provided by BCs with a weight $w_{GB}$, and inhibition, with a weight $w_{GA}$. When moving to the biphasic region, ACs respond more slowly than BCs, leading to the biphasic response illustrated in panel B. When moving upward in the RF map (increasing $s$), this biphasic response becomes polyphasic, with oscillations, as illustrated in panel C. BCs start to rise due to the OPL drive, causing a rise of ACs, slower than that of BCs, which in turn hyperpolarizes BCs. This leads to a decrease of the ACs’ voltage and thereby to a new rise of BCs, which still respond to the OPL drive. This cycle can be repeated several times, depending again on the parameters $r,s$; note that the amplitude always decreases exponentially fast. The period of the observed oscillations and the damping characteristic time depend on the location in the map. Finally, panel D is the point at the intersection of the three phase regions; we show it for completeness. Note that increasing $s$ leads to a decrease of the RGCs’ response: when $s$ is too high, the response becomes too weak to be observed experimentally.
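
This cycle can be reproduced with a minimal two-cell sketch, one BC and one AC integrated by the Euler method; the single-pair reduction and all parameter values are illustrative assumptions, not fits:

```python
import numpy as np

def simulate_pair(tau_B, tau_A, w_plus, w_minus, T=0.5, dt=1e-4):
    """Euler integration of a single BC-AC pair driven by a 50 ms OPL pulse:
    V_B' = -V_B/tau_B - w_minus * V_A + F(t);  V_A' = -V_A/tau_A + w_plus * V_B."""
    n = int(T / dt)
    V_B, V_A = np.zeros(n), np.zeros(n)
    for k in range(n - 1):
        F = 1.0 if k * dt < 0.05 else 0.0                  # OPL drive (pulse)
        V_B[k + 1] = V_B[k] + dt * (-V_B[k] / tau_B - w_minus * V_A[k] + F)
        V_A[k + 1] = V_A[k] + dt * (-V_A[k] / tau_A + w_plus * V_B[k])
    return V_B, V_A

def n_sign_changes(x, eps=1e-7):
    """Count sign changes of a trace, ignoring near-zero samples."""
    s = np.sign(x[np.abs(x) > eps])
    return int(np.sum(s[:-1] != s[1:]))

# oscillatory regime: r = 3, s = 10 (with w+ = 20 Hz) -> damped oscillation
V_B_osc, _ = simulate_pair(0.03, 0.09, 20.0, 200.0)
# monophasic regime: r = 0.2, s = 0.1 -> V_B keeps one sign
V_B_mono, _ = simulate_pair(0.03, 0.006, 20.0, 2.0)
```

In the oscillatory regime, $V_B$ repeatedly changes sign after the pulse, with an exponentially decaying envelope; in the monophasic regime it crosses zero at most once.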

#### 3.2.3 Experimental Cells Spread in the RFs’ Map

To confront our theoretical insight with experiments, we have placed the recorded cells in the RFs’ map, as shown in Figure 5. That is, for each experimentally recorded RGC, we fit the model parameters $\vec{\eta}$ as explained in section 3.1.2, thereby providing an estimation of $(r,s)$, independently in CTL and in CNO conditions. This defines a virtual network, made of identical cells in each layer, where all RGCs respond like the experimental RGC. As mentioned above, the map is only a projection of $\vec{\eta}$, which lives in a 10-dimensional space, onto the 2-dimensional plane $(r,s)$. Some parameters are linked: the mathematical analysis in section A.3.4 shows that the skeleton obtained for fixed values of $\tau_B, w^+$ can be extrapolated to other values $\tau'_B, w'^+$ by the simple rescaling $r' = r$, $s' = \left(\frac{w'^{+}\,\tau'_B}{w^{+}\,\tau_B}\right)^{2} s$. Using this rescaling, the map of Figure 5 has been drawn for specific values $w^+ = 8.5$ Hz and $\tau_B = 30$ ms, corresponding to the mean values of these parameters averaged over the set of experimental cells (in CTL conditions). In this two-dimensional representation, the extra information carried by the other parameters, $\tau_G, w_{GB}, w_{GA}, A_0, b_0$, is lost.
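
This rescaling can be checked numerically on a sketch operator: holding $r$ and the combination $s\,(w^{+}\tau_B)^{2}$ fixed leaves the number of complex eigenvalues unchanged (the chain connectivity and all values below are assumptions of this sketch):

```python
import numpy as np

def n_complex(N, tau_B, r, w_plus, s):
    """Complex-eigenvalue count of a sketch BC-AC operator on a 1D chain
    of N pairs, with tau_A = r * tau_B and w- = s * w+."""
    Gamma = np.eye(N, k=1) + np.eye(N, k=-1)
    L = np.block([[-np.eye(N) / tau_B, -s * w_plus * Gamma],
                  [w_plus * Gamma,     -np.eye(N) / (r * tau_B)]])
    return int(np.sum(np.abs(np.linalg.eigvals(L).imag) > 1e-9))

# reference parameters of the map ...
tau_B, w_plus, r, s = 0.03, 8.5, 3.0, 5.0
# ... and a second network with different tau_B', w+' and the rescaled s'
tau_Bp, w_plusp = 0.05, 20.0
sp = s * (w_plus * tau_B / (w_plusp * tau_Bp)) ** 2
```

The two parameter sets then produce identical real/complex eigenvalue patterns, so the cell lands on the same side of every critical line of the reference map.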

Figure 5 (left) shows the distribution of cells in the map in CTL conditions. A few of them are monophasic, but many are biphasic, with a significant proportion close to the polyphasic region and showing residual oscillations. Figure 5 (right) shows the same cells in CNO conditions. The main observation is that CNO does not dramatically change the distribution of cells in the RFs’ map. This is made more explicit in the bottom panels of Figure 5, where we show the mean and standard deviation of the main model parameters, $\tau_A, \tau_B, w^-, w^+, b_0$, in CTL and CNO conditions, separating the two investigated subclasses, Grik4 and Scnn1a, and separating ON and OFF cells. These parameters are essentially constant, showing that there is no statistical trend induced by CNO. This is in agreement with our previous paper (Hilgen et al., 2022), where we showed that the Scnn1a and Grik4 groups include multiple anatomical cell types, and it suggests that our model and fitting do not introduce any obvious bias.

The situation is radically different when investigating the effect on individual cells. Indeed, the application of CNO makes some cells move their representative point from one region of the RF map to another, thereby drastically changing the cell’s response. Two examples are shown in Figures 6 and 7. In Figure 6, the application of CNO induces a motion of the representative point in the RFs’ map. As the cell is close to the area separating the monophasic from the biphasic phase, this motion dramatically affects the shape of the time response. Note that this motion looks tiny because it is plotted in log scale, but it corresponds to a relatively large change in the properties of the ganglion cell. Actually, the RFs’ map can be refined by plotting the value of the main period $T_1$ (corresponding to the main peak in the power spectrum), as shown in the top right panel. The dashed black lines correspond to the separation between the monophasic, biphasic, and polyphasic regions. Note that the switch from bi- to polyphasic corresponds to additional peaks in the spectrum that are not visible when considering only the period of the first peak, $T_1$. (This transition is seen in Figure 7, where the period of the second peak is shown.) In CTL conditions, the cell is located in the green region with a high $T_1$, of order 600 ms, at the limit of experimental resolution. With CNO, the cell switches to the gray region, where $T_1$ is of order 300 ms. The synthesis panels (same representation as in Figure 3) for CTL (left) and CNO (right) conditions are presented at the bottom of the figure.

In Figure 7 we show a motion inducing a switch from polyphasic to biphasic. Here, CNO drives the cell from the boundary of the polyphasic region into the biphasic region. The representation is the same as in Figure 6, except that the top right panel displays the period $T_2$ of the secondary peak in the power spectrum.

The range of the parameters $r$, $s$ is very broad and may seem to stretch biological plausibility. In Figure 5, $s$ can go up to $s = 200$, which would mean that the ACs’ inhibitory feedback is 200 times stronger than the BCs’ excitation onto ACs; this looks quite implausible. In the same vein, we observe values of $r = 30$, corresponding to ACs having a 30-fold longer time constant than BCs, which might be questionable. These values must not be taken literally, as they reflect several artifacts. First, as explained above, $s$ has been rescaled to match an RF map drawn from a set of reference parameters with, for example, $w^+ = 8.5$ Hz and $\tau_B = 30$ ms. In addition, these synaptic weights are actually effective interactions that mimic the effect of cell populations. These extreme values of $(r,s)$ therefore partly reflect the fact that we are dealing with interactions among multiple cell types in the inner retina.

## 4 Discussion

In this article, we have investigated the role of AC-mediated lateral connectivity in the response of RGCs to visual stimuli. Our conjecture was that these responses are strongly constrained by such lateral connectivity. Based on the mathematical analysis of a network featuring the interactions of BCs, ACs, and RGCs, we were able to produce an analytic form for the spatiotemporal response (receptive field) of each cell type in the model. This finding has significant implications for the usefulness, identifiability (i.e., its parameters can be obtained from experimental data), and interpretability of the model. First, it provides an algorithmic way to fit the model parameters to the light responses recorded from mouse RGCs, using the analytical formula of the RF. This means that we are able not only to find the parameters that best fit the RGCs’ responses but also to infer the possible behavior of the ACs and BCs leading to those responses, even though we do not measure them experimentally. Second, it provides an intuitive understanding of the role of the various model variables and highlights the impact of two phenomenological parameters (with a physical meaning) on the spatiotemporal response: (1) the intensity of the BC-AC interactions and (2) the characteristic timescale of these cells’ responses. This can be summarized in the two-dimensional RF map, where one observes phases corresponding to different response modalities. We were able to validate these modeling results experimentally, based on the ability to pharmacologically modify the level of the ACs’ and RGCs’ neural activity using pharmacogenetics (DREADD-CNO). We now address some caveats and potential extensions of this work.

### 4.1 Polyphasic Phase

Although we observe about $40\%$ of polyphasic cells in the experimental plots (characterized by secondary peaks in the power spectrum), the model has difficulty fitting them properly. This is visible in Figure 5, where no cell lies within the polyphasic region, whereas secondary peaks are clearly visible in the experimental power spectra (see Figure 7). This can be explained by several factors. First, the fitting method, which tries to estimate a model with 10 parameters from a trace with a few hundred points, has clear limits. In particular, the secondary peaks, represented by only a few points in the power spectrum, are hard to capture and require patient fine-tuning. The main limitations may also come from the model itself, as we now explore.

### 4.2 Inhomogeneities

Our model is quite homogeneous. It assumes that there is only one subclass for each cell type, represented by a unique set of parameters, and that the connectivity is fairly regular. This homogeneity was key to answering a specific challenge: better understanding the potential role of the BC-AC network in the RGCs’ response from theoretical and experimental perspectives. This study was done in the spirit of theoretical physics: find a set of canonical equations, grounded in reality (here, biophysics), and mathematically analyze their behavior with a minimal set of reduced (dimensionless) parameters; then confront the model predictions with experiments and propose a simple representation of the observed effects. This type of approach is useful because it allows one to understand the action of cells within a large retinal network at a level of detail and comprehensiveness generally not attainable by numerical simulation alone. Yet one may wonder what would happen if more parameters were taken into consideration to get closer to real retinal networks. For example, what would be the consequences of introducing more sustained and transient bipolar and amacrine cells, or of considering more general forms of connectivity, such as connections between amacrine cells?

where $U_{\beta}^{m}(t) \equiv e^{\lambda_\beta t} \ast K_{B_T}^{m}(t)$ and $\gamma \in m$ stands for “the BC $\gamma$ belongs to population $m$.” Beyond that, we are not yet able to tell what the impact of such heterogeneity on the model would be. The reason is fairly simple: adding heterogeneity increases the number of parameters and, in general, prevents us from going as far as we did in this article, which was constructed in the spirit of keeping mathematical control of the results. On technical grounds, the structure of eigenvalues and eigenvectors is expected to change dramatically, in a way that we cannot generally determine analytically, in contrast to the present work, and exploring it would require numerical investigations in a huge parameter space.

Adding heterogeneity clearly necessitates precisely defining which question we want to answer, and, from this point of view, specific forms of heterogeneity could be studied in an extension of this work. A simple idea would be to introduce some randomness in the parameters characterizing BCs and ACs: for example, considering random leak times, in order to have a distribution of characteristic times $\tau_A, \tau_B$, and studying how the results obtained here degrade when the variance of these times increases, or considering random versions of the incidence matrices $\Gamma_{BA}, \Gamma_{AB}$. An example of the latter has been considered in Souihel and Cessac (2021) to analyze anticipation effects. In any case, such questions would require long investigations beyond the scope of this article.

### 4.3 Rectification

The model includes weak nonlinearities (rectification) that were neglected in the mathematical computations. The effects of such rectification can be mathematically investigated (Cessac, 2022). Mainly, rectification projects the dynamics onto the subspace of nonrectified cells. This means that the dimensionality of the dynamical system changes in time, depending on the stimulus and network parameters, with strong consequences for the spectrum of $L$ and thus for the power spectrum, as briefly discussed in section A.3.4.

### 4.4 Local Nonlinearities versus Network Effects

Additional nonlinearities take place in retinal dynamics. Ion channels have nonlinear behavior, inducing phenomena such as bifurcations and bursting, essential, for example, in the developing retina, where bursting starburst amacrine cells generate retinal waves (Hennig et al., 2009; Karvouniari et al., 2019; Cessac & Matzakou-Karvouniari, 2022). In addition, gain control plays a central role in the response to spatiotemporal stimuli, inducing, for example, retinal anticipation (Berry et al., 1999; Chen et al., 2013; Souihel & Cessac, 2021).

Although our study was limited to responses to full-field flashes, we would like to extend the consequences of our analysis to more complex stimuli. First, the presence of peaks in the power spectrum implies the existence of resonances, that is, preferred frequencies for the RGCs. Exciting a cell at a resonance frequency will produce a maximal response. The Chirp stimulus (Hilgen et al., 2022; Baden et al., 2016) contains a phase where periodic flashes, with constant contrast but increasing frequency, are applied. One frequently observes a bump in the experimental RGCs’ responses that might correspond to such a resonance.
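
Such a resonance can be made explicit in a single-pair linear sketch, where the gain of the BC voltage with respect to a sinusoidal drive peaks away from zero frequency (the single-pair reduction and the parameter values are illustrative assumptions):

```python
import numpy as np

def bc_gain(f_hz, tau_B, tau_A, w_plus, w_minus):
    """|V_B / F| at drive frequency f (Hz) for one linear BC-AC pair:
    (i w + 1/tau_B) V_B + w_minus V_A = F,  (i w + 1/tau_A) V_A = w_plus V_B."""
    iw = 2j * np.pi * f_hz
    return np.abs((iw + 1 / tau_A) /
                  ((iw + 1 / tau_B) * (iw + 1 / tau_A) + w_plus * w_minus))

f = np.linspace(0.0, 30.0, 3001)
g = bc_gain(f, 0.03, 0.09, 20.0, 200.0)   # oscillatory-regime parameters
f_res = f[np.argmax(g)]                   # preferred (resonant) frequency
```

For these parameters the gain peaks at roughly 10 Hz, far above the zero-frequency gain: a periodic flash near that frequency would maximally excite the cell.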

In addition to preferred temporal frequencies, our analysis shows that the response of RGCs, induced by the network, may also involve specific spatial scales. Mathematically, these scales appear in the eigenvectors of the transport operator (see equation A.28 in section A.3.5). The practical implication would be that when a localized, time-periodic stimulus is presented at a resonant frequency with a small radius, and this radius is slowly increased, one may observe scales where the response is maximal. One of these scales may correspond to the size of the RF, but we conjecture that there should be larger scales where this phenomenon appears. Blocking the AC synapses (e.g., with strychnine for glycinergic cells) would then be a way to disentangle the intrinsic responses of cells from the network-induced responses. More generally, the existence of temporal resonances and preferred spatial scales would also induce a resonant response to objects moving at the appropriate speed. Such effects could also be related to the mechanisms giving rise to anticipatory waves (Menz et al., 2020).

### 4.5 Conclusion

This research provides mathematical insights for exploring the potential role of the network of amacrine cells in visual processing. First, it brings into the field of retinal modeling methods and concepts from dynamical systems theory, especially the idea that concerted population activities can be understood with a few reduced parameters while still reproducing experimental observations. Second, it makes a step toward formalizing the concept that the response of RGCs to stimuli is not only the result of the cells’ intrinsic characteristics (e.g., their morphology) but also depends on their interactions with other cells and on the stimulus itself.

This article is a first step in confronting this approach with experiments, as a proof of concept opening up several research tracks. First, it could be used to infer the potential structure of spike correlations generated by an RGC population responding to a moving object. While it is easy to show in this model that lateral inhibition decorrelates the response to full-field stimuli, the situation is dramatically different when an object is moving in the visual field. The resulting wave of activity generates nontrivial, transient, spatiotemporal correlations that may contain fundamental information deciphered by the cortex. The corresponding nonstationary spike train distributions can be studied mathematically (Cessac, 2011; Cessac et al., 2009, 2021) and numerically (Cessac et al., 2017). Pushing this formalism forward could therefore be key to better understanding how the retina encodes visual information in a visual world made of motion.

Second, granted improvements in biological plausibility, this class of models could be used to fit experimental retinal responses online, by adapting the stimulus to the model prediction in a closed-loop process (Benda et al., 2007). This is in the spirit of current research trends attempting to construct data-driven models of the retina (Schröder et al., 2020), although our approach aims at reducing the number of parameters as much as possible and maximizing mathematical control, so as to avoid the black-box effect.

Finally, our work could be used in future studies to explore the role of other RGC subclasses or other retinal neurons and their interactions. In addition, it could be used to disassemble the components of other retinal circuits by manipulating the activity of specific neurons. It could also potentially benefit research in other parts of the nervous system, as fundamental properties of the inner retina are shared with other parts of the brain.

## Appendix

### A.1 Derivation of the Model Equations and Properties

#### A.1.1 Generic Equations for Voltages

We start from the fundamental equation of neuronal dynamics (charge conservation) and make several simplifying assumptions.

First, we assume that neurons are reduced to points. They all have the same leak conductance, $g_L$, the same leak reversal potential, $E_L$, and the same membrane capacitance, $C$. They are passive (no active ion channels). This last assumption is based on the fact that most of the retinal neurons considered here (BCs and ACs) are not spiking. From these assumptions, we dispense with the equations for the activation-inactivation variables present in most spiking models (Ermentrout & Terman, 2010). For RGCs, spiking is ruled by an LN-type model (see the main text).

As exposed in the main text, we model the CNO effect by a current of the form $-g^{T}_{CNO}\left(V_q - E^{T}_{CNO}\right)$, where $T$ is the cell type (e.g., ACs or RGCs), so that the CNO conductance is constant and depends only on the cell type, and $E^{T}_{CNO}$ is the corresponding reversal potential. We insist, however, that this is only a modeling shortcut, allowing the mathematical treatment of inhibitory and excitatory effects on an equal footing with only one parameter, the CNO conductance $g^{T}_{CNO}$.

#### A.1.2 Rest States, Characteristic Times, and CNO Dependence

where $I_N$ is the $N \times N$ identity matrix and $\vec{1}_N$ is the unit vector in dimension $N$. Here, we have assumed that the rest states are above threshold and that $I - \tau_A\tau_B W_{BA}W_{AB}$ and $I - \tau_A\tau_B W_{AB}W_{BA}$ are invertible. See https://team.inria.fr/biovision/mathematicalderivationscno/ for the general case.

The system of equations A.7 and A.8 is thus nonlinearly entangled. In particular, a feedback effect appears in loop terms like $W_{BA}W_{AB}$: for example, the term $\left(I_N - \tau_A\tau_B W_{BA}W_{AB}\right)^{-1}$ involves loops $\left(W_{BA}W_{AB}\right)^{n}$ of all orders $n > 0$, with an amplitude decaying with $n$. Consider now the role of the parameter $\zeta_A$. It directly affects ACs and indirectly, through the network, the activity of BCs and RGCs. One observes, first, a direct effect on the ACs’ rest state. Looking at the numerator of equation A.8, increasing $\zeta_A$ would have the effect of depolarizing the cell (if $E^{A}_{CNO} > 0$) or hyperpolarizing it (if $E^{A}_{CNO} < 0$), if $\tau_A$ and $\tau_B$ were independent of CNO. However, CNO acts on these times as well. The net effect of $\zeta_A$ is thus nonlinear and cannot be anticipated by simple arguments.
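
The loop expansion can be checked numerically: whenever the spectral radius of the loop gain $\tau_A\tau_B W_{BA}W_{AB}$ is below 1, the Neumann series of feedback loops converges to the matrix inverse (the random matrices below are illustrative, not fitted interactions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, tau_A, tau_B = 5, 0.01, 0.03
# illustrative random interaction matrices, scaled so the loop gain is small
W_BA = 10.0 * rng.standard_normal((N, N))
W_AB = 10.0 * rng.standard_normal((N, N))
loop = tau_A * tau_B * W_BA @ W_AB          # one BC -> AC -> BC feedback loop
assert np.max(np.abs(np.linalg.eigvals(loop))) < 1.0

# sum of loops of all orders n >= 0 equals (I - loop)^(-1)
series = sum(np.linalg.matrix_power(loop, n) for n in range(50))
exact = np.linalg.inv(np.eye(N) - loop)
```

The amplitude of the $n$th-order loop decays geometrically with $n$, which is why low-order loops dominate the entangled rest states.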

#### A.1.3 Mathematical Analysis of Network Dynamics

In the following, the notation $\mathrm{diag}\left(x_n\right)_{n=1}^{N}$ denotes a diagonal $N \times N$ matrix with diagonal entries $x_n$.

*Joint dynamics.* The joint dynamics of all cell types is given by the dynamical system 2.3. We use Greek indices $\alpha, \beta, \gamma = 1 \ldots 3N$ and define the state vector $\vec{X}$ with entries

We remark that equation A.11 has a specific product structure: the dynamics of RGCs is driven by BCs and ACs with no feedback. This means that one can study first the coupled dynamics of BCs and ACs and then the effect on RGCs.

*Linear evolution.* We consider the evolution of equation A.11 from an initial time $t_0$. Typically, $t_0$ is a reference time where the network is at rest, before the stimulus is applied. The dynamical system has almost the form of a nonautonomous linear system driven by the term $\vec{F}(t)$. There is, however, a weak nonlinearity due to the piecewise linear rectification appearing in the term $\vec{R}(\vec{X})$. Therefore, when the voltages of all cells are large enough, the system is linear. Mathematically, there is a domain of $\mathbb{R}^{3N}$:

*Derivation of equation 2.10.* We denote by $\lambda_\beta, \beta = 1 \ldots 3N$, the eigenvalues of $L$ and by $P_\beta$ its eigenvectors (the columns of the matrix $P$ transforming $L$ into diagonal form). We consider first the case $\vec{C} = \vec{0}$. We have then, from equation A.14,

When $\vec{C} \neq \vec{0}$, there is an additional term corresponding to the rest state, equation A.15.

*Eigenvalues and eigenvectors of $L$.*

• Eigenvalues and eigenvectors. We next consider the case where $\Gamma_{BA} = \Gamma_{AB}$. We denote by $\kappa_n, n = 1 \ldots N$, the eigenvalues of $\Gamma_{BA}$, ordered as $\kappa_1 \leq \kappa_2 \leq \ldots \leq \kappa_N$, and by $\vec{\psi}_n$, $n = 1 \ldots N$, the normalized eigenvectors.

We now discuss the limit when $w^-$ or $w^+$ or both tend to 0. If $w^- = 0$, $\rho_n = w^+\kappa_n\tau$ from equation A.18. If $w^+ = 0$, there are two solutions of equation A.18, $\rho_n = 0$ or $\rho_n = \frac{1}{\tau w^-\kappa_n}$. Finally, when $w^- = w^+ = 0$, $\rho_n = 0$ and the ansatz, equation A.16, does not apply. Actually, in this case, $L$ is diagonal, the $N$ first eigenvalues are $-\frac{1}{\tau_B}$, and the $N$ next eigenvalues are $-\frac{1}{\tau_A}$. We have, in this case, $\lambda_n^+ = -\frac{1}{\tau_B}$ and $\lambda_n^- = -\frac{1}{\tau_A}$. Therefore, we order the eigenvalues and eigenvectors of $M$ such that the $N$ first eigenvalues are $\lambda_n^+, n = 1 \ldots N$, and the $N$ next are $\lambda_n^-, n = N+1 \ldots 2N$.
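
The fully decoupled limit $w^- = w^+ = 0$ can be checked directly; the block-diagonal sketch below (illustrative sizes and time constants) recovers the two families of eigenvalues:

```python
import numpy as np

N, tau_B, tau_A = 4, 0.03, 0.09
# with w- = w+ = 0 the off-diagonal blocks vanish and L is diagonal
L = np.block([[-np.eye(N) / tau_B, np.zeros((N, N))],
              [np.zeros((N, N)), -np.eye(N) / tau_A]])
evs = np.sort(np.linalg.eigvals(L).real)
# N eigenvalues equal to -1/tau_B, N eigenvalues equal to -1/tau_A
```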

• Rectification. In this article, we have essentially considered a situation where cells are not rectified, whereas the full model, equation 2.3, includes rectification, in agreement with realistic biological systems. The mathematical effect of rectifying a cell $A_j$ is to set to zero the corresponding row in the matrix $W_{BA}$. This has several consequences. First, we cannot apply the useful ansatz used in this section, namely $\Gamma_{BA} = \Gamma_{AB}$. In addition, the vanishing of a single row of $L$ completely modifies its spectrum. However, the thresholding in the rectification corresponds to partitioning the phase space of the model, a compact subset of $\mathbb{R}^{3N}$, into convex subdomains delimited by hyperplanes. In each of these domains, the matrix $W.\vec{R}(\vec{X})$ appearing in equation A.11 is linear, with a number of zero eigenvalues corresponding to the number of rectified cells. This matrix acts as a projector onto the complementary subspace of its kernel. In each of these subdomains, equation 3.1 applies. One can actually compute, for a given stimulus, the times of entrance into and exit from a new subdomain, with the effect of modifying the eigenvalues and eigenvectors appearing in equation 3.1. The resulting equation is quite complex, though, and will require further investigations. (See Cessac, 2020, for more details.)

The quantum numbers $n_x, n_y$ define a wave vector $\vec{k}_n = \left(\frac{n_x\pi}{L+1}, \frac{n_y\pi}{L+1}\right)$, corresponding to wavelengths $\frac{L+1}{n_x}, \frac{L+1}{n_y}$. Hence, the first eigenmode $(n_x = 1, n_y = 1)$ corresponds to the largest space scale (the scale of the whole retina) with the smallest eigenvalue (in absolute value), $s(1,1) = 2\left(\cos\frac{\pi}{L+1} + \cos\frac{\pi}{L+1} - 2\right)$. Each of these eigenmodes is related to a characteristic time $\tau_n = \frac{1}{\left|\lambda_n\right|}$.
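
These eigenmodes are those of the discrete Laplacian on a square lattice; a numerical sketch (assuming an $L \times L$ nearest-neighbour grid) recovers the fundamental mode:

```python
import numpy as np

L_side = 10                                        # retina modelled as an L x L grid
A1 = np.eye(L_side, k=1) + np.eye(L_side, k=-1)    # 1D chain adjacency
I = np.eye(L_side)
# 2D discrete Laplacian: neighbour coupling along x and y, minus 4 on the diagonal
Lap = np.kron(A1, I) + np.kron(I, A1) - 4 * np.kron(I, I)
evs = np.sort(np.linalg.eigvalsh(Lap))

# fundamental mode (n_x = n_y = 1): smallest eigenvalue in absolute value
s11 = 2 * (np.cos(np.pi / (L_side + 1)) + np.cos(np.pi / (L_side + 1)) - 2)
```

The eigenvalue closest to zero matches $s(1,1)$, the mode whose wavelength spans the whole lattice.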

The eigenvalues $\kappa_n$ can be positive or negative. However, from equation A.22, this has no impact on the eigenvalues of $L$, as what matters is $\kappa_n^2$. This induces, however, a symmetry $\kappa_n \to -\kappa_n$ that can be seen in the skeleton of Figure 4 (keeping in mind that this figure is in log scale).

## Acknowledgments

This project was funded by the Leverhulme Trust (RPG-2016-315 to E.S. and B.C.) and by Newcastle University (Faculty of Medical Sciences Graduate School).

## References
