We consider a model of basic inner retinal connectivity where bipolar and amacrine cells interconnect, and both cell types project onto ganglion cells, modulating the output they send to the brain's visual areas. We derive an analytical formula for the spatiotemporal response of retinal ganglion cells to stimuli, taking into account the effects of amacrine cell inhibition. This analysis reveals two important functional parameters of the network: (1) the intensity of the interactions between bipolar and amacrine cells and (2) the characteristic timescale of their responses. Both parameters have a profound combined impact on the spatiotemporal features of the responses of retinal ganglion cells to light.

The validity of the model is confirmed by faithfully reproducing pharmacogenetic experimental results obtained by stimulating excitatory DREADDs (Designer Receptors Exclusively Activated by Designer Drugs) expressed in subclasses of ganglion cells and amacrine cells, thereby modifying the inner retinal network's activity in response to visual stimuli in a complex, entangled manner. Our mathematical model allows us to explore and decipher these complex effects in a manner that would not be feasible experimentally and provides novel insights into retinal dynamics.

Visual processing involves some of the most complex neural networks in the vertebrate central nervous system (Marr, 1982; Chalupa & Werner, 2004; Besharse & Bok, 2011; Daw, 2012). The retina is the entry point to our visual system. Located at the back of the eye, this thin neural tissue receives the light that the cornea and lens have captured from different parts of the visual scene, converts it into electrical signals, and finally transmits these signals to the brain's visual areas. In particular, light-evoked activity follows a vertical excitatory pathway in the retina, from photoreceptors (PRs) to bipolar cells (BCs) and onward to retinal ganglion cells (RGCs), modulated laterally by two classes of inhibitory interneurons: horizontal cells (HCs) and amacrine cells (ACs; see Figure 1, left). RGCs serve as a bridge between the retina and the brain, conveying highly processed and integrated signals from the upstream retinal neurons to downstream visual processing cortical areas. Amazingly, the human brain can recreate images by interpreting the parallel streams of information carried by about 1 million RGCs, the sole retinal output neurons. This ability is partially due to the astonishing functional, anatomical, and molecular diversity across the retinal layers. However, how the different cell classes interact to ultimately encode, via RGCs, a visual scene into spike trains (action potentials) deciphered by the brain remains largely a mystery.

Figure 1:

Translation of the retinal circuit to a computational network model. (Left) Schematic of the retina. Light activates the photoreceptor cells (PRs), which transduce the input into a cascade of biochemical and electrical events that can stimulate BCs and onward RGCs. This vertical excitatory pathway is modulated by inhibitory interneurons comprising two groups: horizontal (HCs) and amacrine cells (ACs). All of these neural signals are integrated by RGCs and finally converted into action potentials going to the brain. (Right) Schematic view of the model. The joint integration of PRs and HCs in a limited region of space is modeled by a spatiotemporal kernel mathematically defining the OPL input to a BC, corresponding to equation 2.1. The convolution of this kernel with the stimulus drives the BC evolution, modulated by inhibitory connections with ACs. BCs indeed make excitatory synaptic connections with ACs and ACs inhibit BCs. Finally, RGCs pool over many BCs and ACs in their neighborhood. Note that the cells' connections on the right panel are shown just for illustration and do not necessarily correspond to the connectivity used in the model. Cells are organized along a square lattice viewed here in a 3D perspective.



RGCs are indeed embedded in a complex network, and their response is roughly driven by two “controllers”: (1) the output of BCs, which includes both the intrinsic response properties of these cells and the actions of HCs and ACs on them (lateral connectivity), and (2) the direct input from ACs via chemical synapses or gap junctions, which helps spike synchrony between neighboring RGCs (Masland, 2012b; Demb & Singer, 2015). As a consequence, the response of RGCs to visual stimuli depends not only on the physiological characteristics of the cells (Sanes & Masland, 2015), but also on the network they are embedded in and on the stimulus itself (Cessac, 2022). Previous studies have thus attempted to investigate how the inner retinal neurons are organized into parallel circuits across different cell types and converge onto RGCs (Wässle, 2004; Gollisch & Meister, 2010). This has been studied extensively at the level of bipolar cells, leading to a fairly good understanding of their function (Euler et al., 2014). Other studies have investigated the functional role of amacrine cell (AC) types in retinal processing (Asari & Meister, 2012; Franke & Baden, 2017; Diamond, 2017; Schröder et al., 2020), suggesting either specific functions, such as direction selectivity (starburst ACs), or more general computations, like motion anticipation (Berry et al., 1999; Souihel & Cessac, 2021). A number of studies have also focused on how retinal interneurons contribute to the sensitivity of RGCs to activity patterns. For example, de Vries et al. (2011) studied the spatiotemporal effects of ACs on RGC outputs and proposed mechanisms related to predictive coding in the retina. Manu et al. (2022) and Ichinose et al. (2014) manipulated HCs and ACs, the former by injecting current patterns and the latter by using pharmacology, to directly measure the effect on the retinal receptive field surround. Protti et al. (2014) studied the spatial organization of excitatory and inhibitory synaptic inputs onto ON and OFF RGCs in the primate retina using pharmacology. Nevertheless, the potential role of the BC-AC network in shaping the RGC response, from both a theoretical and an experimental perspective, has not been sufficiently explored yet. The scope of the study presented in this article is to take one step further in this direction.

Numerous models of the retina have been proposed, with different levels of biological detail and across multiple spatial and temporal scales. Phenomenological models have become quite popular due to their simplicity and their efficiency in fitting experimental data and explaining observations; examples include the linear-nonlinear model (Chichilnisky, 2001; Paninski, 2003; Simoncelli et al., 2004; Schwartz et al., 2006), generalized linear models (Pillow et al., 2008), and black-box convolutional neural network models of retinal function (McIntosh et al., 2017). However, these models are agnostic to the biophysical details that underlie the input-output transform of the visual system. On the other hand, mechanistic models exhibit a direct relationship between the underlying biophysics of neurons and the parameters of the model. Well-known examples are the Hodgkin-Huxley (Hodgkin & Huxley, 1952), the FitzHugh-Nagumo (FitzHugh, 1969; Nagumo et al., 1962), and the Morris-Lecar (Morris & Lecar, 1981) models, but they were mainly designed to describe spike generation, while most of the neurons in the retina (except RGCs) do not spike. In addition, this type of model turns out to be cumbersome to analyze and simulate, especially when dealing with collective network dynamics.

The model presented in this article falls into both categories and is directly inspired by previous work (Berry et al., 1999; Chen et al., 2013; Souihel & Cessac, 2021). It is a multistage, phenomenological model, with simplifications regarding the complex retinal structure. Nevertheless, it aims to be relatively precise from a biological perspective by realistically reproducing RGCs’ responses to light recorded experimentally. Our primary goal is to gain insights into the underlying biophysical processes giving rise to certain experimentally observed phenomena rather than to propose an exact model of the retina.

On theoretical grounds, we consider a network of BCs connected via ACs, both cell types being connected with chemical synapses to RGCs. Based on a mathematical study, we derive an analytical formula for the RGCs’ receptive field (RF) that takes into account the lateral ACs’ connectivity and shows how the response of RGCs to spatiotemporal stimuli is shaped. In particular, we emphasize the role of the average intensities of the BCs → ACs and ACs → BCs interactions and of the characteristic timescales of these cells’ responses. Varying these parameters acts on the shape of the response to light, with potentially prominent effects such as a switch from monophasic to biphasic in the temporal RF.

We illustrate our predictions by analyzing experimental data obtained from the pharmacological action of excitatory DREADDs (designer receptors exclusively activated by designer drugs) in genetically modified mice. DREADDs are activated by “designer drugs” such as clozapine-N-oxide (CNO), resulting in an increase in free cytoplasmic calcium and a profound increase in excitability in the cells that express them (Roth, 2016). We found DREADD expression in subsets of both RGCs and ACs in the inner nuclear layer (INL; Hilgen et al., 2022). In these conditions, CNO would act on both ACs and RGCs, providing a way to unravel the entangled effects of (1) direct excitation of DREADD-expressing RGCs combined with increased inhibitory input onto RGCs originating from DREADD-expressing ACs and (2) a change in the cells' response timescale (via a change in membrane conductance), thereby providing an experimental setup to validate our theoretical predictions.

We propose a simplified model for the network of BCs-ACs-RGCs. Their concerted activity in response to a visual stimulus is described by a large-dimensional dynamical system whose mathematical study allows an explicit computation of the RFs of BCs, ACs, and RGCs and, more generally, allows anticipating the effects resulting from light stimulation conjugated with the network activity due to ACs' lateral inhibition, in control conditions and in the presence of CNO. Computing the receptive fields of all cell types (especially RGCs) allows us to disentangle the concerted effect of ACs' lateral connectivity and CNO on the RF of RGCs and provides an excellent agreement with experimental data. We argue, on the basis of the model and analytical computations, that the network of BCs-ACs shapes the RF of RGCs via two main reduced dimensionless parameters, one characterizing the ratio of the timescales of ACs’ and BCs’ responses and the other characterizing the ratio between the BCs → ACs and ACs → BCs synaptic weights. This defines a two-dimensional map, called the “RFs’ map.” We show how the experimental data settle in this map, and we link the observed changes in the shape of the RF when applying CNO to a displacement in this map. Finally, we extrapolate our model predictions to situations where the RGCs' response could not be explained by the physiological properties of the cell alone but would involve the action of ACs.

2.1  The Retina Model

2.1.1  Structure

We assimilate the retina to a superimposition of three layers, each corresponding to a cell type (BCs, ACs, RGCs) and being a flat, two-dimensional square of edge length L mm, where spatial coordinates are denoted x, y (see Figure 1).

We consider a simplified form of connectivity, inspired by the real connectivity in the retina and illustrated in Figure 1. First, there are as many BCs as ACs and RGCs (N cells per type, so that the total number of cells is 3N). BCs are labeled with an index i = 1, ..., N; ACs with an index j = 1, ..., N; and RGCs with an index k = 1, ..., N. BCs are connected to ACs. We note $W_{B_i A_j} \le 0$ the weight of the inhibitory synapse from AC j to BC i, with the convention that $W_{B_i A_j} = 0$ when AC j is not connected to BC i. Likewise, we note $W_{A_j B_i} \ge 0$ the weight of the excitatory synapse from BC i to AC j, $W_{G_k B_i} \ge 0$ the weight of the excitatory synapse from BC i to RGC k, and $W_{G_k A_j} \le 0$ the weight of the inhibitory synapse from AC j to RGC k. We note $W_{BA}$ the (square) matrix of connections from ACs to BCs, and so on. Note that the model could accommodate connections within a cell type as well (e.g., ACs to ACs or RGCs to RGCs via gap junctions), but this would complicate the theoretical analysis.

2.1.2  Visual Input

Each neuron has a receptive field (RF), a specific region of the visual field where a light stimulus will modify the activity of this neuron. The term RF is not limited to the spatial region but is often extended to include the temporal structure within this region. The RF usually exhibits a center-surround organization shared by many cell types in the retina (Masland, 2012a). Each BC receives synaptic inputs from its upstream circuitry, a combination of dendritic excitatory inputs from photoreceptors (rods and cones) and inhibitory inputs from horizontal cells (Franke & Baden, 2017), occurring at the level of the outer plexiform layer (OPL), and this interaction shapes their RF.

A popular approach to simplify the complex processes involved is to model the RF as a single spatiotemporal linear filter that essentially represents the opposition between the center of the receptive field, driven by photoreceptors, and the surround signal transmitted by horizontal cells. The membrane potential of the BC’s soma can then be linearly approximated by the convolution of a spatiotemporal kernel $K_{B_i}$, featuring the biophysical processes at the OPL, with the visual stimulus S(x, y, t). As we do not consider color sensitivity here, S characterizes a black-and-white scene, with a contrast level in [0, 1]. The voltage of BC i is driven by the stimulus via the OPL input:
$$V_i^{\mathrm{drive}}(t) = \left[K_{B_i} \overset{x,y,t}{*} S\right](t) = \int_{x=-\infty}^{+\infty} \int_{y=-\infty}^{+\infty} \int_{s=-\infty}^{t} K(x - x_i, y - y_i, t - s)\, S(x, y, s)\, dx\, dy\, ds,$$
(2.1)
where $\overset{x,y,t}{*}$ denotes space-time convolution. For simplicity, we consider only one family of BCs, so that the kernel K is the same for all BCs. What changes is the center of the RF, located at $(x_i, y_i)$, which also corresponds to the coordinates of BC i. We consider in the article a separable kernel $K(x, y, t) = K_S(x, y)\, K_T(t)$, where $K_S$ is the spatial part and $K_T$ the temporal part. We restrict ourselves to ON or OFF BCs with a monophasic spatiotemporal kernel of the form
$$K_T(t) = \left[A_0\, \frac{t^2}{2 \tau_{RF}^3}\, e^{-\frac{t}{\tau_{RF}}} + b_0\right] H(t),$$
(2.2)
where H(t) is the Heaviside function. The parameter $A_0$ controls the amplitude of the OPL input and can be positive (ON BC) or negative (OFF BC), while $b_0$ controls the level of residual polarization observed in experiments when light stimulation has stopped. This parameter is explained further below. Experimentally, the BC temporal response can be biphasic in time because of receptor desensitization and because of HC feedback onto PR terminals. Note, however, that the BC’s response can also be monophasic (Schreyer, 2018). As we want to single out the effect of lateral inhibition on the BC’s response, we consider a monophasic temporal OPL input and show that, by tuning the lateral connectivity parameters in the model, one can indeed obtain biphasic shapes, as expected, but also more complex dynamical responses.

The spatial part, KS, is a classical difference of gaussians. This spatiotemporal kernel is illustrated in Figure 2.
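As an illustration, this separable OPL kernel can be sketched in a few lines of Python. The parameter values below ($A_0$, $b_0$, $\tau_{RF}$, the center and surround widths $\sigma_c$, $\sigma_s$, and the surround weight $w_s$) are illustrative choices, not values fitted in this article:

```python
import numpy as np

# Sketch of the separable OPL kernel K(x, y, t) = K_S(x, y) * K_T(t),
# with illustrative (not fitted) parameter values.

def K_T(t, A0=1.0, b0=0.0, tau_RF=0.1):
    """Monophasic temporal kernel, equation 2.2 (H(t) implemented as t >= 0)."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0.0,
                    A0 * t**2 / (2 * tau_RF**3) * np.exp(-t / tau_RF) + b0,
                    0.0)

def K_S(x, y, sigma_c=0.05, sigma_s=0.15, w_s=0.8):
    """Spatial difference of gaussians: excitatory center minus weighted surround."""
    g = lambda s: np.exp(-(x**2 + y**2) / (2 * s**2)) / (2 * np.pi * s**2)
    return g(sigma_c) - w_s * g(sigma_s)

# With b0 = 0, the monophasic temporal kernel peaks at t = 2 * tau_RF.
t = np.linspace(0.0, 1.0, 10001)
t_peak = t[np.argmax(K_T(t))]
```

With $b_0 = 0$, setting the derivative of equation 2.2 to zero shows that the temporal kernel peaks at $t = 2\tau_{RF}$, a quick consistency check on the implementation.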

Figure 2:

OPL kernel. (Left) Spatial part in one spatial dimension. (Right) Temporal part.



2.1.3  Voltage Dynamics

In the model, neurons are characterized by their voltage. Although RGCs are spiking, we are interested only in their voltage variations as a function of the stimulus and network effects. Spiking could be obtained from the voltage, for example, by a linear-nonlinear-Poisson (LNP) process for spiking neurons (Berry et al., 1999; Chen et al., 2013; Souihel & Cessac, 2021). We note $V_{B_i}$ the voltage of BC i, $V_{A_j}$ the voltage of AC j, and $V_{G_k}$ the voltage of RGC k. The joint dynamics of the voltages is given by the following dynamical system:
$$\begin{aligned}
\frac{dV_{B_i}}{dt} &= -\frac{V_{B_i}}{\tau_{B_i}} + \sum_{j=1}^{N} W_{B_i A_j}\, \mathcal{N}_A\!\left(V_{A_j}\right) + F_{B_i}(t), && i = 1, \dots, N,\\
\frac{dV_{A_j}}{dt} &= -\frac{V_{A_j}}{\tau_{A_j}} + \sum_{i=1}^{N} W_{A_j B_i}\, \mathcal{N}_B\!\left(V_{B_i}\right) + \zeta_A, && j = 1, \dots, N,\\
\frac{dV_{G_k}}{dt} &= -\frac{V_{G_k}}{\tau_{G_k}} + \sum_{i=1}^{N} W_{G_k B_i}\, \mathcal{N}_B\!\left(V_{B_i}\right) + \sum_{j=1}^{N} W_{G_k A_j}\, \mathcal{N}_A\!\left(V_{A_j}\right) + \zeta_G, && k = 1, \dots, N.
\end{aligned}$$
(2.3)
This form is derived from first principles, using simplifying hypotheses, in section A.1 of the appendix. The rectification term,
$$\mathcal{N}_A\!\left(V_{A_j}\right) = \begin{cases} V_{A_j} - \theta_A, & \text{if } V_{A_j} > \theta_A;\\ 0, & \text{otherwise}, \end{cases}$$
(2.4)
ensures that the synapse $j \to i$ is inactive when the presynaptic voltage $V_{A_j}$ is smaller than the threshold $\theta_A$. The same holds for the rectification term $\mathcal{N}_B(V_{B_i})$ with a threshold $\theta_B$.
The term
$$F_{B_i}(t) = \frac{V_i^{\mathrm{drive}}(t)}{\tau_{B_i}} + \frac{d V_i^{\mathrm{drive}}}{dt}(t),$$
(2.5)
is the stimulus-driven input to BCs. It takes this form to ensure that, in the absence of AC coupling, $V_{B_i}(t) = V_i^{\mathrm{drive}}(t)$.
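The design of $F_{B_i}$ can be checked numerically. The following minimal sketch (forward Euler, with illustrative parameters and thresholds set to zero, an assumption made here for illustration only) integrates a single BC-AC pair of equation 2.3 and verifies that, with the AC → BC weight switched off, the BC voltage reproduces its drive, as equation 2.5 is built to guarantee:

```python
import numpy as np

# Forward-Euler sketch of equation 2.3 for one BC-AC pair (illustrative values).
# With the AC -> BC weight w_minus = 0, the drive term F_B of equation 2.5
# makes V_B(t) track V_drive(t) exactly (up to integration error).

tau_B, tau_A, w_minus, w_plus, zeta_A = 0.1, 0.1, 0.0, 0.5, 0.0
dt, T = 1e-4, 2.0
steps = int(T / dt)

V_drive = lambda t: np.sin(2 * np.pi * t) * (1 - np.exp(-t))   # smooth toy drive, V_drive(0) = 0
dV_drive = lambda t: (V_drive(t + 1e-6) - V_drive(t - 1e-6)) / 2e-6

relu = lambda v: max(v, 0.0)            # rectification N(.) with threshold 0
V_B, V_A, err = 0.0, 0.0, 0.0
for n in range(steps):
    t = n * dt
    F_B = V_drive(t) / tau_B + dV_drive(t)              # equation 2.5
    dV_B = -V_B / tau_B - w_minus * relu(V_A) + F_B     # inhibitory AC -> BC synapse
    dV_A = -V_A / tau_A + w_plus * relu(V_B) + zeta_A   # excitatory BC -> AC synapse
    V_B, V_A = V_B + dt * dV_B, V_A + dt * dV_A
    err = max(err, abs(V_B - V_drive(t + dt)))
```

Switching $w^-$ to a positive value then makes $V_B$ deviate from $V^{\mathrm{drive}}$, which is precisely the network effect analyzed in the rest of the article.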

CNO binds to specifically designed receptors, leading to an excitatory or inhibitory response depending on the receptors (Gi or Gq; Roth, 2016; Urban & Roth, 2015). Here, we propose a simplification of the actual biological process for mathematical convenience. We model the CNO effect by a current of the form $-g_{CNO}^T \left(V - E_{CNO}^T\right)$, where V is the voltage of a cell sensitive to CNO; T is the cell type (only ACs or RGCs, according to the experimental setup); $g_{CNO}^T$ is the conductance of channels sensitive to CNO, which is zero in the absence of CNO and increases with CNO concentration; and $E_{CNO}^T$ is the corresponding reversal potential, positive for excitatory CNO and negative for inhibitory CNO.

As CNO modifies the membrane conductance, it also induces a change of polarization, characterized by the parameter $\zeta_A$ for ACs and $\zeta_G$ for RGCs, with the general form (see section A.1)
$$\zeta_T = \frac{E_{CNO}^T}{C}\, g_{CNO}^T,$$
(2.6)
where $T = A, G$ and C is the membrane capacitance, assumed to be the same for all cells.

In equation 2.3, $\tau_{B_i}, \tau_{A_j}, \tau_{G_k}$ are the characteristic integration times of BC i, AC j, and RGC k. As discussed in the next section and in section A.2 (appendix), they depend on CNO in a complex, nonlinear way.

2.1.4  Reduction of Parameters

The model depends on many parameters that constrain the dynamical evolution of the system (see equation 2.3). Although some of our results, like the explicit form for RFs (see equation 3.6), are quite general, it is easier for analytic derivations and for simulations to further simplify these parameters.

First, we consider that the synaptic weights from ACs to BCs are controlled by a unique parameter, $w^- > 0$, with $W_{BA} = -w^- \Gamma_{BA}$ (inhibitory synapses), where $\Gamma_{BA}$ is an adjacency matrix ($\Gamma_{B_i A_j} = 1$ if the jth AC connects to the ith BC, and $\Gamma_{B_i A_j} = 0$ otherwise). Likewise, $W_{AB} = w^+ \Gamma_{AB}$ with $w^+ > 0$ (excitatory synapses). We use a simple form of connectivity, where $\Gamma_{AB} = \Gamma_{BA}$ are nearest-neighbor adjacency matrices with null boundary conditions. The retina is regularly tiled by different cell types, so this approximation is reasonable, although, in reality, one cell connects to more than four neighbors. Our “cells” must therefore be considered “effective” cells with “effective” interactions. In particular, our parameters $w^-, w^+$ lump together many synaptic contacts. This simplification considerably reduces the number of parameters shaping the synaptic interactions and affords analytic computations (see appendix).
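For concreteness, the nearest-neighbor adjacency matrix $\Gamma$ with null boundary conditions can be built as follows (a sketch; the lattice size n is arbitrary):

```python
import numpy as np

# Nearest-neighbor adjacency matrix Gamma on an n x n square lattice with
# null boundary conditions (no neighbors outside the lattice). Used as
# Gamma_AB = Gamma_BA in the reduced connectivity W_BA = -w_minus * Gamma.

def lattice_adjacency(n):
    idx = lambda r, c: r * n + c
    G = np.zeros((n * n, n * n), dtype=int)
    for r in range(n):
        for c in range(n):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < n and 0 <= cc < n:   # null boundary: no wrap-around
                    G[idx(r, c), idx(rr, cc)] = 1
    return G

G = lattice_adjacency(5)
```

Interior cells have 2d = 4 neighbors on this two-dimensional lattice, while edge and corner cells have fewer, which is what "null boundary conditions" means here.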

Following Berry et al. (1999), Chen et al. (2013), and Souihel and Cessac (2021), the synaptic weight matrices
$$W_{G_k B_i} = w_{GB}\, \frac{e^{-\frac{d^2(B_i, G_k)}{2\sigma_p^2}}}{\sqrt{2\pi}\,\sigma_p}, \qquad W_{G_k A_j} = w_{GA}\, \frac{e^{-\frac{d^2(A_j, G_k)}{2\sigma_p^2}}}{\sqrt{2\pi}\,\sigma_p},$$
(2.7)
are gaussian pooling matrices, with $w_{GB} > 0$, $w_{GA} < 0$, and $d(B_i, G_k)$ the Euclidean distance between BC i and RGC k. The pooling standard deviation, $\sigma_p$, was fixed so as to match the spatial RF of the observed cells (see section 3.1.2).
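A minimal sketch of these pooling matrices, assuming for illustration that cells of each type sit on the same n × n lattice with unit spacing (the actual positions and $\sigma_p$ are fitted in section 3.1.2):

```python
import numpy as np

# Gaussian pooling weights of equation 2.7 on an n x n lattice with unit
# spacing (illustrative geometry, not the fitted cell positions).

def pooling_matrix(n, w, sigma_p):
    xs, ys = np.meshgrid(np.arange(n), np.arange(n))
    pos = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    d2 = ((pos[:, None, :] - pos[None, :, :])**2).sum(-1)   # squared distances d^2(., .)
    return w * np.exp(-d2 / (2 * sigma_p**2)) / (np.sqrt(2 * np.pi) * sigma_p)

W_GB = pooling_matrix(5, w=1.0, sigma_p=1.5)    # excitatory pooling: w_GB > 0
W_GA = pooling_matrix(5, w=-0.5, sigma_p=1.5)   # inhibitory pooling: w_GA < 0
```

The sign of the prefactor w encodes the excitatory ($w_{GB} > 0$) versus inhibitory ($w_{GA} < 0$) nature of the pooled input.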

In this context, we can further understand the effect of CNO on the model. We assume, for simplicity, that the lattice is large enough that we can neglect boundary effects, so that the characteristic times of BCs, ACs, and RGCs and their rest states, noted $V_B^*$, $V_A^*$, $V_G^*$, are uniform in space (the general case is discussed in section A.2). We set $M_{GB} = \sum_{i=1}^{N} W_{G_k B_i}$ (resp. $M_{GA} = \sum_{j=1}^{N} W_{G_k A_j}$). Strictly speaking, these numbers depend on k because of boundary conditions, but we ignore this dependence here.

The characteristic times are then given by
$$\begin{aligned}
\tau_B &= \frac{\tau_L}{1 - 2d\, w^-\, \tau_L\, \frac{V_A^* - \theta_A}{E_A}}, \qquad
\tau_A = \frac{\tau_L}{1 + \tau_L\, \frac{\zeta_A}{E_{CNO}^A} + 2d\, w^+\, \tau_L\, \frac{V_B^* - \theta_B}{E_B}},\\
\tau_G &= \frac{\tau_L}{1 + \tau_L\, \frac{\zeta_G}{E_{CNO}^G} + \tau_L\, M_{GB}\, \frac{V_B^* - \theta_B}{E_B} + \tau_L\, M_{GA}\, \frac{V_A^* - \theta_A}{E_A}},
\end{aligned}$$
(2.8)
where d is the lattice dimension, $\tau_L$ the leak characteristic time, and $E_B$ and $E_A$ respectively the reversal potentials of the synaptic connections from BCs to ACs and from ACs to BCs (see sections A.1 and A.2 for details). The rest states are given by
$$\begin{aligned}
V_B^* &= \frac{4 d^2 \tau_A \tau_B\, w^- w^+\, \theta_B - 2 d\, \tau_A \tau_B\, w^-\, \zeta_A + 2 d\, \tau_B\, w^-\, \theta_A}{1 + 4 d^2 \tau_A \tau_B\, w^- w^+},\\
V_A^* &= \frac{4 d^2 \tau_A \tau_B\, w^- w^+\, \theta_A - 2 d\, \tau_A\, w^+\, \theta_B + \tau_A\, \zeta_A}{1 + 4 d^2 \tau_A \tau_B\, w^- w^+},\\
V_G^* &= \tau_G \left[ M_{GB} \left( V_B^* - \theta_B \right) + M_{GA} \left( V_A^* - \theta_A \right) + \zeta_G \right].
\end{aligned}$$
(2.9)
Equations 2.8 and 2.9 hold if $V_B^* > \theta_B$ and $V_A^* > \theta_A$, that is, if the rest state is not rectified. This entails conditions on $\zeta_A$ that we computed explicitly; as these computations are too lengthy to be included in the article, they are available at https://team.inria.fr/biovision/mathematicalderivationscno/. In particular, for a positive $\zeta_A$ (excitatory CNO), having $V_B^* > \theta_B$ and $V_A^* > \theta_A$ requires that $\theta_B < 0$ and is compatible with $\theta_A = 0$, a condition that we assume from now on.
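The closed forms of equation 2.9 can be cross-checked numerically. The sketch below (illustrative parameter values chosen so that the rest state is not rectified, with $\theta_B < 0$ and $\theta_A = 0$; the characteristic times are treated here as fixed constants) compares $V_B^*$ and $V_A^*$ with the fixed point obtained directly from the stationarity of equation 2.3:

```python
import numpy as np

# Check the rest states of equation 2.9 against the fixed point of the
# non-rectified nearest-neighbor dynamics (illustrative parameters).

d, tau_A, tau_B = 2, 0.1, 0.1
w_m, w_p = 0.5, 0.5                      # w^-, w^+
theta_B, theta_A, zeta_A = -0.05, 0.0, 0.1

# Closed forms, equation 2.9.
den = 1 + 4 * d**2 * tau_A * tau_B * w_m * w_p
VB_star = (4 * d**2 * tau_A * tau_B * w_m * w_p * theta_B
           - 2 * d * tau_A * tau_B * w_m * zeta_A
           + 2 * d * tau_B * w_m * theta_A) / den
VA_star = (4 * d**2 * tau_A * tau_B * w_m * w_p * theta_A
           - 2 * d * tau_A * w_p * theta_B + tau_A * zeta_A) / den

# Same fixed point from the stationarity of equation 2.3:
#   V_B/tau_B + 2d w^- (V_A - theta_A) = 0
#  -2d w^+ (V_B - theta_B) + V_A/tau_A - zeta_A = 0
M = np.array([[1 / tau_B, 2 * d * w_m],
              [-2 * d * w_p, 1 / tau_A]])
rhs = np.array([2 * d * w_m * theta_A, zeta_A - 2 * d * w_p * theta_B])
VB_num, VA_num = np.linalg.solve(M, rhs)
```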

On the same web page, we also address how $\tau_B, \tau_A, V_B^*, V_A^*$ vary with $\zeta_A$. We found that $\frac{\partial V_B^*}{\partial \zeta_A} = -\frac{2d\, \tau_A \tau_B\, w^-}{1 + 4d^2 \tau_A \tau_B\, w^- w^+} \le 0$, meaning that BCs get more hyperpolarized as the excitatory effect of CNO on ACs increases, while $\frac{\partial V_A^*}{\partial \zeta_A} = \frac{\tau_A}{1 + 4d^2 \tau_A \tau_B\, w^- w^+} \ge 0$ (ACs get more depolarized as $\zeta_A$ increases). This is what one qualitatively expects, but note that our modeling is also quantitative. In strong contrast, the computation of $\frac{\partial \tau_B}{\partial \zeta_A}$ and $\frac{\partial \tau_A}{\partial \zeta_A}$ reveals that the sign of these derivatives depends on specific domains in the space of parameters $w^-, w^+, \theta_B$, with nonlinear frontiers. That is, even for this reduced model, the behavior of the characteristic times as CNO increases depends on the network parameters in a nontrivial way. There are actually domains where these characteristic times increase with $\zeta_A$.

Thus, while a quick argument would predict that the characteristic time of ACs should decrease as $\zeta_A$ increases, because the membrane conductance increases, the opposite can happen as well. This is because, as revealed by equation 2.8, the characteristic time $\tau_A$ depends on the rest states (see equation 2.9), which themselves depend on $\zeta_A$ via the characteristic times $\tau_A, \tau_B$. This results in a two-dimensional implicit nonlinear system, studied at https://team.inria.fr/biovision/mathematicalderivationscno/.

As explained in section 1, our main goal is to confront this modeling with experiments and extract general statements from it. One major difficulty, however, is that we cannot fine-tune the parameter $\zeta_A$ (the CNO conductance) in experiments because it is not possible to establish a CNO dose-response curve of the changes in retinal activity. Therefore, even with these modeling simplifications, it is not possible to fit the model to experiments through the nonlinear equations 2.8 and 2.9. When fitting experimental results, we instead computed the characteristic integration times of cells in control (CTL) and CNO conditions independently (see section 3.1.2). As we cannot access the rest states of BCs and ACs, we fixed the thresholds $\theta_A, \theta_B$ to zero. (Mathematically, $\theta_B$ should be strictly negative, but we can consider a very small absolute value.) Thus, the rest states, equation 2.9, vanish in the absence of CNO. Nevertheless, in experiments one can observe a small residual polarization, which changes in the presence of CNO. That is why we added the polarization parameter $b_0$ appearing in equation 2.2, a drastic but necessary simplification of the entangled equations 2.9.

Most of the analysis that follows considers that no rectification takes place, even under light stimulation, so that we essentially consider a linear model. This is supported by Baccus et al. (2008), who showed that the voltage of bipolar cells is essentially a linear function of the stimulus for white noise. For a more general analysis dealing with rectification, see the appendix as well as section 4.

2.1.5  Experimental Setup

In our experiments, excitatory DREADDs (hM3Dq) were activated using CNO on RGCs and ACs co-expressing a certain gene (Scnn1a or Grik4), triggering calcium release from organelles and thus an increase in the intracellular concentration of free calcium. This resulted in membrane depolarization and higher neuronal excitability. Our experiments suggested that subclasses of ACs and RGCs could be simultaneously sensitive to CNO, but we did not observe any evidence of an effect on BCs.

Full experimental details can be found in our recent publication (Hilgen et al., 2022). All experimental procedures were approved by the ethics committee at Newcastle University and carried out in accordance with the guidelines of the U.K. Home Office, under the control of the Animals (Scientific Procedures) Act 1986. Recordings were performed on the BioCamX platform with high-density multielectrode array (HD-MEA) arena chips (3Brain GmbH, Lanquart, Switzerland), integrating 4096 square microelectrodes over a 2.67 × 2.67 mm² area, aligned in a square grid with 42 μm spacing. Light stimuli were projected onto the retina using an LED projector; briefly, the projector irradiance was attenuated to mesopic light levels using neutral-density filters.

In this section we present the theoretical and numerical results based on our retina model. We provide only the main conclusions of the mathematical derivations, which are presented in detail in the appendix.

3.1  Model Fitting of Ganglion Cell Receptive Fields Characterized from Experimental Data

RGC responses emanate from a dynamic balance of synaptic excitation and inhibition originating from the interactions of BCs and ACs. We believe that such network connectivity gives rise to various response patterns, and we show that our model can capture these joint effects by providing an analytic form of the RF of the cells. As we demonstrate, this computation provides an algorithmic way to fit the model parameters to the light responses recorded from mouse RGCs. One can then infer the possible behavior of the ACs and BCs leading to a given RGC response, even though we do not measure them experimentally.

3.1.1  Mathematical Form of the RF of Retinal Cells

The results that follow hold for all cell types. Thus, we label cells with a generic index $\alpha = 1, \dots, 3N$: BCs have indices $\alpha = 1, \dots, N$; ACs, $\alpha = N+1, \dots, 2N$; RGCs, $\alpha = 2N+1, \dots, 3N$; and we write $X_\alpha$ for the voltage of cell $\alpha$.

The time evolution of the dynamical system in equation 2.3 is controlled by a matrix, L, called the transport operator and explicitly written in section A.3. L depends on the connectivity matrices $W_{BA}, W_{AB}, W_{GA}, W_{GB}$ and on all the parameters controlling the dynamics. The form of L also depends on the set of rectified cells. In the following, we assume that cells are not rectified; that is, the hyperpolarized BCs do not reach the rectification threshold (the rectified case is discussed in section 4.5 and in the appendix). Consequently, the dynamical system, equation 2.3, is linear.

In this case, the eigenvalues $\lambda_\beta$, $\beta = 1, \dots, 3N$, and the eigenvectors $P_\beta$ of L characterize the evolution of the cells’ voltages. We note P the matrix transforming L into diagonal form (the columns of P are the eigenvectors $P_\beta$) and $P^{-1}$ its inverse.

In this context, we show in section A.3.3 that the voltage of a cell with index $\alpha$ is the sum of four terms:
$$X_\alpha(t) = V_\alpha^{\mathrm{drive}}(t) + E_\alpha^{\mathrm{drive}}(t) + E_\alpha^{(CNO_A)} + E_\alpha^{(CNO_G)}, \qquad \alpha = 1, \dots, 3N.$$
(3.1)

Stimulus drive. The first term, $V_\alpha^{\mathrm{drive}}(t)$, corresponds to equation 2.1 and is nonzero for BCs only. It corresponds to the BCs’ response in the absence of the ACs’ network.

CNO effects. The terms
$$E_\alpha^{(CNO_A)} = \zeta_A \sum_{\beta=1}^{3N} \sum_{\gamma=N+1}^{2N} \frac{P_{\alpha\beta}\, P^{-1}_{\beta\gamma}}{\lambda_\beta}, \qquad \alpha = N+1, \dots, 2N,$$
(3.2)
and
$$E_\alpha^{(CNO_G)} = \zeta_G\, \tau_{G_\alpha}, \qquad \alpha = 2N+1, \dots, 3N,$$
(3.3)
correspond, respectively, to the impact of CNO on the voltages of ACs and RGCs. Important nonlinear effects are hidden in the terms $\frac{P_{\alpha\beta}\, P^{-1}_{\beta\gamma}}{\lambda_\beta}$ (see equation 3.2), which themselves depend on the characteristic times (see equation 2.8). Thus, the polarization level of ACs and RGCs is not only fixed by the direct effect of CNO on the cell but is also tuned by entangled network effects.
Stimulus-network interaction term. In equation 3.1, the term
$$E_\alpha^{\mathrm{drive}}(t) = \sum_{\beta=1}^{3N} \sum_{\gamma=1}^{N} P_{\alpha\beta}\, P^{-1}_{\beta\gamma}\, \varpi_{\beta\gamma} \int_{-\infty}^{t} e^{\lambda_\beta (t - s)}\, V_\gamma^{\mathrm{drive}}(s)\, ds, \qquad \alpha = 1, \dots, 3N,$$
(3.4)
where $\varpi_{\beta\gamma} = \frac{1}{\tau_{B_\gamma}} + \lambda_\beta$. This term corresponds to the indirect effect, via the network connectivity, of the stimulus drive on (1) BCs, for $\alpha = 1, \dots, N$; (2) ACs, for $\alpha = N+1, \dots, 2N$; and (3) RGCs, for $\alpha = 2N+1, \dots, 3N$. Thus, this equation describes the response of all cells to the stimulus. In particular, it tells us how the direct input, equation 2.1, to BCs is modulated by the concerted activity of BCs and ACs.

Mathematically, equation 3.4 can be interpreted as follows. The drive (index $\gamma = 1, \dots, N$) triggers the eigenmodes $\beta = 1, \dots, 3N$ of L, with a weight proportional to $P^{-1}_{\beta\gamma}$. The mode $\beta$, in turn, acts on the voltage of cell $\alpha = 1, \dots, 3N$ with a weight proportional to $P_{\alpha\beta}$. The time dependence and the effect of the drive are controlled by the integral $\int_{-\infty}^{t} e^{\lambda_\beta (t-s)}\, V_\gamma^{\mathrm{drive}}(s)\, ds$. The eigenvalues $\lambda_\beta$ introduce characteristic timescales in the response: an exponential decay rate for the real part and a frequency of oscillations for the imaginary part, if any. The right eigenvectors (columns of P) and the left eigenvectors (rows of $P^{-1}$) introduce characteristic space scales in the response. The easiest case to figure this out is the nearest-neighbors connectivity considered in section A.3.5. Here, eigenvectors are labeled by an index n that actually corresponds to a wave vector (see equation A.28), the inverse of a space scale. In this context, the eigenmode with n = 1 has the largest space scale (the size of the lattice), whereas the eigenmode with n = N corresponds to the minimum space scale (the distance between two nodes). Thus, equation 3.4 illustrates that the response to a spatiotemporal stimulus is a combination of multiple timescales and space scales, leading to interesting phenomena such as resonances, as commented below, or waves of anticipation, as studied in Souihel and Cessac (2021).
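The transition from real to complex eigenvalues can be illustrated on the smallest possible network: one BC reciprocally coupled to one AC, giving a 2 × 2 block of the transport operator (illustrative parameter values; the full operator L couples 3N cells):

```python
import numpy as np

# Eigenvalues of a single reciprocally coupled BC-AC pair (a 2 x 2 block of
# the transport operator, illustrative values). They become complex - hence
# the response oscillates - once 4 w^- w^+ > (1/tau_B - 1/tau_A)^2.

def bc_ac_eigenvalues(tau_B, tau_A, w_m, w_p):
    L = np.array([[-1 / tau_B, -w_m],    # AC inhibits BC
                  [w_p, -1 / tau_A]])    # BC excites AC
    return np.linalg.eigvals(L)

weak = bc_ac_eigenvalues(0.1, 0.2, w_m=1.0, w_p=1.0)      # 4 w-w+ < (10 - 5)^2: real
strong = bc_ac_eigenvalues(0.1, 0.2, w_m=10.0, w_p=10.0)  # 4 w-w+ > (10 - 5)^2: complex
```

For this 2 × 2 block, the eigenvalues are complex exactly when $4 w^- w^+ > (1/\tau_B - 1/\tau_A)^2$, so strengthening the reciprocal BC-AC coupling eventually produces damped oscillations in the response.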

The receptive field of all cell types. Introducing the function $e_\beta(t) \equiv e^{\lambda_\beta t} H(t)$, so that $\int_{-\infty}^{t} e^{\lambda_\beta (t-s)}\, V_\gamma^{\mathrm{drive}}(s)\, ds \equiv \left[e_\beta \overset{t}{*} K_{B_T} \overset{t}{*} K_{B_{S_\gamma}} \overset{x,y}{*} S\right](t)$, and using the separable kernel form, equation 2.2, the response, equation 3.4, reads
$$E_\alpha^{\mathrm{drive}}(t) = \left[K_\alpha \overset{x,y,t}{*} S\right](t),$$
(3.5)
with
$$K_\alpha(x, y, t) = \sum_{\beta=1}^{3N} P_{\alpha\beta}\, U_\beta(t) \times \sum_{\gamma=1}^{N} P^{-1}_{\beta\gamma}\, \varpi_{\beta\gamma}\, K_{B_{S_\gamma}}(x, y),$$
(3.6)
where we have set $U_\beta(t) \equiv \left[e_\beta \overset{t}{*} K_{B_T}\right](t)$. The response of cell $\alpha$ is thus expressed as a convolution of the stimulus with a spatiotemporal kernel $K_\alpha(x, y, t)$, as expected from a linear response. Nevertheless, it is important to point out that expression 3.6 holds for all cell types, not only RGCs, and that it contains the network effects induced by the network of BCs-ACs. Thus, for $\alpha = 1, \dots, N$, equation 3.6 characterizes the indirect (network-induced) response of BCs to the stimulus drive, in addition to the direct response, equation 2.1. For $\alpha = N+1, \dots, 2N$, equation 3.6 represents the RF of ACs. Finally, for $\alpha = 2N+1, \dots, 3N$, we obtain the RF of RGCs. We focus on this last case from now on, essentially because this predicted RF can be compared to experimental RFs, whereas we have no experimental access to the BCs’ or ACs’ RFs.
The receptive field of RGCs. Henceforth, we write $K_\alpha(t)$ as $K_{G_\alpha}(t)$ to make explicit that we are dealing with RGCs. The RF of RGCs can often be written as a product of a space-dependent term and a time-dependent term (separability). In our case, this would correspond to writing $K_{G_\alpha}(x,y,t)$ as a product $K_{G_\alpha}(x,y,t) = K_{G_T}^{\alpha}(t) \times K_{G_S}^{\alpha}(x,y)$, where
$$K_{G_T}^{\alpha}(t) = \sum_{\beta=1}^{3N} P_{\alpha\beta}\, U_\beta(t)$$
(3.7)
is a temporal kernel and
$$K_{G_S}^{\alpha}(x,y) = \sum_{\gamma=1}^{N} P^{-1}_{\beta\gamma}\, \varpi_{\beta\gamma}\, K_{BS_\gamma}(x,y)$$
(3.8)
is a spatial kernel.

This separation is not strictly possible in equation 3.6 because the term $\sum_{\gamma=1}^{N} P^{-1}_{\beta\gamma}\, \varpi_{\beta\gamma}\, K_{BS_\gamma}(x,y)$ depends on $\beta$. Thus, from this computation, the RF of RGCs is not expected to be separable in general. However, it is important to remark that the stimulus used in experiments for determining RFs, white noise, is uniform in probability in both the space and time domains. This induces an effective separation of the response that might not hold for more complex stimuli (e.g., moving objects). As the experimental RFs considered here have been obtained from white noise, we will assume separability from now on.

The spatial part of the RGCs' RF. Equation 3.8 appears as an overlap of the spatial RFs of BCs. In naive overlap approximations, the spatial RFs of BCs are simply summed with uniform weights. Here, in contrast, the contribution of each BC is constrained by ACs' lateral connectivity via the term $P^{-1}_{\beta\gamma}\, \varpi_{\beta\gamma}$. In particular, the RF in equation 3.8 is not necessarily circular even if the BCs' RFs are, and the center of the RGC's RF is not necessarily at the barycenter of the connected BCs' RFs. This holds, for example, if AC connectivity is not invariant under rotation.

The temporal part of the RGC’s RF, equation 3.7. As we consider monophasic temporal kernels KBT of BCs with the form of equation 2.2, we have
$$U_\beta(t) = \frac{A_0}{2\,\tau_{RF}^2\,(\lambda_\beta \tau_{RF} + 1)^3} \left[\, 2\tau_{RF}^2\, e^{\lambda_\beta t} - e^{-\frac{t}{\tau_{RF}}} \left( t^2 (\lambda_\beta \tau_{RF} + 1)^2 + 2\, t\, \tau_{RF}\, (\lambda_\beta \tau_{RF} + 1) + 2\tau_{RF}^2 \right) \right] H(t).$$
(3.9)

$U_\beta(t)$, and thereby $K_{G_T}^{\alpha}(t)$, is the temporal part of the RGC receptive field; its shape changes with the eigenvalues of $\mathcal{L}$, which are themselves controlled by the model parameters. A striking effect arises when some eigenvalues become complex, leading to temporal oscillations of $U_\beta$. This remark is at the core of the analysis exposed in section 3.2.2.
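As a numerical check on the closed form of equation 3.9, the sketch below (toy values for $A_0$, $\tau_{RF}$, and a single eigenvalue $\lambda_\beta$; the gamma-like kernel $K_{BT}(t) = A_0\, t^2 e^{-t/\tau_{RF}} / (2\tau_{RF}^3)\, H(t)$ is an assumption chosen to be consistent with the Fourier factor $1/(1+i\omega\tau_{RF})^3$ of equation 3.10) compares the formula with a direct numerical convolution of $e_\beta$ with $K_{BT}$:

```python
import numpy as np

A0, tau = 1.0, 0.03          # OPL gain and tau_RF (toy values)
lam = -20.0 + 40.0j          # one eigenvalue lambda_beta of L (toy value, complex)

def K_BT(t):
    # Monophasic OPL kernel whose Fourier transform is A0 / (1 + i omega tau)^3 (assumed form)
    return A0 * t**2 / (2 * tau**3) * np.exp(-t / tau)

def U_closed(t):
    # Closed form for U_beta(t) = (e_beta *_t K_BT)(t), equation 3.9, for t >= 0
    a = lam * tau + 1
    return (A0 / (2 * tau**2 * a**3)) * (
        2 * tau**2 * np.exp(lam * t)
        - np.exp(-t / tau) * (t**2 * a**2 + 2 * t * tau * a + 2 * tau**2)
    )

def U_num(t, dt=1e-5):
    # Direct numerical convolution: int_0^t e^{lam (t - s)} K_BT(s) ds
    s = np.arange(0.0, t, dt)
    return np.sum(np.exp(lam * (t - s)) * K_BT(s)) * dt

for t in (0.01, 0.05, 0.2):
    print(abs(U_num(t) - U_closed(t)))   # all small
```

The two computations agree to the accuracy of the Riemann sum, for real or complex $\lambda_\beta$.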

3.1.2  Fitting the RFs of Ganglion Cells

Experimentally, RGCs' RFs were reconstructed from the spike-triggered average (STA) in response to shifted white noise (SWN). The SWN, introduced by Hilgen et al. (2017) and Pamplona et al. (2022), is a variant of the classical "white noise" stimulus in which the images of random black and white squares are, in addition, given random spatial shifts, improving the resolution of the STA. This allowed us to fit the spatial and temporal parts of the RF. Note, however, that it is difficult to obtain the surround of the spatial part. From the center part of the spatial RF, we fixed the parameter $\sigma_p$ in the gaussian pooling, equation 2.7. More extended results can be found in the thesis of E. Kartsaki (Kartsaki, 2022), where we display experimental spatial RFs (see, e.g., section 3.4.1 of the thesis).

We mainly focused on the temporal part of the RF. The temporal RF estimation resulted in traces of duration 600 ms, sampled at 33/4 = 8.25 ms. In order to assess the validity of the model, we fitted these time traces in CTL and CNO conditions. We have a database of 117 cells sensitive to CNO, exhibiting an increase or decrease in firing rate beyond a certain threshold.

These reconstructed temporal RFs provide the linear response of an RGC to a spatially uniform flashed stimulus, mathematically corresponding to a Dirac distribution. As we wanted to compare our model’s output, the RGC voltage, VG, to this experimental RF, computed from firing rates, we ignored the effect of nonlinearities and assumed that the experimental response is proportional to the RGC voltage. We considered a one-dimensional model (chain) with N=60 cells of each type where the cells at the boundaries have a fixed zero voltage (zero boundary conditions), corresponding to the reference rest state. To reduce the effect of boundaries, we made the fit for the RGC in the center of the network.

We performed simulations of the model, equation 2.3, using a spatially uniform Dirac pulse as the stimulus and computed the cell responses in two ways: numerical integration of the differential equation 2.3 (green traces labeled "Sim" in Figure 3) and analytic computation, equation 3.6 (black traces labeled "Th" in Figure 3). We observe that these two traces are always identical, confirming the correctness of the simulation scheme.

Figure 3:

Summary panel. Panel a illustrates the simulated responses of the OPL (term Vα(drive)(t) in 2.1, black trace), BCs (red trace), ACs (blue trace) connected to the RGC located at the center of the network. Panel b shows the simulated response of the RGC (green “Sim,” and black “Th” trace) versus the experimental temporal STA of this cell (orange dots). The green trace (Sim) is the result of a numerical simulation of the dynamical system, equation 2.3, under a spatially uniform flashed stimulus, whereas the black trace (“Th”) is the result given by the analytic expression, equation 3.6. Panel c shows the spatiotemporal RF of the RGC, time in abscissa, and space in ordinate. Panel d displays the power spectrum of the time response, experimental (orange dots), and theoretical (black lines).


We recall that the parameters shaping this response are $A_0$, the intensity of the OPL input; $b_0$, controlling a residual depolarization/hyperpolarization observed in experimental responses; $w_+$, controlling the synaptic intensity from BCs to ACs; $w_-$, controlling the synaptic intensity from ACs to BCs; $w_{GB}$, controlling the synaptic intensity from BCs to RGCs; $w_{GA}$, controlling the synaptic intensity from ACs to RGCs; $\tau_B$, the characteristic membrane timescale of BCs; $\tau_A$, the characteristic membrane timescale of ACs; $\tau_{RF}$, the characteristic timescale of the OPL drive; and $\tau_G$, the characteristic membrane timescale of RGCs.

We denote by $\eta$ the set of all parameters shaping the response; $\eta$ is therefore a point in a 10-dimensional space.

The fit was then done by gradient descent to minimize the $L^2$-distance $D_2(\eta)$ between the experimental temporal STA trace, STA$(s)$, and the theoretical temporal RF (see equation 3.7), which depends on $\eta$. The minimization is done by iterating the differential equation
$$\frac{d\eta}{du} = -\nabla_\eta D_2.$$
The gradient $\nabla_\eta D_2$ involves $\nabla_\eta K_{G_T}^{\alpha}(s)$, which can be computed either explicitly, when we have the analytic form of the RF, or numerically. Note that having the analytic form gives better results, especially because it allows second-order corrections (Hessian). Our minimization is done only on the temporal trace. Nevertheless, the simulation allows us to draw the corresponding spatial RF.
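A minimal sketch of this fitting loop, on a toy damped-oscillation kernel with three hypothetical parameters (not the actual 10-parameter model), using a numerical rather than analytic gradient:

```python
import numpy as np

t = np.linspace(0.0, 0.6, 73)                 # ~600 ms sampled at ~8.25 ms, as in the data

def K(eta):
    # Toy stand-in for the temporal RF: damped oscillation, eta = (A, decay, omega)
    A, decay, omega = eta
    return A * np.exp(-decay * t) * np.sin(omega * t)

eta_true = np.array([1.0, 8.0, 25.0])         # hypothetical "true" parameters
sta = K(eta_true)                             # synthetic stand-in for the experimental STA

def D2(eta):
    # Squared L2 distance between the model RF and the STA trace
    return np.sum((K(eta) - sta) ** 2)

def grad_D2(eta, h=1e-6):
    # Central-difference estimate of the gradient of D2
    g = np.zeros_like(eta)
    for i in range(len(eta)):
        e = np.zeros_like(eta)
        e[i] = h
        g[i] = (D2(eta + e) - D2(eta - e)) / (2 * h)
    return g

eta = np.array([0.8, 6.0, 24.0])              # initial guess close to the optimum
for _ in range(20000):                        # Euler iteration of d eta / du = -grad D2
    eta = eta - 0.02 * grad_D2(eta)
print(eta, D2(eta))                           # eta near eta_true, D2 near zero
```

The iteration is the Euler discretization of the gradient flow above; with the analytic form of the RF the numerical gradient would be replaced by the exact one, allowing Hessian-based corrections.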

Although the experimental temporal RFs were quite diverse among cells, we were able to fit all of them with good accuracy (the final error was smaller than 1%). We rejected fits where some parameters became unrealistic (e.g., $\tau_A$ larger than 1 s or $w_- > 1$ kHz); about 4% of the fits were rejected. An example of a fit is shown in Figure 3.

In this synthetic representation, the simulated responses of the OPL, BCs, and ACs connected to the RGC located at the center of the network appear in the top left panel of the figure. We also show the simulated response of the RGC versus the experimental temporal STA of this cell (bottom left). On the top right panel, we see the numerical spatiotemporal RF using a color map. Finally, at the bottom right, we show the power spectrum (modulus of the Fourier transform) of the temporal response. As developed below, this spectrum provides important information on the cell response.

RGCs’ RFs in the presence of CNO could be fit equally well with our model. All fitted cells, in CTL and CNO conditions, can be found on the web: https://gitlab.inria.fr/biovision/dreadds. The C-code used for simulations can also be found there.

3.2  Network Connectivity Shapes the Receptive Fields of Ganglion Cells

Throughout our analysis of the experimental data, we observed great variability in the effect of DREADD activation with CNO on the RGCs’ responses. In this section, we develop the consequences of our mathematical analysis in an attempt to explain this observed diversity of CNO effects on RF features. We propose here an explanation purely based on network effects. There are certainly other possible interpretations based on single cell characteristics such as nonlinear effects due to changes in conductance, discussed in section 4. The main advantages of our analysis are that it determines network effects on the RGCs’ RF, controlled by two main parameters, and that it predicts the response to more complex stimuli than full field flashes.

3.2.1  Two Main Parameters Constrain the RF Shape of Ganglion Cells

The entangled feedback effects of ACs-BCs can be characterized by two dimensionless parameters. The first one, $r = \frac{\tau_A}{\tau_B}$, characterizes the ratio between the ACs' and BCs' membrane integration times. The second, $s = \frac{w_-}{w_+}$, characterizes the ratio between the ACs $\to$ BCs interaction ($w_-$) and the BCs $\to$ ACs interaction ($w_+$). Of course, the other parameters play an important role when fitting a specific RF. But what we argue here is that the shape of the RF and its space-time scaling essentially depend on the values of $r, s$.

The theoretical explanation is that the RF of an RGC is given by formula 3.5, a cascade of convolutions involving the BC response to the stimulus (OPL input) and the network effects, expressed in terms of the eigenvalues $\lambda_\beta$ and eigenvector components $P_{\alpha\beta}$ appearing in equation 3.6. As explained in section A.3.4, these eigenvalues and eigenvectors are essentially tuned by the two parameters $r, s$. There is also a dependence on other parameters, discussed in section 3.2.3. Depending on the location in the $(r,s)$ plane, some eigenvalues are real and some are complex. All eigenvalues have a negative real part, ensuring the stability of the linear system. Imaginary parts of eigenvalues introduce oscillations in the response, whereas the real parts fix characteristic decay times. The RF formula 3.6, involving a sum of exponentials $e^{\lambda_\beta t}$, mixes these effects. As we consider a monophasic OPL response here, the temporal RF of RGCs is monophasic when all eigenvalues are real. In contrast, oscillations in this RF can appear when some eigenvalues are complex. The shape of this RF depends, in more detail, on the period of oscillations, set by the imaginary parts of the complex eigenvalues, and on the characteristic decay times, set by the real parts.

When moving in the $(r,s)$ plane, eigenvalue $n$ switches from real to a complex conjugate pair when crossing a critical line, depending on $n$, whose equation (A.27) is given in section A.3.4. There are $2N$ eigenvalues associated with the BCs-ACs network, each determining a critical line in the $(r,s)$ plane. The set of all these lines is what we call the skeleton. An example of this skeleton is shown in Figure 4, where we display only some of the critical lines. These lines delimit colored regions corresponding to the number of complex eigenvalues (see the color bar legend on the right of the figure).
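The mechanism behind these critical lines can be illustrated on a drastically reduced toy version: a single BC-AC pair, whose $2 \times 2$ linearization has eigenvalues that turn from real into a complex conjugate pair once the feedback product $w_+ w_-$ exceeds the squared leak mismatch (a sketch under this reduction, not equation A.27 itself):

```python
import numpy as np

tau_B, w_plus = 0.03, 8.5        # tau_B (s) and w_+ (Hz): reference values quoted in the text

def n_complex(r, s):
    # Count complex eigenvalues of a single BC-AC pair (toy 2x2 reduction of L)
    tau_A, w_minus = r * tau_B, s * w_plus
    L2 = np.array([[-1.0 / tau_B, -w_minus],   # BC: leak, inhibited by the AC
                   [w_plus, -1.0 / tau_A]])    # AC: excited by the BC, leak
    lam = np.linalg.eigvals(L2)
    return int(np.sum(np.abs(lam.imag) > 1e-12))

# Crossing the critical line s_c = (1/tau_B - 1/tau_A)^2 / (4 w_+^2):
# below it both eigenvalues are real (no oscillation), above it they are complex (damped oscillation).
print(n_complex(30.0, 1.0), n_complex(30.0, 10.0))   # 0 2
```

In the full model each coupling eigenmode $n$ produces its own such line, which is what builds the skeleton.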

Figure 4:

(Top left) The skeleton. Map in the (r,s) plane showing the skeleton of the eigenvalue structure (white lines). When crossing line n, eigenvalue n switches from real to complex. The color corresponds to the number of complex eigenvalues (see the color bars on the right). (Top right) The RFs' map. This figure summarizes how the entangled effects of the BCs-ACs network act on the RF of RGCs. In this map in the (r,s) plane, we distinguish three main regions (see the text for their determination): monophasic, biphasic, and polyphasic. The points labeled A to D correspond to the temporal RFs plotted in the bottom panels. We use the same representation as in Figure 3.


3.2.2  The RF’s Map

The existence of this skeleton determines regions in the $(r,s)$ plane with specific shapes for the temporal RF, given by equation 3.7, a linear combination of the functions $U_\beta(t)$ given by equation 3.9. The Fourier transform $\hat{U}_\beta(\omega)$ of $U_\beta(t)$ is
$$\hat{U}_\beta(\omega) = \frac{1}{(1 + i\omega\tau_{RF})^3}\, \frac{1}{i\omega - \lambda_\beta}.$$
(3.10)
Thus, the Fourier transform of equation 3.7 is
$$\hat{K}_{G_T}^{\alpha}(\omega) = \sum_{\beta=1}^{3N} P_{\alpha\beta}\, \hat{U}_\beta(\omega),$$
(3.11)
a linear combination of rational fractions. Extending to complex $\omega$, $\hat{U}_\beta(\omega)$ has two poles, $\omega = \frac{i}{\tau_{RF}}$ (of order 3) and $\omega = -i\lambda_\beta$, corresponding to complex resonances. The contributions of all these poles (for $\beta = 1 \ldots 3N$) are combined in equation 3.11 with weights $P_{\alpha\beta}$.

As we move in the $(r,s)$ plane, we notice the following. When $(r,s)$ are small, the eigenvalues are real and the matrix $P_{\alpha\beta}$ is close to diagonal. In this case, the dominant pole contribution in equation 3.11 is the pole $\omega = \frac{i}{\tau_{RF}}$, corresponding to the OPL contribution. The power spectrum of equation 3.11 then has a single peak centered at $\omega = 0$, corresponding to a monophasic response. For larger values of $r, s$, some eigenvalues become complex, giving potential additional peaks in the power spectrum. Actually, we observed two cases, which are mutually compatible. First, the central peak moves from $\omega = 0$ to a nonzero frequency. This corresponds to the appearance of an exponentially damped oscillation in the RF, giving a biphasic response. Second, secondary peaks may appear, leading to residual oscillations in addition to the main trend (monophasic or biphasic). This gives what we call a polyphasic response. Such residual oscillations were observed in our experiments and were relatively numerous (about 40% of cells). There are, of course, other hypotheses explaining these residual oscillations, but here we support the hypothesis that they are generated by a network effect. An example is given in Figure 3, where we observe, at the bottom left, residual oscillations after the main biphasic response and, at the bottom right, the power spectrum with a main peak not centered at zero and a secondary peak corresponding to the residual oscillations. Note that this secondary peak is observed in the experimental data, but we failed to reproduce it in the fits. This is further explained in section 4.
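This peak behavior can be reproduced directly from equation 3.10: for a real eigenvalue, the modulus $|\hat{U}_\beta(\omega)|$ is maximal at $\omega = 0$, whereas a sufficiently large imaginary part pushes the peak to a nonzero frequency, somewhat below $\mathrm{Im}(\lambda_\beta)$ because of the OPL low-pass factor (toy values for $\tau_{RF}$ and $\lambda_\beta$):

```python
import numpy as np

tau = 0.01                                # tau_RF (toy value, in seconds)
omega = np.linspace(0.0, 200.0, 2001)     # frequency axis (rad/s)

def spectrum(lam):
    # |U_beta(omega)| from equation 3.10 for a single eigenvalue lambda_beta
    U = 1.0 / ((1 + 1j * omega * tau) ** 3 * (1j * omega - lam))
    return np.abs(U)

peak_real = omega[np.argmax(spectrum(-20.0))]          # real eigenvalue: peak at omega = 0
peak_cplx = omega[np.argmax(spectrum(-20.0 + 80.0j))]  # complex eigenvalue: peak at nonzero omega
print(peak_real, peak_cplx)
```

The real-eigenvalue spectrum is monotonically decreasing (monophasic), while the complex one peaks at a nonzero frequency, the single-mode version of the biphasic signature.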

This analysis leads us to broadly decompose the $(r,s)$ plane into three regions corresponding to cells' response phases: monophasic, biphasic, and polyphasic. One switches from one phase to another when peaks in the power spectrum appear or disappear, driven by the spectrum of $\mathcal{L}$.

The corresponding phase diagram, obtained numerically, constitutes what we call the "RFs' map," shown in Figure 4 (top right). Four points, labeled A to D, are placed on this map, each representing a different cell response phase. For each point, we have plotted the RGC temporal RF as computed with the model (bottom panels). A more general view of what happens when moving along a specific pathway in this map can be found at https://team.inria.fr/biovision/cno_paper_supplementary/, where movies show how network effects shape the RGCs' RFs as $(r,s)$ vary. It is important to note that this map is a projection from a 10-dimensional space onto 2 dimensions. It has been obtained, from the skeleton, by fixing the other parameter values to the average values obtained from fits over the cell populations (see Figure 5).

Figure 5:

(Top) Repartition of fitted experimental cells in the RFs' map. Each point corresponds to an experimental cell. (Top left) CTL conditions. (Top right) CNO conditions. (Middle) Mean and standard deviation of τA (left), τB (center), and r (right), fitted from experiments, for genes Grik4 and Scnn1a, in CTL and CNO conditions. We have separated the estimation for OFF cells (blue), ON Cells (green), and all cells (red). (Bottom) Mean and standard deviation of w- (left), w+ (center), and s (right), fitted from experiments. The representation is the same as the previous row.


Figure 4 illustrates how the BCs-ACs network shapes the RF of an RGC by a subtle balance between the BCs $\to$ ACs and ACs $\to$ BCs interactions (parameter $s$) and the timescales of their responses (parameter $r$). For simplicity, we consider here ON BCs, but the explanation also holds for OFF BCs.

The OPL drive (black line) induces a depolarization of BCs (red line) within a timescale of order τB. This excites the connected ACs (blue line), with an intensity w+. The excitation of ACs hyperpolarizes BCs with an intensity w- within a timescale of order τA. RGCs receive a combination of excitatory and inhibitory inputs from their afferent circuit with respective weights wGB and wGA.

In the monophasic region, ACs respond on the same timescale as BCs. One observes for RGCs a monophasic response (panel A), whose intensity depends on the ratio between excitation, provided by BCs with weight $w_{GB}$, and inhibition, with weight $w_{GA}$. When moving to the biphasic region, ACs respond more slowly than BCs, leading to the biphasic response illustrated in panel B. When moving upward in the RFs' map (increasing $s$), this biphasic response becomes polyphasic, with oscillations, as illustrated in panel C. BCs start to rise due to the OPL drive, leading to a rise of ACs, slower than BCs, which in turn hyperpolarizes BCs. This leads to a decrease of the ACs' voltage and thereby to a new rise of BCs, which still respond to the OPL drive. This cycle can repeat several times, depending again on the parameters $r, s$. Note that the amplitude always decreases exponentially fast. The period of the observed oscillations and the characteristic damping time depend on the location in the map. Finally, panel D is the point at the intersection of the three phase regions; we show it for completeness. Note that increasing $s$ leads to a decrease of the RGCs' response. When $s$ is too high, the response becomes too weak to be observed experimentally.

3.2.3  Experimental Cells Spread in the RFs’ Map

To confront our theoretical insight with experiments, we placed the recorded cells in the RFs' map, as shown in Figure 5. That is, for each experimentally recorded RGC, we fit the model parameters $\eta$ as explained in section 3.1.2, thereby providing an estimation of $r, s$, independently in CTL and CNO conditions. This defines a virtual network, made of identical cells in each layer, where all RGCs respond like the experimental RGC. As mentioned above, the map is only a projection of $\eta$, which lives in a 10-dimensional space, onto the 2-dimensional plane $(r,s)$. Some parameters are linked: the mathematical analysis in section A.3.4 shows that the skeletons obtained for fixed values of $\tau_B, w_+$ can be extrapolated to other values $\tau'_B, w'_+$ by the simple rescaling $r' = r$, $s' = \left(\frac{w'_+ \tau'_B}{w_+ \tau_B}\right)^2 s$. Using this rescaling, the map of Figure 5 has been drawn for a specific value of $w_+ = 8.5$ Hz and $\tau_B = 30$ ms, corresponding to the mean values of these parameters averaged over the set of experimental cells (in CTL conditions). In this two-dimensional representation, extra information coming from the other parameters $\tau_G, w_{GB}, w_{GA}, A_0, b_0$ is lost.

Figure 5 (top left) shows the repartition of cells in the map in CTL conditions. A few of them are monophasic, but many are biphasic, with a significant proportion close to the polyphasic region and showing residual oscillations. Figure 5 (top right) shows the same cells in CNO conditions. The main observation is that CNO does not dramatically change the repartition of cells in the RFs' map. This is made more explicit in the middle and bottom panels of Figure 5, where we show the mean and standard deviation of the main model parameters, $\tau_A, \tau_B, w_-, w_+, b_0$, in CTL and CNO conditions, separating the two investigated genes, Grik4 and Scnn1a, and separating ON and OFF cells. These parameters are essentially constant, showing that there is no statistical trend induced by CNO. This is in agreement with our previous paper (Hilgen et al., 2022), where we showed that the Scnn1a and Grik4 groups include multiple anatomical cell types, and it suggests that our model and fitting do not introduce any obvious bias.

The situation is radically different when investigating the effect on individual cells. Indeed, the application of CNO makes some cells move their representative point from one region of the RFs' map to another, thereby drastically changing the cell's response. Two examples are shown in Figures 6 and 7. In Figure 6, the application of CNO induces a motion of the representative point in the RFs' map. As the cell is close to the boundary separating the monophasic from the biphasic phase, this motion dramatically affects the shape of the time response. Note that this motion looks tiny because it is plotted in log scale; in reality, it reflects a relatively large change in the properties of the ganglion cell. Actually, the RFs' map can be refined by plotting the value of the main period $T_1$ (corresponding to the main peak in the power spectrum), as shown in the top right panel. The dashed black lines correspond to the separation between the monophasic, biphasic, and polyphasic regions. Note that the switch from bi- to polyphasic corresponds to additional peaks in the spectrum that are not visible when considering only the period of the first peak, $T_1$ (this transition is seen in Figure 7, where the period of the second peak is shown). In CTL conditions, the cell is located in the green region, with a high $T_1$ of order 600 ms, at the limit of experimental resolution. With CNO, the cell switches to the gray region, where $T_1$ is of order 300 ms. The synthesis panels (same representation as in Figure 3) for CTL (left) and CNO (right) conditions are presented at the bottom of the figure.

Figure 6:

CNO may change the cell response. The upper left panel shows the representative point of a cell, labeled 832, in the RFs' map, in CTL (purple point) and CNO (red point) conditions, illustrating the displacement of the representative point when CNO is applied. The upper right panel shows the same motion, in the same (r,s) space, but considering here the main peak in the power spectrum, corresponding to a characteristic period T1. The dashed black lines correspond to the separation between the monophasic, biphasic, and polyphasic regions. One switches from a region with a high T1, of order 600 ms (green area), in CTL to a region where T1 is of order 300 ms (gray area) in CNO conditions. As shown in the bottom panel, this displacement corresponds to a switch from the monophasic to the biphasic region.

Figure 7:

CNO may change the cell response. (Top left) Application of CNO moves the representative point of the cell (here, label 521) from the polyphasic to the biphasic region, resulting in a change in the cell response (bottom panel). (Top right) Shows how the period of polyphasic oscillations depends on the location in the RFs' map.


In Figure 7, we show a motion inducing a switch from polyphasic to biphasic. Here, CNO drives the cell from the boundary of the polyphasic region into the biphasic region. The representation is the same as in Figure 6, except that the top right panel displays the period $T_2$ of the secondary peak in the power spectrum.

Remark.

The range of the parameters $r, s$ is very broad and seems stretched in terms of biological plausibility. In Figure 5, $s$ can go up to $s = 200$, which would mean that the ACs' inhibitory feedback is 200 times stronger than the BCs' excitation onto ACs, which looks quite implausible. In the same vein, we observe values of $r = 30$, corresponding to ACs having a 30-fold longer time constant than BCs, which might be questionable. These values must not be taken literally, as they result from several artifacts. First, as explained above, $s$ has been rescaled to match an RFs' map drawn from a set of reference parameters with, for example, $w_+ = 8.5$ Hz and $\tau_B = 30$ ms. In addition, these synaptic weights are actually effective interactions that mimic the effect of cell populations. These extreme values of $(r,s)$ therefore somewhat reflect the fact that we are dealing with interactions among multiple cell types in the inner retina.

In this article, we have investigated the role of AC-mediated lateral connectivity in the response of RGCs to visual stimuli. Our conjecture was that these responses are strongly constrained by such lateral connectivity. Based on the mathematical analysis of a network featuring interacting BCs, ACs, and RGCs, we were able to produce an analytic form for the spatiotemporal response (receptive field) of every cell type in the model. This finding has significant implications for the usefulness, identifiability (i.e., its parameters can be obtained from experimental data), and interpretability of the model. First, it provides an algorithmic way to fit the model parameters to the light responses recorded from mouse RGCs, using the analytical formula of the RF. This means that we are able not only to find the parameters that best fit the RGCs' responses but also to infer the possible behavior of the ACs and BCs leading to these responses, even though we do not measure them experimentally. Second, it provides an intuitive understanding of the role of the various model variables and highlights the impact of two phenomenological parameters (with a physical meaning) on the spatiotemporal response: (1) the intensity of the BCs-ACs interactions and (2) the characteristic timescale of these cells' responses. This can be summarized in the two-dimensional RFs' map, where one observes phases corresponding to different response modalities. We were able to validate these modeling results experimentally, based on the ability to pharmacologically modify the level of ACs' and RGCs' neural activity using pharmacogenetics (DREADD-CNO). We now address some caveats and potential extensions of this work.

4.1  Polyphasic Phase

Although we observe about 40% of polyphasic cells in the experimental plots (characterized by secondary peaks in the power spectrum), the model has difficulty fitting them properly. This is visible in Figure 5, where no cell lies within the polyphasic region, whereas secondary peaks are clearly visible in the experimental power spectra (see Figure 7). Several factors can explain this. First, the fitting method, which tries to estimate a model with 10 parameters from a trace of a few hundred points, has clear limits. In particular, the secondary peaks, represented by only a few points in the power spectrum, are hard to capture and require patient fine-tuning. The main limitations may also come from the model itself, as we now explore.

4.2  Inhomogeneities

Our model is quite homogeneous. It assumes that there is only one subclass for each cell type, represented by a unique set of parameters, and that the connectivity is fairly regular. This homogeneity was the key to answering a specific challenge: better understanding the potential role of the BCs-ACs network in the RGCs' response, from theoretical and experimental perspectives. The study was done in the spirit of theoretical physics: find a set of canonical equations, grounded in reality (here, biophysics), mathematically analyze their behavior with a minimal set of reduced (dimensionless) parameters, then confront the model predictions with experiments and propose a simple representation of the observed effects. This type of approach is useful because it allows one to understand the action of cells within a large retinal network at a level of detail and comprehensiveness generally not reachable by numerical simulation alone. Yet one may wonder what would happen if more parameters were taken into consideration to get closer to real retinal networks. For example, what would be the consequences of introducing more sustained and transient bipolar and amacrine cells, or of considering more general forms of connectivity, such as connections between amacrine cells?

As mentioned at the end of section 2.1.1, equation 2.3 extends to more cell types and to general connectivity. The same holds for equations 3.4 and 3.6. For example, considering $M_B$ types of BCs, each with a specific RF $K_m(x,y,t) = K_{BS_m}(x,y)\, K_{BT_m}(t)$, equation 3.6 would extend to
$$K_\alpha(x,y,t) = \sum_{\beta=1}^{3N} P_{\alpha\beta} \sum_{m=1}^{M_B} U_\beta^m(t) \times \sum_{\gamma \in m} P^{-1}_{\beta\gamma}\, \varpi_{\beta\gamma}\, K_{BS_\gamma}(x,y),$$
(4.1)

where $U_\beta^m(t) \equiv e_\beta \ast_t K_{BT_m}(t)$ and $\gamma \in m$ stands for "the BC $\gamma$ belongs to population $m$." Beyond that, we are not yet able to tell what the impact of such heterogeneity on the model would be. The reason is fairly simple: adding heterogeneity increases the number of parameters and, in general, prevents us from going as far as we did in this article, which was constructed in the spirit of keeping mathematical control of the results. On technical grounds, the structure of eigenvalues and eigenvectors is expected to change dramatically in a way that we cannot generally determine analytically, in contrast to this work, and it would require numerical exploration of a huge parameter space.

Adding heterogeneity clearly necessitates precisely defining which question we want to answer, and, from this point of view, specific forms of heterogeneity could be studied in an extension of this work. A simple idea would be to introduce some randomness in the parameters characterizing BCs and ACs—for example, considering random leak times in order to have a distribution of characteristic times τA,τB to study how we depart from the result obtained here when increasing the variance of these times or considering random versions of the incidence matrices ΓBA,ΓAB. An example of this has been considered in Souihel and Cessac (2021) to analyze anticipation effects. In any case, such questions would require long investigations beyond the scope of this article.

4.3  Rectification

The model includes weak nonlinearities (rectification) that were neglected in the mathematical computations. The effects of such rectification can be mathematically investigated (Cessac, 2022). Mainly, rectification projects the dynamics onto the subspace of nonrectified cells. This means that the dimensionality of the dynamical system changes in time, depending on the stimulus and network parameters, with strong consequences for the spectrum of $\mathcal{L}$ and thus for the power spectrum briefly discussed in section A.3.4.

4.4  Local Nonlinearities versus Network Effects

Additional nonlinearities take place in retinal dynamics. Ion channels have nonlinear behavior, inducing phenomena such as bifurcations and bursting, which are essential, for example, in the development of the retina, where bursting starburst amacrine cells generate retinal waves (Hennig et al., 2009; Karvouniari et al., 2019; Cessac & Matzakou-Karvouniari, 2022). In addition, gain control plays a central role in the response to spatiotemporal stimuli, inducing, for example, retinal anticipation (Berry et al., 1999; Chen et al., 2013; Souihel & Cessac, 2021).

Although our study was limited to responses to full-field flashes, we would like to extend the consequences of our analysis to more complex stimuli. First, the presence of peaks in the power spectrum implies the existence of resonances, that is, preferred frequencies for the RGCs. Exciting a cell at a resonance frequency will produce a maximal response. The Chirp stimulus (Hilgen et al., 2022; Baden et al., 2016) includes a phase in which periodic flashes with constant contrast but increasing frequency are applied. One frequently observes a bump in the experimental RGCs' response that might correspond to such a resonance.

In addition to preferred time frequencies, our analysis shows that the response of RGCs, induced by the network, may also involve specific spatial scales. Mathematically, these scales appear in the eigenvectors of the transport operator (see equation A.28 in section A.3.5). The practical implication would be that, when a localized, time-periodic stimulus is presented at a resonant frequency with a small radius and this radius is slowly increased, one may observe scales where the response is maximal. One of these scales may correspond to the size of the RF, but we conjecture that there should be larger scales where this phenomenon appears. Blocking AC synapses (e.g., with strychnine for glycinergic cells) would then be a way to disentangle the intrinsic responses of cells from network-induced responses. More generally, the existence of temporal resonances and preferred spatial scales would also induce a resonant response to objects moving at the appropriate speed. Such effects could also be related to mechanisms giving rise to anticipatory waves (Menz et al., 2020).
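The link between a stable complex eigenvalue and a resonance peak can be illustrated numerically. In the minimal sketch below, a single mode with eigenvalue $\lambda$ responds to a periodic drive $e^{i\omega t}$ with gain $1/|i\omega-\lambda|$, which peaks near $\mathrm{Im}(\lambda)$; the numbers are purely illustrative and not fitted to retinal data:

```python
import numpy as np

# Gain of a single stable mode dx/dt = lam*x + e^{i w t}: |H(w)| = 1/|i*w - lam|.
# The eigenvalue below is hypothetical (decay rate 1 s^-1, intrinsic frequency 8 rad/s).
lam = -1.0 + 8.0j
w = np.linspace(0.0, 20.0, 2001)
H = 1.0 / np.abs(1j * w - lam)

w_peak = w[np.argmax(H)]  # ~8.0: the response is maximal at the resonance set by Im(lam)
```

A chirp sweeping through `w_peak` would thus produce a transient bump in the response amplitude, analogous to the bump observed experimentally.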

4.5  Conclusion

This research provides mathematical insights for exploring the potential role of the network of amacrine cells in vision processing. First, it brings into the field of retinal modeling methods and concepts from dynamical systems theory, especially the idea that concerted population activities can be understood with a few reduced parameters while still reproducing experimental observations. Second, it takes a step toward formalizing the concept that the response of RGCs to stimuli is not only the result of intrinsic cell characteristics (e.g., their morphology) but also depends on their interactions with other cells and on the stimulus itself.

This article is a first step in confronting this approach with experiments, as a proof of concept opening up several research tracks. First, it could be used to infer the potential structure of spike correlations generated by an RGC population when responding to a moving object. While it is easy to show in this model that lateral inhibition decorrelates the response to full-field stimuli, the situation is dramatically different when an object is moving in the visual field. The resulting wave of activity generates nontrivial, transient, spatiotemporal correlations that may contain fundamental information deciphered by the cortex. The corresponding nonstationary spike train distributions can be studied mathematically (Cessac, 2011; Cessac et al., 2009, 2021) and numerically (Cessac et al., 2017). Pushing this formalism forward could therefore be a key to better understanding how the retina encodes visual information in a world made of motion.

Second, given the need to improve its biological plausibility, this class of models could be used to fit experimental retinal responses online with a simulator, adapting the stimulus to the model prediction in a closed-loop process (Benda et al., 2007). This is in the spirit of current research trends attempting to construct data-driven models of the retina (Schröder et al., 2020), although our approach aims at reducing the number of parameters as much as possible and maximizing mathematical control so as to avoid the black-box effect.

Finally, our work could be used in future studies to explore the role of other RGC subclasses or other retinal neurons and their interactions. In addition, it could be used to disassemble the components of other retinal circuits by manipulating the activity of specific neurons. It could also potentially benefit research in other parts of the nervous system, as fundamental properties of the inner retina are shared with other parts of the brain.

A.1  Derivation of the Model Equations and Properties

A.1.1  Generic Equations for Voltages

We start from the fundamental equation of neuronal dynamics (charge conservation) and make several simplifying assumptions.

First, we assume that neurons are reduced to points. They all have the same leak conductance, $g_L$, the same leak reversal potential, $E_L$, and the same membrane capacitance, $C$. They are passive (no active ion channels). This last assumption is based on the fact that most of the retinal neurons considered here (BCs and ACs) are not spiking. From these assumptions, we dispense with the equations for activation-inactivation variables present in most spiking models (Ermentrout & Terman, 2010). For RGCs, spiking is ruled by an LN type of model (see the main text).

As exposed in the main text, we model the CNO effect by a current of the form $-g^T_{CNO}\left(V_q-E^T_{CNO}\right)$, where $T$ is the cell type (e.g., ACs or RGCs), so that the CNO conductance is constant and depends only on the cell type. $E^T_{CNO}$ is the corresponding reversal potential. We insist, however, that this is only a modeling shortcut, allowing the mathematical treatment of inhibitory and excitatory effects on an equal footing, with only one parameter, the CNO conductance $g^T_{CNO}$.

The membrane potential $V_q$ of neuron $q$ then obeys the equation
$$C\frac{dV_q}{dt}=-g_L\left(V_q-E_L\right)-g^T_{CNO}\left(V_q-E^T_{CNO}\right)-\sum_{Y,syn}\sum_{p\to Y}g^{(Y,p)}_q\left(V_q-E_Y\right)+i_q(t),
\tag{A.1}$$
where $\sum_{Y,syn}$ is the sum over all possible types of synaptic currents shaping the membrane potential of neuron $q$ (e.g., the current resulting from glutamate transmitter with NMDA receptors). The sum $\sum_{p\to Y}$ stands for the contribution of presynaptic neurons $p$ connecting to the postsynaptic neuron $q$ via a synapse of type $Y$. The term $i_q$ is an external current (here, the photoreceptor current that inputs BCs). The synaptic conductance $g^{(Y,p)}_q$ depends in general on the presynaptic voltage $V_p$ and on additional activation or inactivation variables. Here, we assume that it depends only on the presynaptic voltage and takes the form $g^{(Y,p)}_q=\lambda^{(Y,p)}_q\,\mathcal{N}^{(Y)}(V_p)$, where $\lambda^{(Y,p)}_q$ is a constant, positive factor and $\mathcal{N}^{(Y)}$ is the piecewise linear function introduced in equation 2.4.
We shift the voltages by $E_L$ (i.e., replacing $V_q$ by $V_q-E_L$, $E_X$ by $E_X-E_L$, and so on). We introduce the characteristic timescale of the membrane response for neuron $q$:
$$\tau_q=\frac{C}{g_L+g^T_{CNO}+\sum_{Y,syn}\sum_{p\to Y}\lambda^{(Y,p)}_q\,\mathcal{N}^{(Y)}(V_p)}.$$
$\tau_q$ depends on the presynaptic neurons' states (via the conductances $g^{(Y,p)}_q$). We can write the time dependence of voltages as $V_p(t)=V_p^*+\delta V_p(t)$, where $V_p^*$ is the rest state, that is, the voltage reached by neurons in the absence of stimulation, given by
$$V_q^*=\frac{g^T_{CNO}E^T_{CNO}+\sum_{Y}\sum_{p\to Y}\lambda^{(Y,p)}_q\,\mathcal{N}^{(Y)}(V_p^*)\,E_Y}{g_L+g^T_{CNO}+\sum_{Y,syn}\sum_{p\to Y}\lambda^{(Y,p)}_q\,\mathcal{N}^{(Y)}(V_p^*)}.
\tag{A.2}$$
This is a complex, nonlinear system of equations where the rest voltages of all neurons are interdependent.

The term $\delta V_p(t)$ corresponds to the time variations of the voltage. What makes the dynamics complex are precisely these time variations, which induce important nonlinear effects. Here, however, we are investigating effects on the retinal response when applying CNO, considering that CNO acts on a timescale considerably larger than the characteristic time $\tau_q$. That is, we neglect the fluctuations $\delta V_p(t)$ and assume that the characteristic time $\tau_q$ is controlled only by the rest part of the voltages $V_p^*$, which depends on the CNO conductance. Thus, we can replace $\mathcal{N}^{(Y)}(V_p)$ by $\mathcal{N}^{(Y)}(V_p^*)$ in the expression of $\tau_q$:
$$\tau_q=\frac{\tau_L}{1+\tau_L\left[\frac{\zeta_{T,q}}{E^T_{CNO}}+\sum_{Y,syn}\frac{1}{E_Y}\sum_{p\to Y}W^{(Y,p)}_q\,\mathcal{N}^{(Y)}(V_p^*)\right]},
\tag{A.3}$$
where $\tau_L=\frac{C}{g_L}$ is the leak characteristic time.
Finally, we introduce the synaptic weights,
$$W^{(Y,p)}_q=\frac{\lambda^{(Y,p)}_q E_Y}{C},
\tag{A.4}$$
the CNO polarization,
$$\zeta_{T,q}=\frac{g^T_{CNO}E^T_{CNO}}{C},
\tag{A.5}$$
and the forcing term $F_q(t)=\frac{i_q(t)}{C}$, and we write
$$\frac{dV_q}{dt}=-\frac{V_q}{\tau_q}+\sum_{Y}\sum_{p\to Y}W^{(Y,p)}_q\,\mathcal{N}^{(Y)}(V_p)+\zeta_{T,q}+F_q(t),\quad q=1,\dots,N,
\tag{A.6}$$
where we dropped "syn" from $\sum_{Y,syn}$ for legibility. This is the general form of the cell voltage equations used in the article, leading to equation 2.3.
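As a quick structural check, equation A.6 can be integrated directly. The sketch below uses a tiny network with arbitrary, hypothetical parameters (sizes, weights, leak times, and drive are all placeholders) and a forward Euler scheme; it only illustrates the structure of the equation, not a calibrated retinal simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3                                   # toy network size (illustration only)
tau = rng.uniform(0.05, 0.2, N)         # characteristic times tau_q (hypothetical, in s)
W = 0.1 * rng.standard_normal((N, N))   # synaptic weights W_q^(Y,p), all types lumped together
zeta = 0.05 * np.ones(N)                # CNO polarization zeta_{T,q}
theta = 0.0
relu = lambda v: np.maximum(v - theta, 0.0)   # piecewise-linear rectification N(V)

def F(t):                               # external forcing F_q(t) (photoreceptor drive)
    return np.sin(2 * np.pi * t) * np.ones(N)

dt, T = 1e-3, 2.0
V = np.zeros(N)
for k in range(int(T / dt)):            # forward Euler on equation A.6
    V = V + dt * (-V / tau + W @ relu(V) + zeta + F(k * dt))
```

With weak coupling and stable leak times, the voltages settle into a bounded, periodically driven regime.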

A.1.2  Rest States, Characteristic Times, and CNO Dependence

We now study the dependence of the rest states and characteristic times on CNO for model 2.3. The details of the computations can be found at https://team.inria.fr/biovision/mathematicalderivationscno/. From equation A.3, the characteristic times take the form
$$\tau_{B_i}=\frac{\tau_L}{1+\frac{\tau_L}{E_A}\sum_{j=1}^{N}W^{B_iA_j}\,\mathcal{N}_A(V^*_{A_j})},\quad
\tau_{A_j}=\frac{\tau_L}{1+\tau_L\left[\frac{\zeta_A}{E^A_{CNO}}+\frac{1}{E_B}\sum_{i=1}^{N}W^{A_jB_i}\,\mathcal{N}_B(V^*_{B_i})\right]},\quad
\tau_{G_k}=\frac{\tau_L}{1+\tau_L\left[\frac{\zeta_G}{E^G_{CNO}}+\frac{1}{E_B}\sum_{i=1}^{N}W^{G_kB_i}\,\mathcal{N}_B(V^*_{B_i})+\frac{1}{E_A}\sum_{j=1}^{N}W^{G_kA_j}\,\mathcal{N}_A(V^*_{A_j})\right]}.
\tag{A.7}$$
Here, we see an evident and expected dependence on CNO: increasing $\zeta_A$ decreases the characteristic times of the membrane. This picture is, however, not complete, as the rest states themselves depend on CNO. Indeed, assume, for simplicity, that the system is uniform in space, so that the characteristic times do not depend on the cell index (only on the cell type). Then the rest states are given by (in vector form)
$$\begin{aligned}
V_B^*&=\tau_B\left(\mathcal{I}_N-\tau_A\tau_B W^{BA}W^{AB}\right)^{-1}\cdot\left(\tau_A W^{BA}\zeta_A-\tau_A\theta_B W^{BA}W^{AB}-\theta_A W^{BA}\right)\mathbf{1}_N,\\
V_A^*&=\tau_A\left(\mathcal{I}_N-\tau_A\tau_B W^{AB}W^{BA}\right)^{-1}\cdot\left(\zeta_A-\tau_B\theta_A W^{AB}W^{BA}-\theta_B W^{AB}\right)\mathbf{1}_N,\\
V_G^*&=\tau_G W^{GA}V_A^*+\tau_G W^{GB}V_B^*+\tau_G\left(\zeta_G-\theta_A W^{GA}-\theta_B W^{GB}\right)\cdot\mathbf{1}_N,
\end{aligned}
\tag{A.8}$$
where $\mathcal{I}_N$ is the $N\times N$ identity matrix and $\mathbf{1}_N$ is the unit vector in dimension $N$. Here, we have assumed that the rest states are above threshold and that $\mathcal{I}_N-\tau_A\tau_B W^{BA}W^{AB}$ and $\mathcal{I}_N-\tau_A\tau_B W^{AB}W^{BA}$ are invertible. See https://team.inria.fr/biovision/mathematicalderivationscno/ for the general case.

The systems of equations A.7 and A.8 are thus nonlinearly entangled. In particular, a feedback effect appears in loop terms like $W^{BA}W^{AB}$. For example, the term $\left(\mathcal{I}_N-\tau_A\tau_B W^{BA}W^{AB}\right)^{-1}$ involves loops $\left(W^{BA}W^{AB}\right)^n$ of order $n>0$, with an amplitude decaying with $n$. Consider now the role of the parameter $\zeta_A$. It directly affects ACs and, indirectly, through the network, the activity of BCs and RGCs. One observes, first, a direct effect on the ACs' rest state. Looking at the numerator of equation A.8, increasing $\zeta_A$ would depolarize the cell (if $E^A_{CNO}>0$) or hyperpolarize it (if $E^A_{CNO}<0$) if $\tau_A$ and $\tau_B$ were independent of CNO. However, CNO acts on these times as well. The net effect of $\zeta_A$ is thus nonlinear and cannot be anticipated by simple arguments.
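The loop expansion invoked here can be checked numerically: for a matrix with spectral radius below 1, the inverse equals the geometric series of loop terms. A minimal sketch, with an arbitrary toy matrix standing in for $\tau_A\tau_B W^{BA}W^{AB}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
A = rng.standard_normal((N, N))
M = 0.4 * A / np.linalg.norm(A, 2)   # toy loop matrix, spectral radius <= 0.4 < 1

# (I - M)^{-1} = sum_{n >= 0} M^n: each term is a loop of order n,
# with an amplitude decaying geometrically in n.
inv_exact = np.linalg.inv(np.eye(N) - M)
inv_series = sum(np.linalg.matrix_power(M, n) for n in range(60))
```

Truncating the series at low order shows how short loops dominate the entanglement between the rest states.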

In the case of section 2.1.4 with a reduced set of parameters, the equations for characteristic times reduce to equation 2.8 while the rest states are given by equation 2.9.

A.1.3  Mathematical Analysis of Network Dynamics

In the following, the notation $\mathrm{diag}\left(x_n\right)_{n=1}^{N}$ denotes a diagonal $N\times N$ matrix with diagonal entries $x_n$.

Joint dynamics. The joint dynamics of all cell types is given by the dynamical system 2.3. We use Greek indices $\alpha,\beta,\gamma=1,\dots,3N$ and define the state vector $X$ with entries
$$X_\alpha=\begin{cases}V_{B_i},&\alpha=i,&i=1,\dots,N;\\V_{A_j},&\alpha=N+j,&j=1,\dots,N;\\V_{G_k},&\alpha=2N+k,&k=1,\dots,N.\end{cases}$$
We introduce $F$ with entries
$$F_\alpha=\begin{cases}F_{B_i},&\alpha=i,&i=1,\dots,N;\\\zeta_A,&\alpha=N+j,&j=1,\dots,N;\\\zeta_G,&\alpha=2N+k,&k=1,\dots,N;\end{cases}$$
and the rectification vector $R(X)$ with entries
$$R_\alpha(X)=\begin{cases}\mathcal{N}^{(B)}\left(V_{B_i}\right),&\alpha=i,&i=1,\dots,N;\\\mathcal{N}^{(A)}\left(V_{A_j}\right),&\alpha=N+j,&j=1,\dots,N;\\0,&\alpha=2N+k,&k=1,\dots,N.\end{cases}$$
We introduce the $3N\times 3N$ matrices
$$\mathcal{T}=\begin{pmatrix}\mathrm{diag}\left(\tau_{B_i}\right)_{i=1}^{N}&0_{NN}&0_{NN}\\0_{NN}&\mathrm{diag}\left(\tau_{A_j}\right)_{j=1}^{N}&0_{NN}\\0_{NN}&0_{NN}&\mathrm{diag}\left(\tau_{G_k}\right)_{k=1}^{N}\end{pmatrix},
\tag{A.9}$$
characterizing the characteristic integration times of cells, and
$$\mathcal{W}=\begin{pmatrix}0_{NN}&W^{BA}&0_{NN}\\W^{AB}&0_{NN}&0_{NN}\\W^{GB}&W^{GA}&0_{NN}\end{pmatrix},
\tag{A.10}$$
summarizing chemical synaptic interactions. Then the dynamical system 2.3 reads, in vector form,
$$\frac{dX}{dt}=-\mathcal{T}^{-1}\cdot X+\mathcal{W}\cdot R(X)+F(t).
\tag{A.11}$$

We remark that equation A.11 has a specific product structure: the dynamics of RGCs is driven by BCs and ACs with no feedback. This means that one can first study the coupled dynamics of BCs and ACs and then their effect on RGCs.

Linear evolution. We consider the evolution of equation A.11 from an initial time $t_0$. Typically, $t_0$ is a reference time where the network is at rest, before the stimulus is applied. The dynamical system has almost the form of a nonautonomous linear system driven by the term $F(t)$. There is, however, a weak nonlinearity due to the piecewise linear rectification appearing in the term $R(X)$. Therefore, when the voltages of all cells are large enough, the system is linear. Mathematically, there is a domain of $\mathbb{R}^{3N}$,
$$\mathcal{D}=\left\{V_{B_i}\geq\theta_B,\ V_{A_j}\geq\theta_A,\ i,j=1,\dots,N\right\},
\tag{A.12}$$
where $R(X)$ is linear, so that equation A.11 is linear too (see Cessac, 2022, for more details).

From now on, we consider this linear case. We write $\mathcal{L}=-\mathcal{T}^{-1}+\mathcal{W}$, so that
$$\mathcal{L}=\begin{pmatrix}-\mathrm{diag}\left(\frac{1}{\tau_{B_i}}\right)_{i=1}^{N}&W^{BA}&0_{NN}\\W^{AB}&-\mathrm{diag}\left(\frac{1}{\tau_{A_j}}\right)_{j=1}^{N}&0_{NN}\\W^{GB}&W^{GA}&-\mathrm{diag}\left(\frac{1}{\tau_{G_k}}\right)_{k=1}^{N}\end{pmatrix}.
\tag{A.13}$$
We introduce the $N$-dimensional vector $\mathbf{1}_N=(1)_{i=1}^{N}$ and the $3N$-dimensional vector
$$\mathcal{C}=\begin{pmatrix}-\theta_A W^{BA}\cdot\mathbf{1}_N\\-\theta_B W^{AB}\cdot\mathbf{1}_N\\-\theta_B W^{GB}\cdot\mathbf{1}_N-\theta_A W^{GA}\cdot\mathbf{1}_N\end{pmatrix},$$
so that, in $\mathcal{D}$, equation A.11 reads
$$\frac{dX}{dt}=\mathcal{L}\cdot X+F(t)+\mathcal{C}.$$
We assume that $\mathcal{L}$ is invertible. This assumption and, more generally, the spectrum of $\mathcal{L}$ are further discussed in section A.3.4. The general solution of this equation is
$$X(t)=e^{\mathcal{L}(t-t_0)}\cdot X(t_0)+\int_{t_0}^{t}e^{\mathcal{L}(t-s)}F(s)\,ds-\mathcal{L}^{-1}\cdot\left(\mathcal{I}_{3N}-e^{\mathcal{L}(t-t_0)}\right)\cdot\mathcal{C},
\tag{A.14}$$
where $\mathcal{I}_{3N}$ is the $3N$-dimensional identity matrix.
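The closed-form solution A.14 can be verified numerically on a small stable system. The sketch below (random toy matrix with its spectrum shifted to be stable, and a constant forcing standing in for $F+\mathcal{C}$; all numbers hypothetical) compares it with a brute-force integration:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
# Shift the spectrum so that every eigenvalue has a strictly negative real part
L = A - (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)
X0 = rng.standard_normal(n)
FC = rng.standard_normal(n)      # constant forcing, standing in for F + C
t = 1.5

# Closed form (equation A.14 with t0 = 0 and constant forcing):
X_closed = expm(L * t) @ X0 - np.linalg.inv(L) @ (np.eye(n) - expm(L * t)) @ FC

# Brute force: fine forward-Euler integration of dX/dt = L.X + F + C
dt = 1e-5
X = X0.copy()
for _ in range(int(t / dt)):
    X = X + dt * (L @ X + FC)
err = np.max(np.abs(X - X_closed))   # small (Euler discretization error)
```

For large $t$, the transient term decays and `X_closed` approaches the rest state $-L^{-1}\cdot(F+\mathcal{C})$, as stated below.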
Although this equation is general, it actually stands when one can define a notion of asymptotic regime, that is, when $\mathcal{L}$ has stable eigenvalues (eigenvalues with a strictly negative real part). The spectrum of $\mathcal{L}$ is studied below, and conditions ensuring the stability of eigenvalues are given. Here, we assume that all eigenvalues are stable and that $t-t_0$ is large, so that we can remove the transient term $e^{\mathcal{L}(t-t_0)}\cdot X(t_0)$ depending on the initial condition $X(t_0)$. In addition, the last term converges to
$$X^*=-\mathcal{L}^{-1}\cdot\mathcal{C},
\tag{A.15}$$
the rest state of the linear system, which vanishes whenever the thresholds $\theta_A,\theta_B$ are set to 0.
Derivation of equation 2.10. We denote by $\lambda_\beta$, $\beta=1,\dots,3N$, the eigenvalues of $\mathcal{L}$ and by $P_\beta$ its eigenvectors (the columns of the matrix $P$ transforming $\mathcal{L}$ into diagonal form). We first consider the case $\mathcal{C}=0$. From equation A.14, we then have
$$X_\alpha(t)=\sum_{\beta=1}^{3N}\sum_{\gamma=1}^{3N}P_{\alpha\beta}P^{-1}_{\beta\gamma}\int_{-\infty}^{t}e^{\lambda_\beta(t-s)}F_\gamma(s)\,ds,$$
where $F_\gamma=F_{B_i}$, $\gamma=i=1,\dots,N$ (BCs).
We recall that, from equation 2.5, $F_{B_\gamma}(t)=\frac{V^{(drive)}_\gamma}{\tau_{B_\gamma}}+\frac{dV^{(drive)}_\gamma}{dt}$, so that
$$\int_{-\infty}^{t}e^{\lambda_\beta(t-s)}F_\gamma(s)\,ds=V^{(drive)}_\gamma(t)+\varpi_{\beta\gamma}\int_{-\infty}^{t}e^{\lambda_\beta(t-s)}V^{(drive)}_\gamma(s)\,ds,\quad\gamma=1,\dots,N,$$
for B cells, with $\varpi_{\beta\gamma}=\frac{1}{\tau_{B_\gamma}}+\lambda_\beta$ and using $V^{(drive)}_\gamma(-\infty)=0$.
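This integration-by-parts identity (using $V^{(drive)}(-\infty)=0$) can be checked numerically. The drive below is a hypothetical Gaussian pulse, and the values of $\lambda_\beta$ and $\tau_{B_\gamma}$ are illustrative:

```python
import numpy as np

def trap(f, x):                      # trapezoidal rule (NumPy-version independent)
    return np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0

lam, tau, t = -2.0, 0.3, 3.0         # illustrative lambda_beta and tau_B_gamma
s = np.linspace(-20.0, t, 400001)
V = np.exp(-(s - 1.0) ** 2)          # hypothetical drive V^(drive)(s), vanishing in the past
dV = -2.0 * (s - 1.0) * V            # its time derivative
kern = np.exp(lam * (t - s))

# LHS: int e^{lam(t-s)} (V/tau + V') ds ; RHS: V(t) + (1/tau + lam) int e^{lam(t-s)} V ds
lhs = trap(kern * (V / tau + dV), s)
rhs = np.exp(-(t - 1.0) ** 2) + (1.0 / tau + lam) * trap(kern * V, s)
```

The two sides agree to quadrature precision, confirming the definition of $\varpi_{\beta\gamma}$.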
For $\gamma=N+1,\dots,2N$, $F_\gamma=\zeta_A$ (ACs), we have
$$\int_{-\infty}^{t}e^{\lambda_\beta(t-s)}\zeta_A\,ds=-\frac{\zeta_A}{\lambda_\beta}.$$
Finally, for $\gamma=2N+1,\dots,3N$, $F_\gamma=\zeta_G$ (RGCs):
$$\int_{-\infty}^{t}e^{\lambda_\beta(t-s)}\zeta_G\,ds=-\frac{\zeta_G}{\lambda_\beta}.$$
We remark that
$$\sum_{\beta=1}^{3N}\sum_{\gamma=1}^{3N}P_{\alpha\beta}P^{-1}_{\beta\gamma}V^{(drive)}_\gamma(t)=\sum_{\gamma=1}^{3N}V^{(drive)}_\gamma(t)\sum_{\beta=1}^{3N}P_{\alpha\beta}P^{-1}_{\beta\gamma}=\sum_{\gamma=1}^{3N}V^{(drive)}_\gamma(t)\,\delta_{\alpha\gamma}=V^{(drive)}_\alpha(t).$$
It follows that
$$X_\alpha(t)=V^{(drive)}_\alpha(t)+E^{(drive)}_\alpha(t)+E^{(CNO_A)}_\alpha+E^{(CNO_G)}_\alpha,\quad\alpha=1,\dots,3N,$$
with
$$\begin{aligned}
E^{(drive)}_\alpha(t)&=\sum_{\beta=1}^{3N}\sum_{\gamma=1}^{N}P_{\alpha\beta}P^{-1}_{\beta\gamma}\,\varpi_{\beta\gamma}\int_{-\infty}^{t}e^{\lambda_\beta(t-s)}V^{(drive)}_\gamma(s)\,ds,&\alpha&=1,\dots,3N,\\
E^{(CNO_A)}_\alpha&=-\zeta_A\sum_{\beta=1}^{3N}\sum_{\gamma=N+1}^{2N}\frac{P_{\alpha\beta}P^{-1}_{\beta\gamma}}{\lambda_\beta},&\alpha&=N+1,\dots,2N,\\
E^{(CNO_G)}_\alpha&=\zeta_G\,\tau_G,&\alpha&=2N+1,\dots,3N,
\end{aligned}$$
where the last result comes from $\lambda_\beta=-\frac{1}{\tau_G}$ for $\beta=2N+1,\dots,3N$ (see the next section). The expression of $X_\alpha(t)$ corresponds to equation 3.1.

When $\mathcal{C}\neq 0$, there is an additional term corresponding to the rest state, equation A.15.

Eigenvalues and eigenvectors of L.

Linear case. We start from the expression A.13 of the linear operator ruling the dynamics in the set $\mathcal{D}$ defined by equation A.12. We consider, as in the main text, the case where all characteristic times $\tau_{B_i}$ are equal to $\tau_B$, all characteristic times $\tau_{A_j}$ are equal to $\tau_A$, and all characteristic times $\tau_{G_k}$ are equal to $\tau_G$. Using the same notations as the main text, we have
$$\mathcal{L}=\begin{pmatrix}-\frac{\mathcal{I}_{NN}}{\tau_B}&-w^-\Gamma^{BA}&0_{NN}\\w^+\Gamma^{AB}&-\frac{\mathcal{I}_{NN}}{\tau_A}&0_{NN}\\w_{GB}\Gamma^{GB}&w_{GA}\Gamma^{GA}&-\frac{\mathcal{I}_{NN}}{\tau_G}\end{pmatrix},$$
where $0_{NN}$ is the $N\times N$ zero matrix and $\mathcal{I}_{NN}$ the $N\times N$ identity matrix.

Eigenvalues and eigenvectors. We next consider the case where $\Gamma^{BA}=\Gamma^{AB}$. We denote by $\kappa_n$, $n=1,\dots,N$, the eigenvalues of $\Gamma^{BA}$, ordered as $\kappa_1\leq\kappa_2\leq\dots\leq\kappa_N$, and by $\psi_n$, $n=1,\dots,N$, the corresponding normalized eigenvectors.

We seek the eigenvalues, $\lambda_\beta$, and eigenvectors, $P_\beta$, $\beta=1,\dots,3N$, of $\mathcal{L}$. It is evident, from the form of $\mathcal{L}$, that there are $N$ eigenvalues $\lambda_\beta=-\frac{1}{\tau_G}$, $P_\beta=e_\beta$, where $e_\beta$ is the canonical basis vector in direction $\beta$. We attribute them the indices $\beta=2N+1,\dots,3N$, as this indexing corresponds to the form of $\mathcal{L}$ when $w^-=w^+=0$. We seek the $2N$ remaining eigenvalues and eigenvectors assuming that $P_\beta$ has the form
$$P_\beta=\begin{pmatrix}\psi_n\\\rho_n\psi_n\\\varphi_n\end{pmatrix},\quad n=1,\dots,N,
\tag{A.16}$$
where $\rho_n$ is an unknown parameter and $\varphi_n$ an $N$-dimensional vector, both to be determined from the characteristic equation
$$\mathcal{L}\cdot P_\beta=\lambda_\beta P_\beta.$$
This leads to the system of equations
$$\begin{cases}\lambda_\beta=-\frac{1}{\tau_B}-w^-\rho_n\kappa_n;\\[2pt]w^+\kappa_n-\frac{\rho_n}{\tau_A}=\rho_n\lambda_\beta;\\[2pt]\lambda_\beta\varphi_n=\left(w_{GB}\Gamma^{GB}+w_{GA}\rho_n\Gamma^{GA}\right)\psi_n-\frac{\varphi_n}{\tau_G}.\end{cases}
\tag{A.17}$$
We first assume that $w^-,w^+>0$ and later discuss the limit when these quantities tend to zero. Combining the first two equations leads to a second-order polynomial in the variable $\rho_n$,
$$w^-\kappa_n\rho_n^2-\frac{1}{\tau}\rho_n+w^+\kappa_n=0,
\tag{A.18}$$
giving two solutions for each $n$:
$$\rho_n^{\pm}=\begin{cases}\frac{1}{2\tau w^-\kappa_n}\left(1\pm\sqrt{1-4\mu\kappa_n^2}\right),&\kappa_n\neq 0,\ \frac{1}{\tau}\neq 0;\\[2pt]w^+\tau\kappa_n,&\kappa_n=0,\ \frac{1}{\tau}\neq 0;\\[2pt]\pm\sqrt{-\frac{w^+}{w^-}},&\frac{1}{\tau}=0,\end{cases}
\tag{A.19}$$
where
$$\frac{1}{\tau}=\frac{1}{\tau_A}-\frac{1}{\tau_B}
\tag{A.20}$$
and
$$\mu=w^-w^+\tau^2\geq 0.
\tag{A.21}$$
The first $2N$ eigenvalues of $\mathcal{L}$ are therefore given by
$$\lambda_n^{\pm}=\begin{cases}-\frac{1}{2\tau_{AB}}\pm\frac{1}{2\tau}\sqrt{1-4\mu\kappa_n^2},&\frac{1}{\tau}\neq 0;\\[2pt]-\frac{1}{\tau_A}\mp\sqrt{-w^-w^+\kappa_n^2},&\frac{1}{\tau}=0,\end{cases}
\tag{A.22}$$
where
$$\frac{1}{\tau_{AB}}=\frac{1}{\tau_A}+\frac{1}{\tau_B}.
\tag{A.23}$$
We finally obtain $2N$ vectors $\varphi_n$:
$$\varphi_n^{\pm}=\frac{1}{\lambda_n^{\pm}+\frac{1}{\tau_G}}\left(w_{GB}\Gamma^{GB}+w_{GA}\rho_n^{\pm}\Gamma^{GA}\right)\psi_n.
\tag{A.24}$$

We now discuss the limit when $w^-$ or $w^+$ or both tend to 0. If $w^-=0$, $\rho_n=w^+\kappa_n\tau$ from equation A.18. If $w^+=0$, there are two solutions of equation A.18, $\rho_n=0$ or $\rho_n=\frac{1}{\tau w^-\kappa_n}$. Finally, when $w^-=w^+=0$, $\rho_n=0$ and the ansatz, equation A.16, does not apply. In this case, $\mathcal{L}$ is diagonal: the $N$ first eigenvalues are $-\frac{1}{\tau_B}$ and the $N$ next are $-\frac{1}{\tau_A}$, so that $\lambda_n^+=-\frac{1}{\tau_B}$ and $\lambda_n^-=-\frac{1}{\tau_A}$. Therefore, we order the eigenvalues and eigenvectors of $\mathcal{L}$ such that the $N$ first eigenvalues are $\lambda_n^+$, $n=1,\dots,N$, and the $N$ next, with indices $\beta=N+1,\dots,2N$, are $\lambda_n^-$.

We finally end up with the following form for the eigenvalues and eigenvectors of $\mathcal{L}$:
$$\begin{aligned}
\lambda_\beta&=\lambda_n^+,&P_\beta&=\begin{pmatrix}\psi_n\\\rho_n^+\psi_n\\\frac{1}{\lambda_n^++\frac{1}{\tau_G}}\left(w_{GB}\Gamma^{GB}+w_{GA}\rho_n^+\Gamma^{GA}\right)\psi_n\end{pmatrix},&\beta&=n=1,\dots,N;\\
\lambda_\beta&=\lambda_n^-,&P_\beta&=\begin{pmatrix}\psi_n\\\rho_n^-\psi_n\\\frac{1}{\lambda_n^-+\frac{1}{\tau_G}}\left(w_{GB}\Gamma^{GB}+w_{GA}\rho_n^-\Gamma^{GA}\right)\psi_n\end{pmatrix},&\beta&=N+n,\ n=1,\dots,N;\\
\lambda_\beta&=-\frac{1}{\tau_G},&P_\beta&=e_\beta,&\beta&=2N+1,\dots,3N.
\end{aligned}
\tag{A.25}$$
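This eigenstructure can be cross-checked numerically. The sketch below builds $\mathcal{L}$ for a small 1D lattice with nearest-neighbor $\Gamma^{BA}=\Gamma^{AB}$ and arbitrary, hypothetical parameters, and compares its numerically computed spectrum with the prediction of equation A.22 together with the $N$-fold eigenvalue $-1/\tau_G$:

```python
import numpy as np

N = 5
tauB, tauA, tauG = 0.1, 0.2, 0.05          # illustrative characteristic times
wm, wp, wGB, wGA = 0.3, 0.4, 0.2, -0.2     # w^-, w^+, w_GB, w_GA (hypothetical)

G = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)   # nearest-neighbor Gamma
Z, I = np.zeros((N, N)), np.eye(N)
L = np.block([
    [-I / tauB, -wm * G,    Z],
    [ wp * G,   -I / tauA,  Z],
    [ wGB * G,   wGA * G,  -I / tauG],
])

kappa = 2 * np.cos(np.arange(1, N + 1) * np.pi / (N + 1))      # eigenvalues of Gamma
itau = 1 / tauA - 1 / tauB
itauAB = 1 / tauA + 1 / tauB
mu = wm * wp / itau**2
disc = np.sqrt((1 - 4 * mu * kappa**2).astype(complex))
lam_pred = np.concatenate([
    -itauAB / 2 + itau / 2 * disc,     # lambda_n^+
    -itauAB / 2 - itau / 2 * disc,     # lambda_n^-
    -np.ones(N) / tauG,                # the RGC block
])
lam_num = np.linalg.eigvals(L)
```

Sorting both spectra shows they coincide; the block-triangular structure of $\mathcal{L}$ in the RGC column guarantees the $N$-fold eigenvalue $-1/\tau_G$.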
Skeleton. The eigenvalues $\lambda_n^{\pm}$ in equation A.22 can be real or complex conjugate. Increasing $\mu$, they become complex when
$$\mu>\frac{1}{4\kappa_n^2}\equiv\mu_{n,c}.
\tag{A.26}$$
In this case, the real part is $-\frac{1}{2\tau_{AB}}$ and the imaginary part is $\pm\frac{1}{2\tau}\sqrt{4\mu\kappa_n^2-1}$. If $\mu\leq\mu_{n,c}$, the eigenvalues $\lambda_\beta$ are real and negative. In both cases, the real part is negative, which implies that the linear dynamical system, equation A.11, is stable.

The $N$ equations A.26 define what we call the "skeleton of the RF's map." In the main text, we introduced the quantities $r=\frac{\tau_A}{\tau_B}$, $s=\frac{w^-}{w^+}$. In these variables, the critical condition, equation A.26, reads
$$s_{n,c}=\frac{1}{w^{+2}\tau_B^2}\,\frac{1}{4\kappa_n^2}\,\frac{(1-r)^2}{r^2}.
\tag{A.27}$$
This defines two critical lines symmetric with respect to $r=1$. These lines are invariant under the variable change $r'=r$, $s'=\left(\frac{w^+\tau_B}{w'^+\tau'_B}\right)^2 s$. This allows mapping the skeleton obtained from a set of values $\tau'_B$, $w'^+$ to the skeleton obtained with the reference values $\tau_B$, $w^+$.

Rectification. In this article, we have essentially considered a situation where cells are not rectified, whereas the full model, equation 2.3, includes rectification, in agreement with realistic biological systems. The mathematical effect of rectifying a cell $A_j$ is to set to zero the corresponding row in the matrix $W^{BA}$. This has several consequences. First, we cannot apply the useful ansatz used in this section, namely $\Gamma^{BA}=\Gamma^{AB}$. In addition, the vanishing of only one row in $\mathcal{L}$ completely modifies its spectrum. However, thresholding in rectification corresponds to partitioning the phase space of the model, a compact subset of $\mathbb{R}^{3N}$, into convex subdomains delimited by hyperplanes. In each of these domains, the term $\mathcal{W}\cdot R(X)$ appearing in equation A.11 is linear, with a number of zero eigenvalues corresponding to the number of rectified cells. This matrix acts as a projector on the complementary subspace of its kernel. In each of these subdomains, equation 3.1 applies. One can actually compute, for a given stimulus, the times of entrance into and exit from a subdomain, with the effect of modifying the eigenvalues and eigenvectors appearing in equation 3.1. The resulting equation is quite complex, though, and will require further investigation. (See Cessac, 2020, for more details.)

Nearest neighbors' connectivity. We consider the case where the connectivity matrices $\Gamma^{AB}=\Gamma^{BA}$ have nearest-neighbor, symmetric connections. We also assume that the dynamics hold on a square lattice with null boundary conditions. We define $\alpha=i_x\in\{1,\dots,L=N\}$ in one dimension and $\alpha=i_x+(i_y-1)L\in\{1,\dots,L^2=N\}$ in two dimensions. We also set $n=n_x\in\{1,\dots,L=N\}$ in one dimension and $n=n_x+(n_y-1)L\in\{1,\dots,L^2=N\}$ in two dimensions. Then the eigenvectors and eigenvalues of these matrices have the form
$$\psi_{\alpha,n}=\left(\frac{2}{L+1}\right)^{\frac{d}{2}}\prod_l\sin\left(\frac{n_l\pi}{L+1}\,i_l\right),\quad\kappa_n=2\sum_l\cos\left(\frac{n_l\pi}{L+1}\right),
\tag{A.28}$$
with $l=x$ for $d=1$ and $l=x,y$ for $d=2$. In particular, in one dimension,
$$\psi_{\alpha,n}=\sqrt{\frac{2}{L+1}}\sin\left(\frac{\alpha n\pi}{L+1}\right).$$

The quantum numbers $n_x,n_y$ define a wave vector $\vec{k}_n=\left(\frac{n_x\pi}{L+1},\frac{n_y\pi}{L+1}\right)$ corresponding to wavelengths $\frac{L+1}{n_x}$, $\frac{L+1}{n_y}$. Hence, the first eigenmode ($n_x=1$, $n_y=1$) corresponds to the largest space scale (the scale of the whole retina), with the smallest eigenvalue (in absolute value) $s_{(1,1)}=2\left[\cos\left(\frac{\pi}{L+1}\right)+\cos\left(\frac{\pi}{L+1}\right)-2\right]$. Each of these eigenmodes is related to a characteristic time $\tau_n=\frac{1}{|\lambda_n|}$.
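Equation A.28 in one dimension is the classical spectral decomposition of the path-graph adjacency matrix and can be verified directly (the lattice size below is a toy value, for illustration):

```python
import numpy as np

# Check: for the 1D nearest-neighbor Gamma with null boundary conditions,
#   psi_{alpha,n} = sqrt(2/(L+1)) sin(alpha n pi/(L+1)),  kappa_n = 2 cos(n pi/(L+1)).
L = 6
Gamma = np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)
alpha = np.arange(1, L + 1)

ok = True
for n in range(1, L + 1):
    psi = np.sqrt(2 / (L + 1)) * np.sin(alpha * n * np.pi / (L + 1))
    kappa = 2 * np.cos(n * np.pi / (L + 1))
    # eigenvector relation and unit normalization
    ok = ok and np.allclose(Gamma @ psi, kappa * psi) and np.isclose(psi @ psi, 1.0)
```

The wavelength $(L+1)/n$ of each mode is what sets the preferred spatial scales discussed in section 4.4.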

Eigenvalues $\kappa_n$ can be positive or negative. However, from equation A.22, this has no impact on the eigenvalues $\lambda_n^{\pm}$, as what matters is $\kappa_n^2$. This induces, however, a symmetry $\kappa_n\to-\kappa_n$ that can be seen in the skeleton of Figure 4 (recalling that this figure is in log scale).

This project was funded by the Leverhulme Trust (RPG-2016-315 to E.S. and B.C.) and by Newcastle University (Faculty of Medical Sciences Graduate School).

References

Asari, H., & Meister, M. (2012). Divergence of visual channels in the inner retina. Nature Neuroscience, 15, 1581–1589.
Baccus, S. A., Ölveczky, B. P., Manu, M., & Meister, M. (2008). A retinal circuit that computes object motion. Journal of Neuroscience, 28(27), 6807–6817.
Baden, T., Berens, P., Franke, K., Román Rosón, M., Bethge, M., & Euler, T. (2016). The functional diversity of retinal ganglion cells in the mouse. Nature, 529(7586), 345–350.
Benda, J., Gollisch, T., Machens, C. K., & Herz, A. V. (2007). From response to stimulus: Adaptive sampling in sensory physiology. Current Opinion in Neurobiology, 17(4), 430–436.
Berry, M., Brivanlou, I., Jordan, T., & Meister, M. (1999). Anticipation of moving stimuli by the retina. Nature, 398(6725), 334–338.
Besharse, J., & Bok, D. (2011). The retina and its disorders. Elsevier Science.
Cessac, B. (2011). A discrete time neural network model with spiking neurons II. Dynamics with noise. Journal of Mathematical Biology, 62(6), 863–900.
Cessac, B. (2020). Recent trends in chaotic, nonlinear and complex dynamics, in honour of Prof. Miguel A. F. Sanjuán on his 60th birthday. World Scientific.
Cessac, B. (2022). Retinal processing: Insights from mathematical modelling. Journal of Imaging, 8(1).
Cessac, B., Ampuero, I., & Cofré, R. (2021). Linear response of general observables in spiking neuronal network models. Entropy, 23(2).
Cessac, B., Kornprobst, P., Kraria, S., Nasser, H., Pamplona, D., Portelli, G., & Viéville, T. (2017). PRANAS: A new platform for retinal analysis and simulation. Frontiers in Neuroinformatics, 11, 49.
Cessac, B., & Matzakou-Karvouniari, D. (2022). The nonlinear dynamics of retinal waves. Physica D: Nonlinear Phenomena, 439, 133436.
Cessac, B., Rostro-Gonzalez, H., Vasquez, J.-C., & Viéville, T. (2009). How Gibbs distribution may naturally arise from synaptic adaptation mechanisms: A model based argumentation. Journal of Statistical Physics, 136(3), 565–602.
Chalupa, L. M., & Werner, J. (Eds.). (2004). The visual neurosciences. MIT Press.
Chen, E. Y., Marre, O., Fisher, C., Schwartz, G., Levy, J., da Silviera, R. A., & Berry, M. (2013). Alert response to motion onset in the retina. Journal of Neuroscience, 33(1), 120–132.
Chichilnisky, E. J. (2001). A simple white noise analysis of neuronal light responses. Network, 12(2), 199–213.
Daw, N. (2012). How vision works: The physiological mechanisms behind what we see. Oxford University Press.
de Vries, S. E. J., Baccus, S. A., & Meister, M. (2011). The projective field of a retinal amacrine cell. Journal of Neuroscience, 31(23), 8595–8604.
Demb, J. B., & Singer, J. H. (2015). Functional circuitry of the retina. Annual Review of Vision Science, 24(1), 263–289.
Diamond, J. S. (2017). Inhibitory interneurons in the retina: Types, circuitry, and function. Annual Review of Vision Science, 24(1).
Ermentrout, G. B., & Terman, D. (2010). Foundations of mathematical neuroscience. Springer.
Euler, T., Haverkamp, S., Schubert, T., & Baden, T. (2014). Retinal bipolar cells: Elementary building blocks of vision. Nature Reviews Neuroscience, 15, 507–519.
FitzHugh, R. (1969). Mathematical models of excitation and propagation in nerve. McGraw-Hill.
Franke, K., & Baden, T. (2017). General features of inhibition in the inner retina. Journal of Physiology, 595(16), 5507–5515.
Gollisch, T., & Meister, M. (2010). Eye smarter than scientists believed: Neural computations in circuits of the retina. Neuron, 65(2), 150–164.
Hennig, M., Adams, C., Willshaw, D., & Sernagor, E. (2009). Early-stage waves in the retinal network emerge close to a critical state transition between local and global functional connectivity. Journal of Neuroscience, 29(4), 1077–1096.
Hilgen, G., Kartsaki, E., Kartysh, V., Cessac, B., & Sernagor, E. (2022). A novel approach to the functional classification of retinal ganglion cells. Open Biology, 12(3).
Hilgen, G., Pirmoradian, S., Pamplona, D., Kornprobst, P., Cessac, B., Hennig, M. H., & Sernagor, E. (2017). Pan-retinal characterisation of light responses from ganglion cells in the developing mouse retina. Scientific Reports, 7.
Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology, 117(4), 500–544.
Ichinose, T., Fyk-Kolodziej, B., & Cohn, J. (2014). Roles of ON cone bipolar cell subtypes in temporal coding in the mouse retina. Journal of Neuroscience, 34(26), 8761–8771.
Kartsaki, E. (2022). How specific classes of retinal cells contribute to vision: A computational model. PhD diss., Université Côte d'Azur and Newcastle University.
Karvouniari, D., Gil, L., Marre, O., Picaud, S., & Cessac, B. (2019). A biophysical model explains the oscillatory behaviour of immature starburst amacrine cells. Scientific Reports, 9, 1859.
Manu, M., McIntosh, L. T., Kastner, D. B., Naecker, B. N., & Baccus, S. A. (2022). Synchronous inhibitory pathways create both efficiency and diversity in the retina. Proceedings of the National Academy of Sciences, 119(4), e2116589119.
Marr, D. (1982). Vision. W. H. Freeman.
Masland, R. H. (2012a). The neuronal organization of the retina. Neuron, 76(2), 266–280.
Masland, R. H. (2012b). The tasks of amacrine cells. Visual Neuroscience, 29(1), 3–9.
McIntosh, L. T., Maheswaranathan, N., Nayebi, A., Ganguli, S., & Baccus, S. A. (2017). Deep learning models of the retinal response to natural scenes. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, & R. Garnett (Eds.), Advances in neural information processing systems, 29. Curran.
Menz, M. D., Lee, D., & Baccus, S. A. (2020). Representations of the amacrine cell population underlying retinal motion anticipation. bioRxiv.
Morris, C., & Lecar, H. (1981). Voltage oscillations in the barnacle giant muscle fiber. Biophysical Journal, 35(1), 193–213.
Nagumo, J., Arimoto, S., & Yoshizawa, S. (1962). An active pulse transmission line simulating nerve axon. Proceedings of the IRE, 50, 2061–2070.
Pamplona, D., Hilgen, G., Hennig, M., Cessac, B., Sernagor, E., & Kornprobst, P. (2022). Receptive field estimation in large visual neuron assemblies using a super-resolution approach. Journal of Physiology, 127(5), 1334–1347.
Paninski, L. (2003). Convergence properties of three spike-triggered analysis techniques. Network: Computation in Neural Systems, 14(3), 437–464.
Pillow, J. W., Shlens, J., Paninski, L., Sher, A., Litke, A. M., Chichilnisky, E. J., & Simoncelli, E. P. (2008). Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207), 995–999.
Protti, D., Di Marco, S., Huang, J., Vonhoff, C., Nguyen, V., & Solomon, S. (2014). Inner retinal inhibition shapes the receptive field of retinal ganglion cells in primate. Journal of Physiology, 592(1), 49–65.
Roth, B. L. (2016). DREADDs for neuroscientists. Neuron, 89(4), 683–694.
Sanes, J. R., & Masland, R. H. (2015). The types of retinal ganglion cells: Current status and implications for neuronal classification. Annual Review of Neuroscience, 38(1), 221–246.
Schreyer, H. M. (2018). Nonlinearities in bipolar cells and their role for encoding visual signals. PhD diss., Georg-August Universität.
Schröder, C., Klindt, D., Strauss, S., Franke, K., Bethge, M., Euler, T., & Berens, P. (2020). System identification with biophysical constraints: A circuit model of the inner retina. bioRxiv, 2020.06.16.154203.
Schwartz, O., Pillow, J. W., Rust, N. C., & Simoncelli, E. P. (2006). Spike-triggered neural characterization. Journal of Vision, 6(4), 484–507.
Simoncelli, E. P., Paninski, L., Pillow, J., & Schwartz, O. (2004). Characterization of neural responses with stochastic stimuli. In D. Poeppel, G. R. Mangun, & M. S. Gazzaniga (Eds.), The cognitive neurosciences. MIT Press.
Souihel, S., & Cessac, B. (2021). On the potential role of lateral connectivity in retinal anticipation. Journal of Mathematical Neuroscience, 11(3).
Urban, D. J., & Roth, B. L. (2015). DREADDs (designer receptors exclusively activated by designer drugs): Chemogenetic tools with therapeutic utility. Annual Review of Pharmacology and Toxicology, 55(1), 399–417.
Wässle, H. (2004). Parallel processing in the mammalian retina. Nature Reviews Neuroscience, 5(10), 747–757.