Abstract

Motivated by data-rich experiments in transcriptional regulation and sensory neuroscience, we consider the following general problem in statistical inference: when exposed to a high-dimensional signal S, a system of interest computes a representation R of that signal, which is then observed through a noisy measurement M. From a large number of signals and measurements, we wish to infer the “filter” that maps S to R. However, the standard method for solving such problems, likelihood-based inference, requires perfect a priori knowledge of the “noise function” mapping R to M. In practice such noise functions are usually known only approximately, if at all, and using an incorrect noise function will typically bias the inferred filter. Here we show that in the large data limit, this need for a precharacterized noise function can be circumvented by searching for filters that instead maximize the mutual information I[M; R] between observed measurements and predicted representations. Moreover, if the correct filter lies within the space of filters being explored, maximizing mutual information becomes equivalent to simultaneously maximizing every dependence measure that satisfies the data processing inequality. It is important to note that maximizing mutual information will typically leave a small number of directions in parameter space unconstrained. We term these directions diffeomorphic modes and present an equation that allows these modes to be derived systematically. The presence of diffeomorphic modes reflects a fundamental and nontrivial substructure within parameter space, one that is obscured by standard likelihood-based inference.

1.  Introduction

This article discusses a familiar problem in statistical inference but focuses on an understudied limit that is becoming increasingly relevant in the era of large data sets. Consider an experiment having the following form:
$$S \;\xrightarrow{\ \text{filter }\theta\ }\; R \;\xrightarrow{\ \text{noise }\pi\ }\; M. \tag{1.1}$$
When presented with a signal S, a system of interest applies a deterministic filter θ, thereby producing an internal representation R = θ(S) of that signal. For each representation R, a noisy measurement M is then generated. The conditional probability distribution π(M|R) from which M is drawn is called the noise function of the system. From data consisting of N signal-measurement pairs, {(S_n, M_n)} for n = 1, 2, …, N, we wish to reconstruct the filter θ. This article focuses on how to infer θ properly in the N → ∞ limit when the noise function π is unknown a priori.
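This generative structure is simple to simulate. The sketch below draws N signal-measurement pairs from a toy SRM system; the linear filter and Poisson noise function here are illustrative choices, not taken from the applications discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_srm_experiment(theta, noise, n):
    """Simulate N signal-measurement pairs (S_n, M_n) from an SRM system."""
    S = rng.normal(size=(n, theta.size))   # high-dimensional signals
    R = S @ theta                          # deterministic filter: R = theta(S)
    M = noise(R)                           # noisy measurement of each R
    return S, M

theta_true = np.array([1.0, -0.5, 2.0])            # illustrative linear filter
poisson_noise = lambda R: rng.poisson(np.exp(R))   # illustrative noise function
S, M = run_srm_experiment(theta_true, poisson_noise, n=1000)
```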

All statistical regression problems have this SRM form (Bishop, 2006), but we will focus on two biological applications for which this problem is particularly relevant. In neuroscience, SRM experiments are commonly used to characterize the response of neurons to stimuli (Schwartz, Pillow, Rust, & Simoncelli, 2006). For instance, S may be an image to which a retina is exposed, while M is a binary variable (spike or no spike) indicating the response of a single retinal ganglion cell. It is often assumed that the spiking probability depends on a linear projection R of S. The specific probability of a spike given R is determined by the noise function π(M|R).

More recently, analogous experiments have been used to characterize the biophysical mechanisms of transcriptional regulation. In the context of work by Kinney, Murugan, Callan, and Cox (2010), S is the DNA sequence of a transcriptional regulatory region, R is the rate of mRNA transcription produced by this sequence, and M is a (noisy) measurement of the resulting level of gene expression. The filter θ is a function of DNA sequence that reflects the underlying molecular mechanisms of transcript initiation. The noise function π accounts for both biological noise^1 and instrument noise.

The standard approach for solving inference problems like these is to adopt a specific noise function π, then search a space Θ of possible filters for the one filter that maximizes the likelihood p(data | θ, π) ∝ e^{NL(θ,π)}, where

$$\mathcal{L}(\theta, \pi) = \frac{1}{N} \sum_{n=1}^{N} \log \pi(M_n \mid R_n) \tag{1.2}$$
is the per-datum log likelihood. For instance, the method of least squares regression corresponds to maximum likelihood inference assuming a homogeneous gaussian noise function (Bishop, 2006).
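This equivalence is easy to verify numerically. The following sketch (toy data; any noise scale σ works) checks that the filter minimizing the squared error is exactly the filter maximizing the gaussian per-datum log likelihood of equation 1.2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SRM data: scalar signal, linear filter R = theta*S, additive noise.
theta_true = 1.7
S = rng.normal(size=200)
M = theta_true * S + 0.3 * rng.normal(size=200)

thetas = np.linspace(0.0, 3.0, 601)          # candidate filters
sq_err = np.array([np.sum((M - t * S) ** 2) for t in thetas])

# Per-datum log likelihood (eq. 1.2) under a gaussian noise function
# pi(M|R) = Normal(R, sigma^2); the value of sigma is arbitrary here.
sigma = 0.5
loglik = np.array([np.mean(-0.5 * np.log(2 * np.pi * sigma**2)
                           - (M - t * S) ** 2 / (2 * sigma**2))
                   for t in thetas])

assert np.argmin(sq_err) == np.argmax(loglik)   # same optimal filter
```

The log likelihood is a constant minus the squared error divided by 2Nσ², so the two objectives always pick out the same filter, whatever σ is assumed.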

Although the correct filter θ* does indeed maximize L when the correct noise function π* is used, full a priori knowledge of this noise function is rare in practice. Often π is chosen primarily for computational convenience, as is standard with least-squares regression. This can be problematic because using an incorrect π will typically produce bias in the inferred filter, bias that does not disappear in the N → ∞ limit. The reason for this is illustrated in Figure 1.

Figure 1:

Maximizing likelihood with an incorrect noise function will generally bias the inferred filter. The per-datum log likelihood L(θ, π) will typically depend on both the filter θ and the noise function π in a correlated manner (left panel). Values of a schematic L(θ, π) are illustrated in gray, with darker shades indicating larger likelihood. If the correct noise function π* is assumed (solid line), maximizing L will yield the correct filter θ* (filled circle). However, if an incorrect noise function is assumed (dashed line), maximizing L will typically lead to an incorrect filter (open circle).


Sometimes this problem can be partially alleviated by performing a separate calibration experiment in which the noise function π is measured directly. For instance, one might be able to make repeated measurements M for a select number of known representations R. However, there will always be residual measurement error in the estimated noise function that will propagate to the inferred filter θ in a manner that is not properly accounted for by simply plugging this estimate into likelihood calculations via equation 1.2.

An alternative inference procedure (Sharpee, Rust, & Bialek, 2004; Paninski, 2003; Kinney, Tkačik, & Callan, 2007) that circumvents the need for an assumed noise function is to maximize the mutual information (Cover & Thomas, 1991),

$$I[R; M] = \int dR\, dM\; p(R, M) \log \frac{p(R, M)}{p(R)\, p(M)}, \tag{1.3}$$

between predictions R and measurements M.^2 Here, p(R, M) is the empirical joint distribution between predictions and measurements and thus depends implicitly on θ. This method has been proposed, studied, and applied in the specific contexts of receptive field inference (Paninski, 2003; Sharpee et al., 2004, 2006; Pillow & Simoncelli, 2006) and transcriptional regulation (Elemento, Slonim, & Tavazoie, 2007; Kinney, 2008; Kinney et al., 2007, 2010; Melnikov et al., 2012). However, this alternative approach can be applied to a much wider range of statistical regression problems, and a general discussion of how maximizing mutual information relates to maximizing likelihood for arbitrary SRM systems has yet to be presented.

We begin by pointing out that in the N → ∞ limit, maximizing mutual information over θ alone is equivalent to maximizing likelihood over both θ and π. We then prove that when the correct filter θ* lies within the class Θ of filters being considered, maximizing mutual information is also equivalent to simultaneously maximizing every dependence measure that satisfies the data processing inequality (DPI). However, in the absence of a known noise function π, SRM experiments are fundamentally incapable of constraining certain directions in the parameter space of θ; we call these directions diffeomorphic modes. An equation for diffeomorphic modes is described and then applied to filters having various functional forms. In particular, our analysis of a linear-nonlinear filter that Kinney et al. (2010) used to model transcriptional regulation demonstrates how model nonlinearities can eliminate diffeomorphic modes in useful and nonobvious ways. This has important consequences for biophysical studies of transcriptional regulation that use recently developed DNA-sequencing-based assays (Kinney et al., 2010; Melnikov et al., 2012).

Throughout this article, we use R to implicitly denote the representation predicted by the filter θ for signal S; that is, R = θ(S). D is used to denote any DPI-satisfying dependence measure. Representations R are assumed to be multidimensional with components R_i, i = 1, 2, …, dim R. θ is used to denote both a filter and the parameters governing that filter. Θ represents both an abstract space of filters and the space of parameters for filters assumed to have a specific functional form. In the latter case, θ_j denotes coordinates in parameter space and ∂_j ≡ ∂/∂θ_j.

2.  Mutual Information and Likelihood

We begin by discussing the connection between likelihood and mutual information in the N → ∞ limit. In this limit, the per-datum log likelihood, equation 1.2, can be rewritten as

$$\mathcal{L}(\theta, \pi) = \int dR\, dM\; p(R, M) \log \pi(M \mid R) \tag{2.1}$$

$$\phantom{\mathcal{L}(\theta, \pi)} = I[R; M] - D(\theta, \pi) - H[M]. \tag{2.2}$$

The first term, I[R; M], is the mutual information between R and M (see equation 1.3) and is independent of the noise function π. The second term,

$$D(\theta, \pi) = \int dR\; p(R) \int dM\; p(M \mid R) \log \frac{p(M \mid R)}{\pi(M \mid R)}, \tag{2.3}$$

is the Kullback-Leibler (KL) divergence between the empirical distribution p(M|R), which results from the choice of θ, and the assumed noise function π. The last term, H[M] = −∫ dM p(M) log p(M), is the entropy of the measurements M. H[M] is independent of both θ and π and can thus be ignored in the optimization problem.

The key point is that finding maximally informative filters is equivalent to solving the maximum likelihood problem over both filters θ and noise functions π. This is because if θ maximizes I[R; M], simply choosing a noise function that matches the empirical noise function, that is, setting π(M|R) = p(M|R), will minimize D(θ, π) (driving it to zero) and thus maximize L(θ, π).
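For a discrete toy system, the decomposition in equation 2.2 and the optimality of the matching noise function can be verified exactly; the distributions below are arbitrary illustrative choices:

```python
import numpy as np

# Exact discrete SRM system: p(S) uniform, R = theta(S), true noise pi*(M|R).
pS = np.full(4, 0.25)
theta = np.array([0, 0, 1, 1])            # filter: maps each S to an R in {0, 1}
pi_true = np.array([[0.8, 0.2],           # pi*(M|R): rows index R, columns M
                    [0.3, 0.7]])

# Joint distribution p(R, M) induced by the filter.
pRM = np.zeros((2, 2))
for s in range(4):
    pRM[theta[s]] += pS[s] * pi_true[theta[s]]
pR = pRM.sum(axis=1, keepdims=True)
pM = pRM.sum(axis=0, keepdims=True)
pMgR = pRM / pR                           # empirical p(M|R)

I = np.sum(pRM * np.log(pRM / (pR * pM)))   # mutual information, eq. 1.3
H = -np.sum(pM * np.log(pM))                # entropy of measurements

def L(pi):   # per-datum log likelihood, eq. 1.2, in the N -> infinity limit
    return np.sum(pRM * np.log(pi))

def D(pi):   # KL divergence of eq. 2.3
    return np.sum(pRM * np.log(pMgR / pi))

pi_wrong = np.array([[0.5, 0.5], [0.1, 0.9]])
for pi in (pi_true, pi_wrong):
    assert np.isclose(L(pi), I - D(pi) - H)    # decomposition of eq. 2.2
assert np.isclose(D(pMgR), 0.0)                # matching noise function: D = 0
```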

If one can formalize prior assumptions about the noise function using a Bayesian prior p(π), the relevant objective function becomes the per-datum marginal likelihood:

$$\mathcal{L}_m(\theta) = \frac{1}{N} \log \int d\pi\; p(\pi)\, e^{N \mathcal{L}(\theta, \pi)}. \tag{2.4}$$

This is the analogue of equation 1.2 computed after all possible noise functions have been integrated out. As has been shown in previous work (Kinney et al., 2007; Rajan, Marre, & Tkačik, 2013), maximizing marginal likelihood and maximizing mutual information are essentially equivalent in the N → ∞ limit. This can be seen by decomposing L_m as

$$\mathcal{L}_m(\theta) = I[R; M] - H[M] - \Delta(\theta), \tag{2.5}$$

where

$$\Delta(\theta) = -\frac{1}{N} \log \int d\pi\; p(\pi)\, e^{-N D(\theta, \pi)}. \tag{2.6}$$

Under weak assumptions about the prior p(π), Δ(θ) → 0 as N → ∞ (see appendix A).^3

3.  DPI-Optimal Filters

Mutual information is just one measure among many that satisfy DPI (see appendix  B). In this section, we discuss the importance of DPI for the SRM inference problem and introduce the notion of DPI-optimal filters.

Paninski (2003) has argued as follows for using DPI-satisfying dependence measures as objective functions for inferring filters. If θ* is the correct filter in an SRM experiment, then for every filter θ,

$$R_\theta \leftarrow S \rightarrow R_{\theta^*} \rightarrow M \tag{3.1}$$

is a Markov chain. This implies D[R_{θ*}; M] ≥ D[R_θ; M] for every DPI-satisfying measure D. If θ* resides within the space Θ of filters being explored, it must therefore fall within the subset Θ_D ⊆ Θ on which D is maximized. As a simple extension of this argument, we point out that because θ* maximizes all DPI-satisfying measures, θ* must actually lie within the intersection of all such sets, that is,

$$\theta^* \in \Theta_{\mathrm{DPI}} \equiv \bigcap_{D} \Theta_D. \tag{3.2}$$

Filters in Θ_DPI can properly be said to be DPI-optimal.
This raises an important question: Would optimizing a variety of different measures D, not just mutual information, narrow the search for θ*? Here we show that the answer is no; when θ* ∈ Θ, maximizing mutual information is equivalent to simultaneously maximizing every DPI-satisfying measure, that is,

$$\Theta_I = \Theta_{\mathrm{DPI}}. \tag{3.3}$$
To prove this, we first define on the space of all possible filters a weak and a strong partial ordering, as well as an equivalence relation. These mathematical structures are a natural consequence of DPI. For any two filters θ and θ', we write^4

$$\theta \succeq \theta' \iff D[R_\theta; M] \ge D[R_{\theta'}; M] \text{ for every DPI-satisfying measure } D, \tag{3.4}$$

$$\theta \succ \theta' \iff \theta \succeq \theta' \text{ and } D[R_\theta; M] > D[R_{\theta'}; M] \text{ for at least one } D, \tag{3.5}$$

$$\theta \cong \theta' \iff \theta \succeq \theta' \text{ and } \theta' \succeq \theta. \tag{3.6}$$

Note that θ ⪯ θ' if R_θ ← S → R_{θ'} → M is a Markov chain. The set Θ_DPI of DPI-optimal filters is the supremum of Θ under this partial ordering. The equivalence θ ≅ θ*, which occurs when I[R_θ; M] = I[R_{θ*}; M], follows directly from the fact, proven in appendix C, that θ ≺ θ* implies I[R_θ; M] < I[R_{θ*}; M]. We note that this is not true for all DPI-satisfying measures. For instance, the trivial measure D = 0 satisfies DPI but reveals no information about whether a given θ resides in Θ_DPI. These results are illustrated in Figure 2.
Figure 2:

Venn diagram illustrating filter sets maximizing different DPI-satisfying measures. In general, different DPI-satisfying dependence measures (e.g., mutual information I and some other measure D) will be maximized by different sets of filters, respectively represented here by Θ_I and Θ_D. Θ_DPI is the intersection of the optimal sets of all such DPI-satisfying measures. Mutual information has the important property that Θ_I = Θ_DPI whenever θ* ∈ Θ; this is not true of all DPI-satisfying measures.


4.  Diffeomorphic Modes

Whether or not two filters θ and θ' satisfy the above equivalence relation (equation 3.6) can depend on the true filter θ* and the specific noise function π of the SRM experiment. However, certain pairs of filters will satisfy θ ≅ θ' under all SRM experiments. We will refer to such pairs of filters as being information equivalent. In appendix D, we prove that two filters are information equivalent if and only if their predicted representations are related by an invertible transformation.

As an objective function, mutual information is inherently incapable of distinguishing between information equivalent filters. In practice, this means that selecting maximally informative filters from a parameterized set of filters can leave some directions in parameter space unconstrained. Here we term these directions diffeomorphic modes.

The diffeomorphic modes of linear filters have an important and well-recognized consequence in neuroscience: the technique of maximally informative dimensions can identify only the relevant subspace of signal space, not a specific basis within that subspace (Sharpee et al., 2004; Paninski, 2003; Pillow & Simoncelli, 2006). However, an interesting twist occurs in applications to transcriptional regulation. Here, linear filters are often used to model the sequence-dependent binding energies of proteins to DNA (Stormo, 2013). Any mechanistic hypothesis about how DNA-bound proteins interact with one another predicts that the transcription rate will depend on these binding energies in a specific nonlinear manner (Bintu et al., 2005; Stormo, 2013). Such upfront knowledge about the nonlinearities of linear-nonlinear filters can eliminate diffeomorphic modes of the underlying linear filters in useful and nonobvious ways (Kinney, 2008; Kinney et al., 2010).

4.1.  An Equation for Diffeomorphic Modes.

Consider a filter θ, representing a point in Θ, whose parameters are infinitesimally transported along a vector field g having components g_j(θ). This yields a new filter θ' with components θ'_j = θ_j + g_j dt. If the representation R predicted by θ for a specified signal S has components R_i in representation space, these will be transformed to R'_i = R_i + δR_i, where δR_i = Σ_j g_j (∂R_i/∂θ_j) dt.

If the vector field g represents a diffeomorphic mode of θ, this transformation must be invertible, meaning the values δR_i cannot depend on S except through the value of R. This is a nontrivial condition because ∂R_i/∂θ_j can depend on the underlying signal S in an arbitrary manner. However, if δR_i does indeed depend only on the value of R, then

$$\sum_j g_j \frac{\partial R_i}{\partial \theta_j} = h_i(R) \tag{4.1}$$

for some vector function h(R). This is the equation that any diffeomorphic mode must satisfy.

4.2.  General Linear Filters.

We now use equation 4.1 to derive the diffeomorphic modes of general linear filters. By definition, a linear filter yields a representation R that is a linear combination of signal "features" f_μ(S), that is,

$$R_i = \sum_\mu \theta_{i\mu} f_\mu(S). \tag{4.2}$$

As is standard with regression problems (Bishop, 2006), the term linear describes how R depends on the parameters θ_{iμ}; the features f_μ(S) need not be linear functions of S.
To find the diffeomorphic modes of these filters, we apply the operator Σ_{jμ} g_{jμ} ∂/∂θ_{jμ} to both sides of equation 4.2. Using equation 4.1, we then find Σ_μ g_{iμ} f_μ(S) = h_i(R). The left-hand side is linear in the signal features, so unless something unusual happens,^5 h_i must also be a linear function of R, with the form

$$h_i(R) = a_i + \sum_j b_{ij} R_j. \tag{4.3}$$

The number of diffeomorphic modes is bounded above by the number of independent parameters on which h depends (at each θ).^6 For a general linear filter, we see that there can be no more than dim R + (dim R)² diffeomorphic modes, which is the number of parameters a_i and b_ij in equation 4.3. This bound is independent of the number of signal features, that is, of the dimensionality of S. In particular, if R is a scalar, then h = a + bR. In this case we observe two diffeomorphic modes, corresponding to additive and multiplicative transformations of R.
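These two modes are easy to exhibit numerically: for a scalar linear filter, applying any invertible affine transformation to R leaves I[R; M] exactly unchanged. A sketch with simulated data (the binary features and threshold measurement are arbitrary choices):

```python
import numpy as np

def discrete_mi(r, m):
    """I[R; M] for samples of two discrete variables, in nats."""
    rv, ri = np.unique(r, return_inverse=True)
    mv, mi = np.unique(m, return_inverse=True)
    joint = np.zeros((rv.size, mv.size))
    np.add.at(joint, (ri, mi), 1.0)       # count co-occurrences
    p = joint / joint.sum()
    pr = p.sum(axis=1, keepdims=True)
    pm = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (pr @ pm)[nz])))

rng = np.random.default_rng(2)
S = rng.integers(0, 2, size=(10_000, 3))          # binary signal features
theta = np.array([1.0, -2.0, 0.5])
R = S @ theta                                     # scalar linear filter
M = (R + rng.normal(size=R.size) > 0).astype(int) # noisy binary measurement

# Additive and multiplicative transformations of R (the two diffeomorphic
# modes of a scalar linear filter) leave I[R; M] unchanged.
assert np.isclose(discrete_mi(R, M), discrete_mi(3.0 * R + 7.0, M))
```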

4.3.  A Linear-Nonlinear Filter.

Kinney et al. (2010) performed experiments probing the biophysical mechanism of transcriptional activation at the Escherichia coli lac promoter (see Figure 3A). These experiments are of the SRM form: S is the DNA sequence of a mutated lac promoter, M is a measurement of the resulting gene expression, and the mRNA transcription rate T is the internal representation of the system. Linear filters were used to model the DNA binding energies Q and P of the two proteins CRP and RNAP. The specific parametric form used for these filters was

$$Q(S) = \theta^Q_0 + \sum_{b,l} \theta^Q_{bl} S_{bl} \quad \text{(and analogously for } P\text{)}, \tag{4.4}$$

where b indexes the four possible DNA bases (A, C, G, T), l indexes nucleotide positions within the 75 bp promoter DNA region, S_bl = 1 if base b occurs at position l, and S_bl = 0 otherwise.^7
Figure 3:

A linear-nonlinear filter modeling the biophysics of transcriptional regulation at the Escherichia coli lac promoter. (A) The biophysical model inferred by Kinney et al. (2010) from Sort-Seq data. Each signal S is a 75 bp DNA sequence differing from the wildtype lac promoter by approximately 9 randomly scattered substitution mutations. Q and P denote the DNA-sequence-dependent binding energies of the proteins CRP and RNAP to their respective sites on this sequence S; both Q and P were modeled as linear filters of S. γ is a sequence-independent interaction energy between CRP and RNAP. The resulting transcription rate T, of which the Sort-Seq assay produces noisy measurements M, is assumed to depend on Q, P, and γ in a specific nonlinear manner dictated by the hypothesized biophysical mechanism (see equation 4.5); all energies are in units of kBT. (B) The linear filter Q is defined by parameters θ^Q_{bl} and θ^Q_0 via equation 4.4. Inferring these parameters by maximizing the mutual information I[Q; M] determines θ^Q_{bl} up to an unknown scale and leaves θ^Q_0 undetermined. (C) Analogous results are obtained for the parameters θ^P_{bl} and θ^P_0 when I[P; M] is maximized. (D) Because of the inherent nonlinearity in equation 4.5 (right-hand side), maximizing I[T; M] breaks diffeomorphic modes, fixing the values of θ^Q_{bl}, θ^P_{bl}, and θ^Q_0 in units of kBT. The parameter θ^P_0 remains undetermined.


Measurements M were taken for mutant lac promoters S. These data were then used to fit a model for the DNA-sequence-dependent binding energy Q of CRP. This was done by maximizing I[Q; M]. Because of the diffeomorphic modes of Q, the parameters θ^Q_{bl} were inferred up to an unknown scale, and the additive constant θ^Q_0 was left undetermined. This is shown in Figure 3B. Analogous results were obtained for RNAP (see Figure 3C).

Next, a full thermodynamic model of transcriptional regulation was proposed and fit to the data. Based on the hypothesized biophysical mechanism, the transcription rate T was assumed to depend on S via

$$T = T_{\mathrm{sat}}\, \frac{R}{1 + R}, \qquad R = e^{-P}\, \frac{1 + e^{-Q - \gamma}}{1 + e^{-Q}}. \tag{4.5}$$

This quantity R is called the regulation factor of the promoter (Bintu et al., 2005). Because R is an invertible function of T, it serves equally well as the representation of the SRM system. In the following analysis, we work with R instead of T due to its simpler functional form.

When the parameters of the linear filters P and Q were simultaneously fit to data by maximizing I[T; M] (or, equivalently, maximizing I[R; M]), three of the four diffeomorphic modes described above were eliminated (see Figure 3D). Specifically, the overall scale of the parameters θ^Q_{bl} and θ^P_{bl} was fixed, allowing binding energy predictions for CRP and RNAP in physically meaningful units of kBT. The parameter θ^Q_0, corresponding to the intracellular concentration of CRP, was also fixed by the data. The only diffeomorphic mode left unbroken was θ^P_0, corresponding to the intracellular concentration of RNAP.

We now show how the nonlinearity in R was able to break three of the four diffeomorphic modes of P and Q. First, observe that any diffeomorphic mode of a linear-nonlinear filter must also be a diffeomorphic mode of each individual linear filter if, as here, the linear filters are independent functions of S. This means any diffeomorphic mode g_i of the full thermodynamic model for R must satisfy

$$\sum_i g_i \frac{\partial R}{\partial \theta_i} = (a_Q + b_Q Q) \frac{\partial R}{\partial Q} + (a_P + b_P P) \frac{\partial R}{\partial P} + g_\gamma \frac{\partial R}{\partial \gamma} = h(R) \tag{4.6}$$

for coefficients a_Q, b_Q, a_P, b_P, and g_γ which do not depend on S. Evaluating the right-hand-side derivatives and substituting for P in terms of Q and R, we find

$$h(R) = R \left[ (a_Q + b_Q Q)\, \frac{(1 - e^{-\gamma})\, e^{-Q}}{(1 + e^{-Q})(1 + e^{-Q-\gamma})} - g_\gamma\, \frac{e^{-Q-\gamma}}{1 + e^{-Q-\gamma}} - a_P - b_P P \right], \tag{4.7}$$

where P = log[(1 + e^{-Q-γ})/(1 + e^{-Q})] − log R. For g to be a diffeomorphic mode, the right-hand side must be independent of S for fixed R. The terms dependent on Q must therefore vanish, rendering a_Q = b_Q = b_P = g_γ = 0.^8 Any diffeomorphic modes g_i must therefore satisfy Σ_i g_i ∂R/∂θ_i = −a_P R. Thus, only one mode remains, corresponding to an additive shift in the binding energy P.
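The broken and unbroken modes can be checked directly on the regulation factor. The sketch below assumes the standard activation form R = e^{-P}(1 + e^{-Q-γ})/(1 + e^{-Q}) and an arbitrary illustrative value of γ: an additive shift in P rescales R invertibly, whereas rescaling Q cannot be expressed as any transformation of R alone.

```python
import numpy as np

def reg_factor(Q, P, gamma):
    """Regulation factor R of eq. 4.5 (all energies in units of kBT)."""
    return np.exp(-P) * (1 + np.exp(-Q - gamma)) / (1 + np.exp(-Q))

rng = np.random.default_rng(3)
Q = rng.uniform(-3, 3, size=1000)
P = rng.uniform(-3, 3, size=1000)
gamma = -3.0          # illustrative attractive CRP-RNAP interaction energy

R = reg_factor(Q, P, gamma)

# Additive shifts in P rescale R invertibly: this mode stays diffeomorphic.
assert np.allclose(reg_factor(Q, P + 1.0, gamma), np.exp(-1.0) * R)

# Rescaling Q is NOT expressible as a transformation of R alone: two promoters
# with identical R can disagree after Q -> 2Q, so this mode is broken.
def f(q):
    return (1 + np.exp(-q - gamma)) / (1 + np.exp(-q))

Q1, Q2, P1 = 0.0, 2.0, 1.0
P2 = P1 + np.log(f(Q2) / f(Q1))          # chosen so both promoters share one R
assert np.isclose(reg_factor(Q1, P1, gamma), reg_factor(Q2, P2, gamma))
assert not np.isclose(reg_factor(2 * Q1, P1, gamma),
                      reg_factor(2 * Q2, P2, gamma))
```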

5.  Discussion

Likelihood-based inference masks the fundamentally different ways in which data constrain the parameters that lie along diffeomorphic modes versus those that lie along nondiffeomorphic modes. Standard likelihood inference constrains all model parameters, including both diffeomorphic and nondiffeomorphic modes, with error bars that scale as N^{−1/2}.^9 These constraints will be consistent with the correct underlying filter θ* when the correct noise function π* is used (see Figure 4A). However, use of an incorrect noise function will typically cause θ* to fall outside the error bars inferred along both diffeomorphic and nondiffeomorphic modes (see Figure 4B).

Figure 4:

Schematic illustration of constraints placed on diffeomorphic and nondiffeomorphic modes by different objective functions. The filled circle in each panel represents the correct filter θ*; shades of gray represent the posterior distribution p(θ | data). (A, B) Likelihood (see equation 1.2) places tight constraints (scaling as N^{−1/2} as N → ∞) along both diffeomorphic and nondiffeomorphic modes. (A) θ* will typically lie within error bars if the correct noise function is used. (B) However, if an incorrect noise function is used, θ* will generally violate inferred constraints along both diffeomorphic and nondiffeomorphic modes. (C) Marginal likelihood (see equation 2.4) computed using a sufficiently weak prior p(π) will place tight constraints on nondiffeomorphic modes and weak constraints (scaling as N^0 as N → ∞) along diffeomorphic modes. (D) Mutual information (see equation 1.3) places tight constraints on nondiffeomorphic modes but provides no constraints whatsoever on diffeomorphic modes.


This problem is rectified if we use a prior p(π) that reflects our uncertainty about what the true noise function is. From equation 2.5, it can be seen that using the resulting marginal likelihood to compute a posterior distribution on θ will constrain diffeomorphic and nondiffeomorphic modes in fundamentally different ways (see Figure 4C). Nondiffeomorphic modes will be constrained by I[R; M], which remains finite in the large N limit. This produces error bars on nondiffeomorphic modes comparable to those produced by likelihood when the correct noise function is used. However, constraints along diffeomorphic modes will come only from Δ(θ). Because Δ(θ) vanishes as N^{−1}, diffeomorphic constraints become independent of N once N is sufficiently large.^10

Fortunately, one does not need to posit a specific prior probability over all possible noise functions in order to confidently infer filters from SRM data. Using mutual information as an objective function instead of likelihood, that is, sampling filters according to p(θ | data) ∝ e^{N I[R; M]}, will constrain nondiffeomorphic modes the same way that marginal likelihood does while putting no constraints along diffeomorphic modes (see Figure 4D).

One might worry that a large fraction of filter parameters will be diffeomorphic and that the analysis of SRM experiments will require an assumed noise function in order to obtain useful results, even if doing so yields unreliable error bars. Such situations are conceivable, but in practice this is often not the case. We have shown that for linear filters, the number of diffeomorphic modes will typically not exceed dim R + (dim R)², regardless of how large the dimensionality of S is. Some of these diffeomorphic modes may also be eliminated if these linear filters are combined using a nonlinearity of known functional form. Indeed, of the 204 independent parameters comprising the biophysical model of transcriptional regulation inferred by Kinney et al. (2010), only one was diffeomorphic.

A bigger concern, perhaps, is the practical difficulty of using mutual information as an objective function. Specifically, it remains unclear how to compute I[R; M] rapidly and reliably enough to confidently sample from p(θ | data) ∝ e^{N I[R; M]}. Still, various methods for estimating mutual information are available (Khan et al., 2007; Panzeri, Senatore, Montemurro, & Petersen, 2007), and the information optimization problem has been successfully implemented using a variety of techniques (Sharpee et al., 2004, 2006; Kinney et al., 2007, 2010; Melnikov et al., 2012). We believe the exciting applications of mutual-information-based inference provide compelling motivation for making progress on these practical issues.

Appendix A:  Marginal Likelihood

In certain cases, Δ(θ) can be computed explicitly and thereby be shown to vanish (Kinney et al., 2007). More generally, when π is taken to be finite dimensional, a saddle-point computation (valid for large N) gives

$$\Delta(\theta) \approx -\frac{1}{N} \log p(\hat{\pi}) + \frac{1}{2N} \log \det\!\left( \frac{N \mathcal{H}}{2\pi} \right),$$

where π̂(M|R) = p(M|R) is the empirical noise function. Here, 𝓗 is the π-space Hessian of D(θ, π) evaluated at π = π̂. If p(π̂) and its derivatives are bounded, then the θ-dependent part of Δ decays as N^{−1}. If π is infinite dimensional, this saddle-point computation becomes a semiclassical computation in field theory akin to the density estimation problem studied by Bialek, Callan, and Strong (1996). If this field theory is properly formulated through an appropriate choice of p(π), then Δ may exhibit different decay behavior, but will still vanish as N → ∞. See also Rajan et al. (2013).

Appendix B:  DPI-Satisfying Measures

DPI is satisfied by all measures of the F-information form (Csiszár & Shields, 2004; Kinney & Atwal, 2013),

$$I_F[R; M] = \int dR\, dM\; p(R)\, p(M)\, F\!\left( \frac{p(R, M)}{p(R)\, p(M)} \right), \tag{B.1}$$

where F(x) is a convex function for x > 0. Mutual information corresponds to F(x) = x log x, whereas F(x) = (x^α − 1)/(α − 1) yields a more general Rényi-type information measure (Rényi, 1961) that reduces to mutual information when α → 1. DPI-satisfying measures other than mutual information have been used for filter inference in a number of works, including Paninski (2003) and Kouh and Sharpee (2009). A discussion of the differences between DPI-satisfying measures and some non-DPI-satisfying measures can be found in Kinney and Atwal (2013).
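DPI for such measures can be spot-checked numerically on a random discrete Markov chain X → Y → Z (a toy stand-in for the chains of section 3); the alphabets and distributions below are arbitrary:

```python
import numpy as np

def F_information(pxy, F):
    """I_F[X; Y] = sum_{x,y} p(x) p(y) F( p(x,y) / (p(x)p(y)) ), eq. B.1."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    return float(np.sum(px * py * F(pxy / (px * py))))

rng = np.random.default_rng(4)
px = rng.dirichlet(np.ones(3))                 # p(x)
pygx = rng.dirichlet(np.ones(4), size=3)       # p(y|x), one row per x
pzgy = rng.dirichlet(np.ones(3), size=4)       # p(z|y), one row per y

pxy = px[:, None] * pygx                       # joint p(x, y)
pxz = pxy @ pzgy                               # p(x, z); z depends on y alone

F_mi = lambda x: x * np.log(x)                 # mutual information
F_chi2 = lambda x: (x - 1.0) ** 2              # another convex choice of F
for F in (F_mi, F_chi2):
    # DPI: processing Y into Z cannot increase F-information with X.
    assert F_information(pxz, F) <= F_information(pxy, F) + 1e-12
```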

Appendix C:  DPI Optimality

Assume θ ≺ θ* in the sense of equation 3.5. Because R_θ ← S → R_{θ*} → M is a Markov chain, the KL divergence between p(M | R_{θ*}, R_θ) and p(M | R_θ), averaged over p(R_{θ*}, R_θ), can be decomposed as I[R_{θ*}; M] − I[R_θ; M]. If this quantity is zero, then R_{θ*} → R_θ → M is also a Markov chain, implying θ ⪰ θ*, a contradiction. This KL divergence must therefore be positive, that is, I[R_θ; M] < I[R_{θ*}; M]. So if I[R_θ; M] = I[R_{θ*}; M], then D[R_θ; M] = D[R_{θ*}; M] for every DPI-satisfying measure D as well. This proves Θ_I = Θ_DPI when θ* ∈ Θ.

Appendix D:  Information Equivalence

First, we observe that if θ1 and θ2 make isomorphic predictions, then they are information equivalent. This is readily shown from the fact that D[R; M] is invariant under arbitrary invertible transformations of R (Kinney & Atwal, 2013). Next, we show the converse: if θ1 and θ2 are information equivalent, the predictions R1 and R2 must be isomorphic. Here is the proof. If θ1 ≅ θ2, then D[R1; M] = D[R2; M] for all DPI-satisfying D, and in particular I[R1; M] = I[R2; M]. In appendix C, we showed that this equality implies R1 → R2 → M is a Markov chain. Imagining an SRM experiment in which θ* = θ1 and M = R1, we find that R1 → R2 → R1 is a Markov chain. This implies that the mapping R2 → R1 is one-to-one. Similarly, R1 → R2 is one-to-one. R1 and R2 are therefore related by a bijection.

Acknowledgments

We thank William Bialek, Curtis Callan, Bud Mishra, Swagatam Mukhopadhyay, Anand Murugan, Michael Schatz, Bruce Stillman, and Gašper Tkačik for helpful conversations. Support for this project was provided by the Simons Center for Quantitative Biology at Cold Spring Harbor Laboratory.

References

Bialek, W., Callan, C., & Strong, S. (1996). Field theories for learning probability distributions. Phys. Rev. Lett., 77(23), 4693–4697.

Bintu, L., Buchler, N., Garcia, H., Gerland, U., Hwa, T., Kondev, J., & Phillips, R. (2005). Transcriptional regulation by the numbers: Models. Curr. Opin. Genet. Dev., 15(2), 116–124.

Bishop, C. (2006). Pattern recognition and machine learning. New York: Springer.

Cover, T., & Thomas, J. (1991). Elements of information theory. New York: Wiley.

Csiszár, I., & Shields, P. C. (2004). Information theory and statistics: A tutorial. Hanover, MA: Now Publishers.

Elemento, O., Slonim, N., & Tavazoie, S. (2007). A universal framework for regulatory element discovery across all genomes and data types. Mol. Cell, 28(2), 337–350.

Elowitz, M. B., Levine, A. J., Siggia, E. D., & Swain, P. S. (2002). Stochastic gene expression in a single cell. Science, 297(5584), 1183–1186.

Khan, S., Bandyopadhyay, S., Ganguly, A., Saigal, S., Erickson III, D., Protopopescu, V., & Ostrouchov, G. (2007). Relative performance of mutual information estimation methods for quantifying the dependence among short and noisy data. Phys. Rev. E, 76(2), 026209.

Kinney, J. B. (2008). Biophysical models of transcriptional regulation from sequence data. Doctoral dissertation, Princeton University.

Kinney, J. B., & Atwal, G. S. (2013). Equitability, mutual information, and the maximal information coefficient. arXiv:1301.7745.

Kinney, J. B., Murugan, A., Callan, C. G., & Cox, E. (2010). Using deep sequencing to characterize the biophysical mechanism of a transcriptional regulatory sequence. Proc. Natl. Acad. Sci. USA, 107(20), 9158–9163.

Kinney, J. B., Tkačik, G., & Callan, C. G. (2007). Precise physical models of protein-DNA interaction from high-throughput data. Proc. Natl. Acad. Sci. USA, 104(2), 501–506.

Kouh, M., & Sharpee, T. (2009). Estimating linear-nonlinear models using Rényi divergences. Network, 20(2), 49–68.

Melnikov, A., Murugan, A., Zhang, X., Tesileanu, T., Wang, L., Rogov, P., … Mikkelsen, T. S. (2012). Systematic dissection and optimization of inducible enhancers in human cells using a massively parallel reporter assay. Nat. Biotechnol., 30(3), 271–277.

Paninski, L. (2003). Convergence properties of three spike-triggered analysis techniques. Network, 14(3), 437–464.

Panzeri, S., Senatore, R., Montemurro, M. A., & Petersen, R. S. (2007). Correcting for the sampling bias problem in spike train information measures. J. Neurophysiol., 98(3), 1064–1072.

Pillow, J. W., & Simoncelli, E. P. (2006). Dimensionality reduction in neural models: An information-theoretic generalization of spike-triggered average and covariance analysis. J. Vis., 6(4), 414–428.

Rajan, K., Marre, O., & Tkačik, G. (2013). Learning quadratic receptive fields from neural responses to natural stimuli. Neural Comput., 25(7), 1661–1692.

Rényi, A. (1961). On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability (Vol. 1, pp. 547–561). Berkeley: University of California Press.

Schwartz, O., Pillow, J. W., Rust, N. C., & Simoncelli, E. P. (2006). Spike-triggered neural characterization. J. Vis., 6(4), 484–507.

Sharpee, T., Rust, N. C., & Bialek, W. (2004). Analyzing neural responses to natural signals: Maximally informative dimensions. Neural Comput., 16(2), 223–250.

Sharpee, T. O., Sugihara, H., Kurgansky, A. V., Rebrik, S. P., Stryker, M. P., & Miller, K. D. (2006). Adaptive filtering enhances information transmission in visual cortex. Nature, 439(7079), 936–942.

Stormo, G. D. (2013). Modeling the specificity of protein-DNA interactions. Quant. Biol., 1(2), 115–130.
).
On measures of entropy and information
. In
Proc. 4th Berkeley Symp. Math. Statist. and Prob. 1
(pp.
547
561
).
Berkeley
:
University of California Press
.
Schwartz
,
O.
,
Pillow
,
J.
,
Rust
,
N.
, &
Simoncelli
,
E.
(
2006
).
Spike-triggered neural characterization
.
J. Vis.
,
6
(
4
),
484
507
.
Sharpee
,
T.
,
Rust
,
N.
, &
Bialek
,
W.
(
2004
).
Analyzing neural responses to natural signals: Maximally informative dimensions
.
Neural Comput.
,
16
(
2
),
223
250
.
Sharpee
,
T.
,
Sugihara
,
H.
,
Kurgansky
,
A.
,
Rebrik
,
S.
,
Stryker
,
M.
, &
Miller
,
K.
(
2006
).
Adaptive filtering enhances information transmission in visual cortex
.
Nature
,
439
(
7079
),
936
942
.
Stormo
,
G. D.
(
2013
).
Introduction to protein-DNA interactions: Structure, thermodynamics, and bioinformatics.
New York
:
Cold Spring Harbor Laboratory Press
.

Notes

1

Such as stochastic gene expression (Elowitz, Levine, Siggia, & Swain, 2002).

2

The notation I[M; R] and I[R; M] will be used interchangeably.

3

For example, does not vanish at the true noise function .

4

The subscripts 1 and 2 label two different filters, not two parameters of a single filter.

5

For example, if the various features exhibit complicated interdependencies, either because of their functional form or because signals S are restricted to a particular subspace. We ignore such possibilities here.

6

Technically, the number of diffeomorphic modes is the number of independent vector fields gi that correspond to such transformations. However, here we consider only proper diffeomorphic modes, not gauge transformations; as in physics, we define gauge transformations to be vector fields gi along which transformation of the filter leaves all predicted representations invariant.

7

To fix the gauge freedoms of these filters, Kinney et al. (2010) adopted the convention that for all positions l.

8

This assumes that CRP actually interacts with RNAP, which is indeed the case.

9

In this discussion, we ignore gauge parameters, which do not alter model predictions and are therefore nonidentifiable.

10

More precisely, given any direction i in filter space, for N large enough.