## Abstract

Barlow (1985) hypothesized that the co-occurrence of two events $A$ and $B$ is “suspicious” if $P(A,B)\gg P(A)P(B)$. We first review classical measures of association for $2\times2$ contingency tables, including Yule's $Y$ (Yule, 1912), which depends only on the odds ratio $\lambda$ and is independent of the marginal probabilities of the table. We then discuss the mutual information (MI) and pointwise mutual information (PMI), which depend on the ratio $P(A,B)/P(A)P(B)$, as measures of association. We show that once the effect of the marginals is removed, MI and PMI behave similarly to $Y$ as functions of $\lambda$. The pointwise mutual information is used extensively in some research communities for flagging suspicious coincidences. We discuss the pros and cons of using it in this way, bearing in mind the sensitivity of the PMI to the marginals, with increased scores for sparser events.

## 1 Introduction

Barlow (1985) hypothesized that “the cortex behaves like a gifted detective, noting suspicious coincidences in its afferent input, and thereby gaining knowledge of the non-random, causally related, features in its environment.” More specifically, he wrote (p. 40):

The coincident occurrence of two events $A$ and $B$ is “suspicious” if they occur jointly more than would be expected from the probabilities of their individual occurrence, i.e. the coincidence $A\&B$ is suspicious if $P(A\&B)\gg P(A)\times P(B)$.

^{1} Any detective knows that, for a coincidence to be suspicious, the events themselves must be rare ones, and that if they are rare enough, even a single occurrence is significant.

Edelman, Hiles, Yang, and Intrator (2002) state the *principle of suspicious coincidences* as one whereby “two candidate fragments $A$ and $B$ should be combined into a composite object if the probability of their joint appearance $P(A,B)$ is much higher than $P(A)P(B)$.”

The fundamental problem here is to detect if there is a significant association between events $A$ and $B$. This can arise in many different contexts—for example:

- An animal detecting that eating a certain plant is associated with subsequent illness

- Detecting that a certain drug is associated with a particular adverse drug reaction

- Detecting the association between a visual stimulus that contains an image of the subject's grandmother or not and the response of a putative “grandmother cell”

- Detecting that particular successive words in text are associated more frequently than by chance, called a *collocation*, an example being the bigram “carbon dioxide”

- A geneticist determining that two genes are in linkage disequilibrium (Lewontin, 1964)

- Detecting that the pattern of two edges in a visual scene making a corner junction occurs more frequently than by chance

Below we review various measures of association from the literature, notably Yule's $Y$ (Yule, 1912), which depends solely on the odds ratio and is invariant to the marginal distributions of the two variables. We then discuss measures of association based on the mutual information and pointwise mutual information, which make use of the ratio $P(A,B)/P(A)P(B)$, as proposed by Barlow and others across diverse literatures. Finally, we consider the pros and cons of using pointwise mutual information (PMI) to flag suspicious coincidences and discuss its estimation from data when (some of) the counts in the table are low.

## 2 $2\xd72$ Contingency Tables

### 2.1 Estimation from Data

Equation 2.1 is given in terms of probabilities such as $p_{01}$. However, observational data do not directly provide such probabilities but counts associated with the corresponding cells. The maximum likelihood estimator (MLE) for $p_{ij}$ is, of course, $n_{ij}/n$, where $n_{ij}$ is the count associated with cell $ij$, and $n$ is the total number of counts. The MLE has well-known issues when (some of) the counts are small. Bayesian approaches to address this are discussed in section 5.

## 3 Classical Measures of Association

For two gaussian continuous random variables, there is a natural measure of their association, the correlation coefficient. This is independent of the individual (marginal) variances of each variable, and lies in the interval $[-1,1]$.

There are a number of desirable properties for a measure of association $\eta $ between binary variables. For example, Hasenclever and Scholz (2016, p. 22) list these:

- $\eta$ is zero on independent tables.

- $\eta$ is a strictly increasing function of the odds ratio when restricted to tables with fixed margins.

- $\eta$ respects the symmetry group $D_4$, namely, $\eta$ is symmetric in the markers (i.e., invariant to matrix transposition), and $\eta$ changes sign when the states of a marker are transposed (row or column transposition).

- The range of the function is restricted to $(-1,1)$.

As well as Yule's $Y$,^{2} several other measures of association have been proposed; indeed, Tan, Kumar, and Srivastava (2004) list 21. Other measures of association include Lewontin's $D'$ (1964), which standardizes $D$ from equation 2.5 by dividing it by the maximum value it can take on, which depends on the marginals of the table, and the binary correlation coefficient $r$, which standardizes $D$ by $\sqrt{p_{0\cdot}\,p_{\cdot0}\,p_{1\cdot}\,p_{\cdot1}}$. For the canonical table, it turns out that $D'=r=Y$.
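The relation $D'=r=Y$ on a canonical table can be checked numerically. The following is a minimal sketch (the function name and table values are illustrative, not from the original): it computes the odds ratio $\lambda$, Yule's $Y=(\sqrt{\lambda}-1)/(\sqrt{\lambda}+1)$, Lewontin's $D'$, and the binary correlation coefficient $r$ for a table with both marginals equal to $1/2$.

```python
import math

def association_measures(t):
    """t = [[p11, p10], [p01, p00]], a 2x2 joint probability table."""
    (p11, p10), (p01, p00) = t
    lam = (p11 * p00) / (p10 * p01)                  # odds ratio
    Y = (math.sqrt(lam) - 1) / (math.sqrt(lam) + 1)  # Yule's Y
    p1_, p_1 = p11 + p10, p11 + p01                  # row and column marginals
    D = p11 - p1_ * p_1
    r = D / math.sqrt(p1_ * p_1 * (1 - p1_) * (1 - p_1))
    # Lewontin's D': standardize D by its maximum value given the marginals
    if D > 0:
        Dmax = min(p1_ * (1 - p_1), (1 - p1_) * p_1)
    else:
        Dmax = min(p1_ * p_1, (1 - p1_) * (1 - p_1))
    return Y, D / Dmax, r

# canonical table (both marginals 1/2): all three measures coincide
print(association_measures([[0.408, 0.092], [0.092, 0.408]]))
```

For this table all three measures evaluate to approximately 0.632, illustrating the coincidence of $D'$, $r$, and $Y$ when the marginals are balanced.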

## 4 Information-Theoretic Measures of Association

The quantity $i(x,y)=\log\frac{p(x,y)}{p(x)p(y)}$ is called the *pointwise mutual information* (PMI) in, for example, the statistical natural language processing textbook of Manning and Schütze (1999). In pharmacovigilance, Bate et al. (1998) call $i(x,y)$ the *information component* (IC), as it is one component of the mutual information calculation in a $2\times2$ table, and it is also studied in DuMouchel (1999). And in the data mining literature, Silverstein, Brin, and Motwani (1998) define the *interest* to be the ratio $p(x,y)/(p(x)p(y))$ (i.e., without the $\log$).

Note that while $Y$, $D'$, and $r$ consider the difference $D=p_{11}-p_{1\cdot}p_{\cdot1}=p(x,y)-p(x)p(y)$, $i(x,y)$ considers the log ratio of these terms. Thus, $i(x,y)$ considers the ratio of the observed and expected probabilities for the event $(x,y)$, where the expected model is that of independence.

Both PMI and MI as defined above depend on the marginal probabilities in the table. To see this, use $p(x,y)\le p(x)$ and $p(x,y)\le p(y)$, so $i(x,y)\le\min(-\log p(x),-\log p(y))$, that is, favoring “sparsity” (low probability). The MI is maximal for a diagonal (or antidiagonal) table with marginals of $1/2$, the opposite trend to PMI.

There have been various proposals to normalize the PMI and MI to make them fit in the ranges $[-1,1]$ and $[0,1]$, respectively. For example, Bouma (2009) defined the normalized PMI (NPMI) as $i_n(x,y)=i(x,y)/h(x,y)$ for $p(x,y)>0$, where $h(x,y)=-\log p(x,y)$. NPMI ranges from $+1$ when events $x$ and $y$ only occur together, through 0 when they are independent, to $-1$ when $x$ and $y$ occur separately but not together. Similarly, there are a number of proposals for normalizing the mutual information; Bouma (2009) suggests $I_n(X;Y)=I(X;Y)/H(X,Y)$, where $H(X,Y)$ is the joint entropy of $X$ and $Y$. $I_n(X;Y)$ (termed the normalized MI or NMI) takes on a value of $+1$ if $X$ and $Y$ are perfectly associated and 0 if they are independent. Alternative normalizations of the MI by $H(X)$ or $H(Y)$ have also been proposed; Press, Teukolsky, Vetterling, and Flannery (2007, sec. 14.7.4) term these the uncertainty coefficients. NMI is not strictly a measure of association as defined above, as it does not take on negative values, but following the construction in Hasenclever and Scholz (2016), one can, for example, define the *signed* NMI as $\mathrm{sign}(D)\,I_n(X;Y)$.
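These definitions can be sketched in a few lines of code. The function below (the name and the example table are ours, chosen so that the table is symmetric with marginals of $1/2$) computes the PMI and NPMI for one cell, together with the MI and Bouma's NMI for the whole table.

```python
import numpy as np

def pmi_npmi_nmi(t, base=2.0):
    """t: 2x2 joint probability table with all cells > 0.
    Returns PMI and NPMI for cell (0, 0), plus MI and Bouma's NMI."""
    t = np.asarray(t, dtype=float)
    px = t.sum(axis=1, keepdims=True)   # row marginals, shape (2, 1)
    py = t.sum(axis=0, keepdims=True)   # column marginals, shape (1, 2)
    log = lambda v: np.log(v) / np.log(base)
    pmi = log(t[0, 0] / (px[0, 0] * py[0, 0]))
    npmi = pmi / (-log(t[0, 0]))        # normalize by h(x, y) = -log p(x, y)
    mi = float(np.sum(t * log(t / (px * py))))
    joint_entropy = float(-np.sum(t * log(t)))
    return pmi, npmi, mi, mi / joint_entropy

# symmetric table with both marginals 1/2
print(pmi_npmi_nmi([[0.408, 0.092], [0.092, 0.408]]))
```

NPMI lies in $[-1, 1]$ and NMI in $[0, 1]$, as required of the normalized measures.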

## 5 Detecting Associations with Pointwise Mutual Information

As we have seen, the raw PMI score is not invariant to the distribution of the marginals. This can be seen in Table 1, which concerns the association between vaccination and death from smallpox; the original proportions in panel a are based on the Sheffield data in Table I of Yule (1912). In panel b, the marginals of the table with regard to vaccination have been adjusted to 50/50 (as may have happened if these data had been collected in a randomized, controlled trial), and in panel c, we have the canonical table where both marginals are 50/50.^{3} Notice that the PMI is highest for the original (unbalanced) table and decreases as the marginals are balanced. Conversely, the MI is lowest in the original (unbalanced) table and increases as the marginals are balanced. Of course, Yule's $Y$ is constant throughout, by construction.

Table 1. Panel (a): original table, PMI $=$ 2.300, MI $=$ 0.108; panel (b): vaccination rate 50%, PMI $=$ 0.866, MI $=$ 0.205; panel (c): canonical table, PMI $=$ 0.705, MI $=$ 0.310.

| | (a) Recover | (a) Die | (a) Marginals | (b) Recover | (b) Die | (b) Marginals | (c) Recover | (c) Die | (c) Marginals |
|---|---|---|---|---|---|---|---|---|---|
| Vaccinated | 0.840 | 0.043 | 0.883 | 0.476 | 0.024 | 0.500 | 0.408 | 0.092 | 0.500 |
| Unvaccinated | 0.059 | 0.058 | 0.117 | 0.252 | 0.248 | 0.500 | 0.092 | 0.408 | 0.500 |
| Marginals | 0.899 | 0.101 | | 0.728 | 0.272 | | 0.500 | 0.500 | |


Note: Panel a is the original table based on the data in Yule (1912), panel b adjusts the marginals for vaccinated/unvaccinated to be 50/50, and panel c is the canonical table where the marginals are both 50/50. In all three tables, Yule's $Y=0.630$.
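The quantities in Table 1 can be reproduced directly from the panel (a) proportions. A minimal sketch follows; the quoted PMI matches the rare joint event (unvaccinated, die), and small discrepancies from the quoted values arise because the displayed proportions are rounded to three decimals.

```python
import math

# proportions from panel (a) of Table 1 (Sheffield data, Yule, 1912)
p = [[0.840, 0.043],   # vaccinated:   recover, die
     [0.059, 0.058]]   # unvaccinated: recover, die

row = [p[0][0] + p[0][1], p[1][0] + p[1][1]]   # vaccination marginals
col = [p[0][0] + p[1][0], p[0][1] + p[1][1]]   # outcome marginals

# PMI of the rare joint event (unvaccinated, die)
pmi = math.log2(p[1][1] / (row[1] * col[1]))

# mutual information of the whole table
mi = sum(p[i][j] * math.log2(p[i][j] / (row[i] * col[j]))
         for i in (0, 1) for j in (0, 1))

lam = (p[0][0] * p[1][1]) / (p[0][1] * p[1][0])    # odds ratio
Y = (math.sqrt(lam) - 1) / (math.sqrt(lam) + 1)    # Yule's Y

print(round(pmi, 3), round(mi, 3), round(Y, 3))
```

Applying the same calculation to panels (b) and (c) reproduces the decreasing PMI and increasing MI noted above.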

As another example, consider fixing $\lambda$ but adjusting the marginal probabilities of events $x$ and $y$. For example, for $\lambda=16$, PMI takes on the values 0.678, 1.642, 2.293, 3.642, and 3.958 (using logs to base 2) as $p(x)=p(y)$ takes the values 0.5, 0.2, 0.1, 0.01, and 0.001. This is particularly problematic as low counts will give rise to uncertainty in the estimation of the required probabilities (especially of the joint event). In the context of word associations, Manning and Schütze (1999, sec. 5.4) argue that PMI “does not capture the intuitive notion of an interesting collocation very well” and mention work that multiplies it by $p(x,y)$ as one strategy to compensate for the bias in favor of rare events.
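These PMI values can be reproduced by solving for the joint probability in a symmetric table with marginals $p(x)=p(y)=m$ and odds ratio $\lambda$: requiring $\lambda(m-p_{xy})^2=p_{xy}(1-2m+p_{xy})$ gives a quadratic in $p_{xy}$. A sketch (the function name is ours; it assumes $\lambda\ne1$ and takes the smaller, feasible root):

```python
import math

def symmetric_pmi(m, lam):
    """PMI (base 2) of the joint event for a symmetric 2x2 table with
    p(x) = p(y) = m and odds ratio lam (lam != 1)."""
    # (lam - 1) p^2 - (2 m lam - 2 m + 1) p + lam m^2 = 0
    a = lam - 1.0
    b = -(2.0 * m * lam - 2.0 * m + 1.0)
    c = lam * m * m
    p_xy = (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)  # smaller root
    return math.log2(p_xy / (m * m))

for m in [0.5, 0.2, 0.1, 0.01, 0.001]:
    print(m, round(symmetric_pmi(m, 16.0), 3))
```

This recovers the sequence 0.678, 1.642, 2.293, 3.642, 3.958 quoted above, making the growth of PMI with sparsity at fixed $\lambda$ concrete.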

Barlow (1985) suggested that sparsity is important for the detection of suspicious coincidences, that is, that “the events themselves must be rare ones.” It is true that a low $p(y)$ gives more “headroom” for the ratio $p(y|x)/p(y)$ to be large. The PMI score is used extensively in pharmacovigilance, where the aim is to detect associations between drugs taken and adverse drug reactions (ADRs). In this context, the ratio $p(x,y)/p(x)p(y)=p(y|x)/p(y)$ is termed the *relative reporting ratio* (RRR) and compares the relative probability of an adverse drug reaction $y$ given treatment with drug $x$, compared to the base rate $p(y)$. Another commonly used measure is the *proportional reporting ratio* (PRR), defined as $p(y|x)/p(y|\neg x)$. A US Food and Drug Administration (FDA) white paper (Duggirala et al., 2018) describes the use of both RRR and PRR for detecting ADRs in routine surveillance activities.

Above, we have described the maximum likelihood estimation for the probabilities in the $2\xd72$ table, based on counts. However, there are well-known issues with the MLE when (some of) the counts are small. This naturally suggests a Bayesian approach, and there is a considerable literature on the Bayesian analysis of contingency tables, as reviewed, for example, in Agresti (2013). There are different sampling models depending on how the data are assumed to be generated, as described in Agresti (2013, sec. 2.1.5). If all four counts are unrestricted, a natural assumption is that each $nij$ is drawn from a Poisson distribution with mean $\mu ij$, which can be given a gamma prior. Alternatively, if $n$ is fixed, the sampling model is a multinomial, and the conjugate prior is a Dirichlet distribution. If one set of marginals is fixed, then the data are drawn from two binomial distributions, each of which can be given a beta prior. If both marginal totals are fixed, this corresponds to Fisher's famous “lady tasting tea” experiment, and the sampling distribution of any cell in the table follows a hypergeometric distribution. Section 3.6 of Agresti (2013) covers Bayesian inference for two-way contingency tables, and Agresti and Min (2005) discuss Bayesian confidence intervals for association parameters, such as the odds ratio.
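For the multinomial sampling model, the Dirichlet prior is conjugate, so posterior uncertainty in PMI can be propagated by Monte Carlo. The following is a minimal sketch (the counts and the prior strength $\alpha=0.5$ are illustrative assumptions, not values from the text): sample joint tables from the Dirichlet posterior and compute the PMI of cell $(1,1)$ for each draw.

```python
import numpy as np

rng = np.random.default_rng(0)
counts = np.array([[2, 1], [1, 6]])   # hypothetical low-count 2x2 table
alpha = 0.5                           # symmetric Dirichlet prior (an assumption)

# draw joint probability tables from the Dirichlet posterior
samples = rng.dirichlet(counts.ravel() + alpha, size=20000).reshape(-1, 2, 2)
p11 = samples[:, 0, 0]
px = samples[:, 0, :].sum(axis=1)     # p(x) for each posterior draw
py = samples[:, :, 0].sum(axis=1)     # p(y) for each posterior draw
pmi_draws = np.log2(p11 / (px * py))

print(pmi_draws.mean(), np.quantile(pmi_draws, [0.025, 0.975]))
```

The spread of the posterior draws makes explicit how uncertain the PMI estimate is when the counts are small, in contrast to the single MLE point estimate.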

DuMouchel (1999) applied an empirical Bayes approach to consider sampling variability for PMI (a.k.a. RRR) in the context of adverse drug reactions. He assumed that each $n_{11}$ is a draw from a Poisson distribution with unknown mean $\mu_{11}$ and that the object of interest is $\rho_{11}=\mu_{11}/E_{11}$, where $E_{11}$ is the expected count (assumed known) under the assumption that the variables are independent. Using a mixture of gamma distributions prior for $\rho_{11}$, DuMouchel obtained the posterior mean $E[\log(\rho_{11})\mid n_{11}]$ rather than just considering the sample estimate $n_{11}/E_{11}$. The mixture prior was used to express the belief that when testing many associations, most will have a PMI of near zero, but there will be some with significantly larger values. This method is known as the multi-item gamma Poisson shrinker (MGPS). The value of this approach is that Bayesian shrinkage corrects for the high variability in the RRR sample estimate $n_{11}/E_{11}$ that results from small counts.
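A single-gamma version of this shrinkage can be sketched as follows; note that the actual MGPS uses a two-component gamma mixture, and the prior parameters and counts here are illustrative assumptions. With $\rho\sim\mathrm{Gamma}(a,b)$ and $n_{11}\sim\mathrm{Poisson}(\rho E_{11})$, the posterior is $\mathrm{Gamma}(a+n_{11},\,b+E_{11})$, so $E[\log\rho\mid n_{11}]=\psi(a+n_{11})-\log(b+E_{11})$, where $\psi$ is the digamma function.

```python
import math

def digamma(x):
    """Digamma via recurrence plus asymptotic expansion (x > 0)."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f / 252))

# hypothetical values: prior Gamma(a, b), observed vs expected count
a, b = 1.0, 1.0
n11, E11 = 3, 0.5

# posterior mean of log(rho) under the conjugate gamma posterior (natural logs)
post_mean_log_rho = digamma(a + n11) - math.log(b + E11)
raw_log_rho = math.log(n11 / E11)   # plug-in MLE estimate of log(rho)

print(post_mean_log_rho, raw_log_rho)
```

Here the posterior mean is pulled toward zero relative to the raw estimate, illustrating how shrinkage tempers PMI estimates based on small counts.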

## 6 Summary

Motivated by Barlow's hypothesis about suspicious coincidences, we have reviewed the properties of $2\times2$ contingency tables for association analysis, with a focus on the odds ratio $\lambda$ and Yule's $Y$. We have considered the mutual information and pointwise mutual information as measures of association, along with normalized versions thereof. We have shown that, considered as functions of $\lambda$ in the canonical table, MI and PMI behave similarly to $Y$ for $\lambda\ge1$, increasing monotonically with $\lambda$ (and can be made similar for $0<\lambda<1$).

As well as $Y$, the PMI measure $i(x,y)=\log\frac{p(x,y)}{p(x)p(y)}$ can also be used to identify suspicious coincidences, and it is used in practice—for example, in pharmacovigilance. We have discussed the pros and cons of using it in this way, bearing in mind the sensitivity of the PMI to the marginals, with increased scores for sparser events. When some of the counts in the table are low, Bayesian approaches can be useful for estimating PMI from raw counts.

## Notes

^{3} Yule (1912) comments that on the canonical table, “These are, of course, not the actual proportions, but the proportions that would have resulted if an omnipotent demon of unpleasant character (no relation of Maxwell's friend) could have visited Sheffield …, and raised the fatality rate and the proportion of unvaccinated … to 50 per cent without otherwise altering the facts.”

## Acknowledgments

I thank Peter Dayan and Iain Murray for helpful comments on an early draft of this note and the anonymous reviewers whose comments helped to improve the note.

## References

*Biometrics*

*Models of the visual cortex*

*Matters of Intelligence: Conceptual structures in cognitive neuroscience*

*European Journal of Clinical Pharmacology*

*Proceedings of the Biennial GSCL Conference 2009*

*Comput. Linguist.*

*Data mining at FDA*

*American Statistician*

*Advances in neural information processing systems*

*Journal of the Royal Statistical Society, Series A (General)*

*Open Statistics and Probability Journal*

*Genetics*

*Foundations of statistical natural language processing*

*Numerical recipes: The art of scientific computing*

*Data Mining and Knowledge Discovery*

*Information Systems*

*Phil. Trans. Roy. Soc., A*

*Journal of the Royal Statistical Society*