## Abstract

Humans and other animals base their decisions on noisy sensory input. Much work has been devoted to understanding the computations that underlie such decisions. The problem has been studied in a variety of tasks and with stimuli of differing complexity. However, how the statistical structure of stimuli, along with perceptual measurement noise, affects perceptual judgments is not well understood. Here we examine how correlations between the components of a stimulus—stimulus correlations—together with correlations in sensory noise, affect decision making. As an example, we consider the task of detecting the presence of a single or multiple targets among distractors. We assume that both the distractors and the observer’s measurements of the stimuli are correlated. The computations of an optimal observer in this task are nontrivial yet can be analyzed and understood intuitively. We find that when distractors are strongly correlated, measurement correlations can have a strong impact on performance. When distractor correlations are weak, measurement correlations have little impact unless the number of stimuli is large. Correlations in neural responses to structured stimuli can therefore have a strong impact on perceptual judgments.

## 1 Introduction

The perceptual system has evolved to extract ecologically meaningful information from sensory input. For example, in many mid- to high-level visual tasks, the brain has to make categorical, global judgments based on multiple stimuli where the identity of any individual stimulus is not of direct relevance. In a visual search task, the goal might be to detect whether a predefined target object is present in a scene that contains multiple objects. Complicating such tasks is the fact that noise corrupts sensory measurements, especially when observation time is short or many objects are present.

Much work has been devoted to modeling the decision processes by which the brain converts noisy sensory measurements of a set of stimuli into a judgment about a global world state such as the presence or absence of a target. These models often focus on various decision rules that can be applied to the measurements. By contrast, the measurements themselves are usually modeled in a rather stereotypical fashion, namely, as independent and normally distributed (e.g., Peterson, Birdsall, & Fox, 1954; Nolte & Jaarsma, 1967; Pelli, 1985; Graham, Kramer, & Yager, 1987; Palmer, Ames, & Lindsey, 1993; Baldassi & Burr, 2000; Baldassi & Verghese, 2002; van den Berg, Vogel, Josić, & Ma, 2012; Ma, Navalpakkam, Beck, van den Berg, & Pouget, 2011; Mazyar, van den Berg, & Ma, 2012). Both the assumption of independence and the assumption of gaussianity can be questioned. Specifically, neural correlations can extend over distances as large as 4 mm in monkey cortex (Ecker et al., 2010; Cohen & Kohn, 2011). This suggests that sensory measurements can be strongly correlated (Rosenbaum, Trousdale, & Josić, 2010; Chen, Geisler, & Seidemann, 2006). Here we focus on how violations of the assumption of independent measurements affect performance in categorical, global perceptual judgments.

To make such perceptual judgments, an observer needs to take into account the statistical structure of the stimuli and the structure of measurements. Consider a search task where a subject is required to detect a target among distractors. The effects of measurement correlations and correlations between the distractors will be intertwined. If the distractors are identical on a given trial, then strong correlations between the measurements will help preserve their perceived similarity. Namely, an observer can group the distractor measurements and identify the target as corresponding to the outlying measurement. By contrast, when distractors are unstructured (independently drawn across locations), strong measurement correlations may have no effect on performance. Thus, measurement and distractor correlations should not be considered in isolation.

Here we examine how measurement and distractor correlations affect the strategy and the performance of an ideal observer in a target detection task. We assume that on half the trials, one or more target stimuli are presented along with a number of distractors, whereas on the other half of trials, only distractors are presented. The task is to infer whether targets are present or not. Importantly, we assume that the distractor stimuli are not drawn independently; for instance, in the extreme case, the distractors could be identical. Our ideal observer infers target presence based on measurements of the stimuli. We assume that these measurements are corrupted by correlated noise. In an extreme case, this noise is perfectly correlated, and all measurements are perturbed by the same random value.

We provide an analytical study of the optimal decision rule to show that the interplay of measurement and distractor correlations can be intricate. In general, if the distractors are strongly correlated, then measurement correlations can strongly affect the performance of an ideal observer. When distractors are weakly correlated, measurement correlations have a smaller impact.

We expect that these insights hold more generally. In natural search tasks, such as finding edible fruit in a bush or a car in a parking lot, distractors are heterogeneous but highly structured. Line segments belong to contours and form the boundaries of shapes. Hence their orientations can be correlated in a way that depends on their positions (Geisler & Perry, 2009). Thus the distributions corresponding to natural stimuli are concentrated along low-dimensional structures in stimulus space (Geisler, 2008). It is likely that humans and animals take this structure into account in visual search. Indeed, in contour integration, observers seem to take into account the natural co-occurrence statistics of line elements (Geisler & Perry, 2009), and in change detection, people incorporate knowledge about the large-scale statistical structure of a scene (Brady & Alvarez, 2011; Brady & Tenenbaum, 2013). In such situations, correlations in measurement noise could aid the inference of relevant parameters. We thus expect our results to be relevant to modeling perceptual decision making in natural scenes.

## 2 Model Description

To examine how decisions of an ideal observer are determined by the statistical structure of measurements and stimuli, we consider the following task. An observer is asked whether target stimuli are present among a set of distractor stimuli. Each stimulus, $i$, is characterized by a scalar, $s_i$. The set of $N$ stimuli presented on a single trial is characterized by the vector $\mathbf{s} = (s_1, \ldots, s_N)$. For instance, stimuli could be pure tones characterized by their frequency, or ellipses characterized by the orientation of their major axis. A target is a stimulus with a particular characteristic, $s_T$. A target could be a vertical grating or a pure tone at 440 Hz.

For an example of this task with stimuli that are gratings characterized by their orientations, see Figure 1A. For simplicity, we assume that if stimulus $i$ is a target, then $s_i = s_T = 0$; that is, stimulus characteristics are measured relative to that of a target. In the Figure 1A example, we have a single vertical target, and we measure orientations relative to the vertical. Stimuli that are not targets are distractors. We determine the characteristics (orientations) of the distractors by a sample from a gaussian distribution. Marginally, each orientation is vertical on average, and the orientations can be pairwise correlated. Thus, the distractors are nonvertical with probability 1. If there are no targets, we refer to all stimuli as distractors, and their orientations are again determined by a sample from a multivariate normal distribution. We will consider situations with single and multiple targets. The number of targets and stimuli is assumed fixed during an experiment.

We denote target presence by $T = 1$ and absence by $T = 0$. We assume that targets are present with probability 0.5. When $T = 0$, there are no targets, and all stimuli are distractors. Their characteristics are drawn from a multivariate normal distribution with mean $\mathbf{0}_N$ and covariance matrix $\Sigma_s$. The subscript denotes vector length, so $\mathbf{0}_N$ has $N$ components. We can therefore write $p(\mathbf{s} \mid T = 0) = \mathcal{N}(\mathbf{s}; \mathbf{0}_N, \Sigma_s)$, where $\mathcal{N}(\mathbf{s}; \boldsymbol{\mu}, \Sigma)$ denotes the density of the normal distribution with mean $\boldsymbol{\mu}$ and covariance $\Sigma$. For simplicity, we assume $\Sigma_s$ has constant diagonal and off-diagonal terms, so that $(\Sigma_s)_{ij} = \sigma_s^2 [\rho_s + (1 - \rho_s)\,\delta_{ij}]$. The variance, $\sigma_s^2$, determines the variability of individual stimulus characteristics. When $\sigma_s^2$ is smaller, the distractors are closer to each other, and the task becomes more difficult. The correlation coefficient, $\rho_s$, determines the relation between the distractors.
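The stimulus distribution just described is easy to sample numerically. The following Python sketch (an illustration we add for concreteness; the function and variable names are ours) constructs the equicorrelated covariance matrix and draws one target-absent trial:

```python
import numpy as np

def equicorrelated_cov(n, var, rho):
    """Covariance with constant diagonal var and constant off-diagonal var * rho."""
    return var * (rho * np.ones((n, n)) + (1.0 - rho) * np.eye(n))

rng = np.random.default_rng(0)
N, sigma_s2, rho_s = 4, 1.0, 0.9
Sigma_s = equicorrelated_cov(N, sigma_s2, rho_s)

# One target-absent trial: all N stimuli are correlated distractors.
s = rng.multivariate_normal(np.zeros(N), Sigma_s)
```

For $0 \le \rho_s \le 1$ this matrix is positive semidefinite, consistent with the restriction to nonnegative correlations discussed later.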

If $T = 1$, then targets are present. The number of targets, $n$, and the number of stimuli, $N$, are fixed for the duration of the experiment. In this case, we assign, with uniform probability, the target characteristic to $n$ out of the total of $N$ stimuli. This subset of $n$ targets is denoted by $L$. We denote by $\mathcal{L}$ the collection of all possible choices of the sets $L$ of targets.

The remaining $N - n$ distractors are drawn from a multivariate normal distribution with mean $\mathbf{0}_{N-n}$ and covariance matrix, $\Sigma_{s,\bar{L}}$, of dimension $(N - n) \times (N - n)$. Let $\mathbf{s}_L$ denote the target stimuli and $\mathbf{s}_{\bar{L}}$ the distractors. We can therefore write $p(\mathbf{s} \mid T = 1, L) = \delta(\mathbf{s}_L)\, \mathcal{N}(\mathbf{s}_{\bar{L}}; \mathbf{0}_{N-n}, \Sigma_{s,\bar{L}})$, where $\delta$ denotes the Dirac delta function. We assume that all pairs of distractors are equally correlated, so that the off-diagonal entries in $\Sigma_{s,\bar{L}}$ are identical. Moreover, the correlation between distractors is the same whether a target is present or not. We therefore refer to $\rho_s$ as the distractor correlation.

The observer makes a noisy measurement, $x_i$, of each stimulus, $s_i$. This measurement can be thought of as the estimate of stimulus $i$ obtained from the activity of a population of neurons that responded to the stimulus. We denote by $\mathbf{x} = (x_1, \ldots, x_N)$ the vector of $N$ measurements. It is commonly assumed that these measurements are unbiased and corrupted by additive, independent, normally distributed noise, so that $p(\mathbf{x} \mid \mathbf{s}) = \mathcal{N}(\mathbf{x}; \mathbf{s}, \sigma_x^2 I_N)$. Here we consider the more general situation where the measurements are unbiased but the noise could be correlated, so that $p(\mathbf{x} \mid \mathbf{s}) = \mathcal{N}(\mathbf{x}; \mathbf{s}, \Sigma_x)$, with $(\Sigma_x)_{ij} = \sigma_x^2 [\rho_x + (1 - \rho_x)\,\delta_{ij}]$. We assume that noise affects measurements of all stimuli equally, whether targets or distractors. Marginally, each measurement is additively perturbed by normally distributed noise with variance $\sigma_x^2$.
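The full generative model, stimuli plus correlated measurement noise, can be sketched as follows (a minimal Python illustration under the assumptions above; the function names and signatures are ours):

```python
import numpy as np

def equicorr(n, var, rho):
    """Equicorrelated covariance: diagonal var, off-diagonal var * rho."""
    return var * (rho * np.ones((n, n)) + (1.0 - rho) * np.eye(n))

def sample_trial(N, n_targets, sigma_s2, rho_s, sigma_x2, rho_x, rng):
    """Draw (T, s, x): hypothesis, stimuli, and noisy measurements for one trial."""
    T = int(rng.integers(2))                  # targets present with probability 0.5
    s = np.zeros(N)                           # targets keep s_i = s_T = 0
    if T == 0:
        s = rng.multivariate_normal(np.zeros(N), equicorr(N, sigma_s2, rho_s))
    else:
        L = rng.choice(N, size=n_targets, replace=False)
        mask = np.ones(N, dtype=bool)
        mask[L] = False                       # distractor locations
        m = int(mask.sum())
        s[mask] = rng.multivariate_normal(np.zeros(m), equicorr(m, sigma_s2, rho_s))
    noise = rng.multivariate_normal(np.zeros(N), equicorr(N, sigma_x2, rho_x))
    return T, s, s + noise
```

One call returns a complete trial; repeated calls generate the two conditional measurement distributions analyzed below.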

## 3 Results

Our goal is to describe how correlations between stimuli along with correlations between their measurements affect the decisions of an optimal observer in a target detection task. We examine how performance changes as both correlations between distractors and between measurements are varied.

The stimuli, $\mathbf{s}$, follow different distributions depending on whether $T = 0$ or $T = 1$. Measurement noise increases the overlap between the corresponding measurement distributions, $p(\mathbf{x} \mid T = 0)$ and $p(\mathbf{x} \mid T = 1)$. The higher the overlap between these two distributions, the more difficult it is to tell whether a target is present. However, correlations in measurement noise can reduce this overlap (see Figure 2B) even when noise intensity is unchanged. Therefore, the estimate of a parameter from a neural response depends not only on the level, $\sigma_x^2$, but also on the structure of measurement noise (Abbott & Dayan, 1999; Sompolinsky, Yoon, Kang, & Shamir, 2001; Averbeck, Latham, & Pouget, 2006; Josić, Shea-Brown, Doiron, & de la Rocha, 2009).

The decision variable thus depends on the sum of the normalized probabilities that a measurement $\mathbf{x}$ is made, given that the target set is $L$.

Therefore, the decision variable can also be interpreted as a sum of likelihoods that $L$ is a target set, given a measurement $\mathbf{x}$. Thus, the decision is directly related to the posterior distribution over the target sets $L$. The summands in equation 3.2 correspond to the evidence that $L$ is a set of targets.
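This decision rule can be written out directly from the generative model. The sketch below (our illustration, using `scipy.stats.multivariate_normal`; the helper names and equicorrelated covariances follow the assumptions of section 2) marginalizes over all target sets $L$ and compares the two marginal likelihoods:

```python
from itertools import combinations

import numpy as np
from scipy.stats import multivariate_normal

def equicorr(n, var, rho):
    return var * (rho * np.ones((n, n)) + (1.0 - rho) * np.eye(n))

def ideal_decision(x, N, n, sigma_s2, rho_s, sigma_x2, rho_x):
    """Return 1 if the marginal likelihood favors target presence, else 0."""
    Sig_x = equicorr(N, sigma_x2, rho_x)
    # Target absent: x = s + noise, so the covariances add.
    p0 = multivariate_normal(np.zeros(N), equicorr(N, sigma_s2, rho_s) + Sig_x).pdf(x)
    # Target present: average the likelihood over all possible target sets L.
    sets = list(combinations(range(N), n))
    p1 = 0.0
    for L in sets:
        mask = np.ones(N, dtype=bool)
        mask[list(L)] = False                 # targets are fixed at s_T = 0
        C = Sig_x.copy()
        C[np.ix_(mask, mask)] += equicorr(int(mask.sum()), sigma_s2, rho_s)
        p1 += multivariate_normal(np.zeros(N), C).pdf(x)
    return int(p1 / len(sets) > p0)
```

Measurements clustered near the target value favor target presence; measurements far from it favor absence.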

Normalizing the two multivariate normal distributions requires division by the square roots of the determinants of the corresponding covariance matrices. Their ratio therefore appears as a prefactor in the ratio of the likelihoods.

This decision variable depends on the model parameters and the measurement, $\mathbf{x}$. The total number of stimuli, $N$, the number of targets, $n$, the variability, $\sigma_s^2$, and the correlation, $\rho_s$, between the distractors determine the structure of the stimulus, while the variability, $\sigma_x^2$, and correlation, $\rho_x$, describe the distribution of sensory measurements. An ideal observer knows all these parameters.

If a measurement, $x_1$, differs sufficiently from 0, then the optimal estimate of target presence is $\hat{T} = 0$. A measurement close to 0 gives $\hat{T} = 1$.

The variables $v$ and $v_L$ represent scaled inverse variances corresponding to distractor and target stimuli. The remaining parameters are given in equations A.5 and A.6 and are defined in terms of $\sigma_s^2$, $\rho_s$, $\sigma_x^2$, and $\rho_x$. Equation 3.4 has a form that can be interpreted intuitively:

- Term I contains a sum of squares of individual measurements over the putative set of targets, $L$. The smaller this sum is, the more likely it is that $L$ contains targets.
- Term II contains the sample covariance of the putative target measurements about the known target value, $s_T = 0$. In the absence of measurement correlations, $\rho_x = 0$, covariability about the target value has vanishing expectation for target measurements. Therefore, the larger this sum, the less likely it is that the measurements come from a set of target stimuli. In the presence of measurement correlations, $\rho_x > 0$, covariability between target measurements is expected. Hence, the prefactor in term II decreases with $\rho_x$.
- Term III is similar to term II, with the sum representing the sample covariance about the target value between putative target and nontarget stimuli. The larger this covariance is, the less likely it is that $L$ or its complement contains targets.
- Term IV contains the sample covariance about the mean of measurements outside the putative target set. If there are no targets, then all terms in the sum are expected to be large, regardless of the choice of $L$. However, if there are targets, then whenever the complement of $L$ contains targets, some of the terms in the sum have expectation 0. Hence, the term again makes a smaller contribution if targets are present.

While this provides an intuitive interpretation of the sums in equation 3.4, the expression is complex, and it is difficult to understand precisely how an ideal observer uses knowledge of the generative model and the stimulus measurements to make a decision. We therefore examine a number of cases where equation 3.4 is tractable and all the terms can be interpreted precisely. We also numerically examine performance in a wider range of examples.

### 3.1 Single Target, $n = 1$

We start with the case when a single target is present at one of the *N* locations. This case was considered previously in the absence of correlations between the sensory measurements (Bhardwaj et al., 2015).

We observe in Figures 3A and 4A that the performance of an ideal observer is nearly independent of $\rho_x$ when external structure is weak ($\rho_s \approx 0$). Performance depends strongly on $\rho_x$ when distractors are strongly correlated, $\rho_s \approx 1$. An ideal observer performs perfectly when $\rho_s = \rho_x = 1$ (see Figure 3A).

Increased performance with increasing distractor correlations, $\rho_s$, accords with the intuition that similar distractors make it easier to detect a target. However, correlations in measurement noise can play an equally important role and significantly improve performance when distractors are identical (see Figure 3B).

Perfect performance when $\rho_s = \rho_x = 1$ can be understood intuitively. In this case, the measurements, $x_i$, of the stimuli are obtained by adding the same realization of a random variable, that is, identical measurement noise, to each stimulus value, $s_i$. In target-absent trials, all measurements are identical. If the target is present, the measurements contain a single outlier. An ideal observer can thus distinguish the two cases perfectly.
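This intuition is easy to verify numerically. In the sketch below (our illustration; the function names are ours), $\rho_s = \rho_x = 1$ is implemented by drawing a single shared distractor value and a single shared noise value per trial; checking whether all measurements coincide then decodes target presence exactly:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_trial(T, N=4, sigma_s=1.0):
    """One trial with perfectly correlated distractors and noise."""
    d = sigma_s * rng.standard_normal()      # one shared distractor value (rho_s = 1)
    s = np.full(N, d)
    if T == 1:
        s[rng.integers(N)] = 0.0             # the target, s_T = 0
    return s + rng.standard_normal()         # one shared noise value (rho_x = 1)

def decide(x):
    """Report target presence unless all measurements coincide."""
    return int(not np.all(x == x[0]))
```

Over repeated trials this rule makes no errors, since a tie between the target and the shared distractor value occurs with probability 0.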

We examine in more detail the case of weak measurement noise and then the case of comparable measurement noise and distractor variability.

#### 3.1.1 Weak Measurement Noise, $\sigma_x \ll \sigma_s$, with Highly Correlated Stimuli, $\rho_s \approx 1$

When measurement correlations are strong, $\rho_x \approx 1$, nearly the same perturbation is added to the measurement of each stimulus, $i$. Hence, to make a decision, the ideal observer subtracts the mean of the measurements of putative distractors from that of the putative target. In section B.1, we show that on target-absent trials the decision variable increasingly favors target absence as $\rho_x \to 1$, while on target-present trials it increasingly favors target presence. Hence, performance improves as measurement correlations increase. This can also be seen in Figure 2B, as the overlap between the distributions $p(\mathbf{x} \mid T = 0)$ and $p(\mathbf{x} \mid T = 1)$ decreases with an increase in $\rho_x$. In the degenerate case when $\rho_s = 1$, the distractors' distribution on target-absent trials is concentrated on the diagonal. Therefore, an ideal observer can infer that a target is present whenever the measurements do not all coincide; that is, the decision boundary also collapses to the diagonal.

In the absence of measurement correlations, the ideal observer compares each measurement, $x_i$, to the sample mean of the remaining measurements. As shown in section B.1, at intermediate values of measurement correlations, $0 < \rho_x < 1$, the ideal observer uses a mixture of these two strategies.

#### 3.1.2 Weak Measurement Noise, Nonidentical Distractors, $\rho_s < 1$

Measurement correlations have little effect on performance when distractor correlations are weaker (see Figure 3A). Consider again $\rho_x \approx 1$, so that measurements are corrupted by adding a random but nearly identical perturbation to the stimuli. An ideal observer uses the knowledge that all measurements are obtained by adding an approximately equal value to the stimulus. However, when distractor correlations are weak, the target measurement is no longer an outlier. Correlations in measurement noise provide little help in this situation.

These observations are reflected in the structure of the decision boundaries and the distributions of the measurements (see Figure 3B). In the target-absent (left column) and target-present trials (right column), the distribution of measurements is shaped predominantly by variability in the stimulus. Measurement correlations have little effect on this shape, and the decision boundary therefore changes little with an increase in $\rho_x$. In contrast, when $\rho_s \approx 1$, measurement correlations have a significant impact on the overlap between the distributions $p(\mathbf{x} \mid T = 0)$ and $p(\mathbf{x} \mid T = 1)$, as shown in Figure 2B.

When the number of distractors becomes larger, equation 3.8 is no longer valid. The observer compares measurements of all stimuli to make a decision. We return to this point below.

#### 3.1.3 Strong Measurement Noise, $\sigma_x \approx \sigma_s$

Increasing measurement noise trivially degrades performance. However, in the limit of perfect stimulus and measurement correlations, an ideal observer still performs perfectly for the reasons described earlier.

Measurement correlations affect performance differently than in the case of weak measurement noise (see Figures 4A and 4C). Even with uncorrelated stimuli, $\rho_s = 0$, performance increases slightly (approximately 6%) with $\rho_x$. Surprisingly, for intermediate values of distractor correlations, measurement correlations have a negative impact on performance. If measurement correlations are fixed at a high value, then the worst performance is observed at an intermediate value of $\rho_s$. The reason for this unexpected behavior is unclear, as equation 3.4 is difficult to analyze in this case.

Generally, when measurement noise is strong, measurement correlations will change the shape of the measurement distributions $p(\mathbf{x} \mid T = 0)$ and $p(\mathbf{x} \mid T = 1)$ and hence have an impact on decisions and performance. Note that when measurement correlations increase, the region corresponding to the target-absent response is elongated along the diagonal to capture more of the mass of the distribution $p(\mathbf{x} \mid T = 0)$ (see Figure 4C). However, when measurement noise is high, the interactions between measurement and distractor correlations are intricate.

### 3.2 Multiple Targets, $n > 1$

When multiple targets are present, they are all identical and hence perfectly correlated. Thus, regardless of the value of $\rho_s$, on half the trials the stimuli will be strongly structured and their density concentrated on a low-dimensional subspace. As a consequence, measurement correlations always have an impact on performance.

Regardless of distractor correlations, an ideal observer performs perfectly when $\rho_x = 1$ (see Figure 5A). Even when $\rho_x < 1$, performance increases with $\rho_x$ (see Figure 5A). When $\rho_x = 1$, all target measurements are identical. Hence, an ideal observer performs perfectly by checking whether $n$ of the measurements are equal. We analyze only the case $\rho_s < 1$, since the case of perfectly correlated distractors is similar.
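With $n$ identical targets and perfectly correlated noise, the check described above reduces to asking whether some $n$ of the sorted measurements coincide. A minimal sketch (our illustration; the function name and tolerance are ours):

```python
import numpy as np

def n_coincide(x, n, tol=1e-9):
    """Decide target presence (rho_x = 1) by checking whether some n
    measurements coincide: the n identical targets share the same noise."""
    xs = np.sort(np.asarray(x, dtype=float))
    # Range of every window of n consecutive sorted values.
    windows = xs[n - 1:] - xs[: xs.size - n + 1]
    return int(np.any(windows < tol))
```

Sorting makes the check linear in the number of candidate windows rather than requiring an explicit search over all target sets $L$.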

Interestingly, decisions in this case are based only on measurements of stimuli within the set of putative targets, $L$. In the absence of measurement correlations, a decision is based solely on the sample second moment of the $n$ stimulus measurements about the target characteristic, $s_T$ (the underlined term in equation 3.9). A low value of this sample moment indicates that $L$ contains targets.

When measurement correlations are strong, the ideal observer instead computes the sample variance of the measurements within the putative target set, $L$. If the sample variance of the measurements is small, then it is likely that $L$ is a set of targets. When $\rho_x = 1$, an ideal observer performs perfectly (see section B.5 for details).

When $0 < \rho_x < 1$, the underlined term in equation 3.9 shows that the ideal observer takes an intermediate strategy by computing a second moment about a point between the target characteristic, $s_T$, and the sample mean. Interestingly, the larger the number of targets, the larger the weight on the sample mean, since the prefactor increases with $n$ for fixed $\rho_x$.

These observations are reflected in the distributions shown in Figure 5C. The distribution of measurements, $p(\mathbf{x} \mid T = 1)$, moves closer to the diagonal as $\rho_x \to 1$, and its overlap with the distribution of measurements, $p(\mathbf{x} \mid T = 0)$, decreases. In higher dimensions, for $N$ stimuli and $n$ targets, the measurement distribution, $p(\mathbf{x} \mid T = 1)$, is concentrated on a union of $(N - n + 1)$-dimensional subspaces when $\rho_x = 1$. The target measurements lie on a line, while the $N - n$ distractor measurements are distributed along the remaining directions.

To conclude, when there are multiple targets part of the stimulus set is always perfectly correlated. When measurement correlations are high, the observer checks whether the measurements are similar to each other to make a decision. When measurement correlations are low, the observer compares the measurements to the known target value. Measurement correlations can again decrease the overlap between the conditional distributions of measurements and have a significant impact on decisions and performance. For finite *N*, decisions are based on the comparison of measurements within a putative set of stimuli, *L*. We show next that when *N* is large, this is no longer the case.

### 3.3 Larger Number of Targets and Stimuli

If a fixed fraction, $K$, of the stimuli consists of targets, so that $n = KN$, then equation 3.4 simplifies considerably in the limit of large $N$. In this limit, the exponential in equation 3.4 is a weighted combination of three terms (see section B.6), where $s_L^2$, $s_{\bar{L}}^2$, and $s^2$ are the sample variances of measurements from the putative target set, $L$, outside the putative target set, and over all $N$ measurements, respectively.

The different terms in this expression can be interpreted as earlier. Term I is the sample variance of measurements of the putative targets. If this variance is large, then the set $L$ is unlikely to contain targets. Term II is the sample variance over all measurements. If this term is large, then all stimuli are dissimilar, and there is evidence that targets are present. For example, when distractors are strongly correlated, the sample variance of all stimulus measurements is small only in the absence of targets. Finally, term III is the sample variance among putative distractors. If distractors are correlated, this term will be small if $L$ contains targets, and hence the stimuli outside $L$ are distractors. The sign of the three terms agrees with this interpretation: terms I and III are negative, and term II is positive.
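The three sample variances entering this large-$N$ decision variable are straightforward to compute from a vector of measurements and a putative target set (a sketch we add for concreteness; the function name is ours):

```python
import numpy as np

def variance_terms(x, L):
    """Sample variances of measurements inside L (term I), over all
    measurements (term II), and outside L (term III)."""
    x = np.asarray(x, dtype=float)
    mask = np.zeros(x.size, dtype=bool)
    mask[list(L)] = True
    return x[mask].var(), x.var(), x[~mask].var()
```

A decision rule in this regime weights these three quantities, with the weight on the putative-distractor variance growing with the fraction of distractors.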

The main difference between the case of large $N$ and the examples discussed previously is that an observer takes into account putative distractor measurements, that is, measurements outside the putative target set $L$. An exception is the case when measurement correlations are much stronger than distractor correlations, $\rho_x \gg \rho_s$. In this case, the putative targets are more strongly structured, and hence only their measurements are used in a decision. When measurement and distractor correlations are comparable and distractors are strongly correlated, all three terms in equation 3.11 are comparable. In this case, ideal observers base their decision on the similarity, as measured by sample variance, of both putative distractor and target measurements.

Importantly, the decision is made using distractor measurements even when distractors are not perfectly correlated. Figure 6 shows that intermediate distractor correlations have an increasing impact on decisions as the number of distractors grows. Indeed, the higher the fraction of distractors, $1 - K$, the more weight is assigned to their sample variance (term III). This is unlike the case of small $N$, where distractor measurements are used only when they are perfectly correlated.

## 4 Discussion

We have shown that in a simple task, the statistical structure of the stimulus as well as that of noise in perceptual measurements determine the strategy and performance of an ideal observer. Correlations in measurement noise can have a significant impact on performance, particularly when distractor correlations are high. When the distribution of stimuli conditioned on the parameter of interest is concentrated in a small volume of stimulus space, the statistical structure of measurement noise can be particularly important (Mazyar et al., 2012, 2013; Bhardwaj et al., 2015).

The impact of noise correlations in neural responses on the inference of a parameter has been studied in detail (Averbeck et al., 2006; Averbeck, 2009; Latham & Nirenberg, 2005; Perkel, Gerstein, & Moore, 1967; Schneidman, Bialek, & Berry, 2003; Sompolinsky et al., 2001). Frequently the parameter of interest was identified with the stimulus, and both were univariate, although more complex and realistic cases have been examined (Montemurro & Panzeri, 2006; Mathis, Herz, & Stemmler, 2013). The estimation of the orientation of a bar in the receptive field of a population of neurons has been a canonical example.

Reality is far more complex. Natural visual and auditory scenes are high dimensional and highly structured. Moreover, only some of the parameters are typically relevant. Intuitively, if noise perturbs measurements along relevant directions (i.e., along the directions of the parameters of interest), then estimates will be corrupted. Perturbations along irrelevant directions in parameter space have little effect (Moreno-Bote et al., 2014). Measurement correlations can channel noise into irrelevant directions without decreasing the overall noise magnitude and thus improve parameter inference.

This is difficult to study using general theoretical models without putting some constraints on the structure of measurement noise. We therefore considered a relatively simple, analytically tractable example where both measurement and stimulus structure are characterized by a small number of parameters. We have used a similar setup to examine decision making in controlled search experiments (Mazyar et al., 2012, 2013; Bhardwaj et al., 2015).

We assumed that measurement noise and measurement correlations can be varied independently. This is not realistic. For instance, it is known that changes in the mean, variability, and covariability of neural responses can be tightly linked (Cohen & Kohn, 2011; de la Rocha, Doiron, Shea-Brown, Josić, & Reyes, 2007; Rosenbaum & Josić, 2011). It is thus likely that the statistics of measurement noise also change in concert. However, this relationship has not yet been well characterized.

More important, noise from the periphery of the nervous system will limit the performance of any observer. It is therefore not possible that a simple change in measurement correlations can lead to perfect performance (Moreno-Bote et al., 2014). To address this question, it would be necessary to provide a more accurate model of both the noise correlations in a recurrent network encoding information about the stimuli (Beck et al., 2011), as well as the resulting measurement correlations. This is beyond the scope of this study.

We also made strong assumptions about the structure of measurement and distractor correlations. We chose to restrict our analysis to positive correlations. The reason is that the requirement that a covariance matrix is positive definite implies restrictions on the range of allowable negative correlations between measurements and stimuli (Horn & Johnson, 2012). These restrictions depend on the number of stimuli, *N*, and complicate the analysis. To make the model tractable, we also assumed that all off-diagonal elements in the stimulus and measurement covariance matrices are identical. While we did not examine it here, heterogeneity in the correlation structure can strongly affect inference (Shamir & Sompolinsky, 2006; Chelaru & Dragoi, 2008; Berens, Ecker, Gerwinn, Tolias, & Bethge, 2011).

In practice, human and animal subjects are unlikely to behave like optimal observers. Indeed, suboptimal inference may be a dominant cause of trial-to-trial variability (Beck, Ma, Pitkow, Latham, & Pouget, 2012). However, understanding the computations of an optimal observer still provides valuable information. First, understanding optimal inference provides a bound on what is possible; it is not possible to say whether an observer is suboptimal without it. Second, while computations required to achieve optimal performance may be expensive, biological systems may have evolved to efficiently approximate them. While the performance of humans and animals is likely suboptimal in all but the simplest tasks, in most cases we do not have sufficient information to infer what models they use to make an inference. Approximations and perturbations of the optimal model can serve as plausible models in these cases. However, the optimal model may not explain important aspects of visual information processing in human observers, such as pop-out (Wolfe, 1998).

Visual search is often used to investigate how stimulus number and homogeneity have an impact on inference. Detecting a target among distractors generally becomes more difficult as the number of stimuli increases. Here we followed the assumption made in a number of Bayesian and signal detection models that the precision with which individual stimuli are represented does not depend on stimulus number (Ma et al., 2011; Nolte & Jaarsma, 1967; Palmer et al., 1993; Palmer, Verghese, & Pavel, 2000; Rosenholtz, 2001; Verghese, 2001; Vincent, Baddeley, Troscianko, & Gilchrist, 2009). In this case, increased set size leads to an increase in noise and makes it more difficult to detect the target unless stimulus correlations are strong. However, attentional resources may be limited (Townsend, 1974). An increase in the number of stimuli may therefore affect the precision with which each stimulus is encoded, and some stimuli may not be detected at all (Palmer, 1990; Shaw, 1980). Recent evidence suggests that precision decreases with set size (Mazyar et al., 2012, 2013). Such a decrease could alter the details but not the general ideas we present here.

To end, we provide another illustration of the fact that measurement correlations can affect the performance of an ideal observer in different ways depending on the task. Suppose an observer is presented with $N$ oriented stimuli, such as Gabor patches. The stimuli and measurements follow the same gaussian distributions introduced earlier in this study. The observer is asked to perform one of the following two tasks: (1) report whether the mean orientation of the stimuli is to the left or right of vertical or (2) report whether a vertically oriented target is present or absent. In the second case, a subset of the stimuli has vertical orientation on half the trials.

The first task is a discrimination task, and the observer needs to integrate information from different sources. When measurement correlations are high, it is more difficult to average out the noise between the stimulus measurements (Sompolinsky et al., 2001; Zohary, Shadlen, & Newsome, 1994). The estimate of the average orientation is therefore degraded and performance decreases with an increase in measurement correlations (see Figure 7A and appendix C). The second is a detection task that requires extracting information that is buried in a sea of distractors. As discussed above, in this case, measurement correlations can increase performance if there is more than one target or if the distractors are strongly correlated (see Figure 7B).
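The contrast between the two tasks can be made concrete for the averaging strategy in task 1. For equicorrelated noise, the variance of the mean of $N$ measurements is $\sigma_x^2[\rho_x + (1 - \rho_x)/N]$, which does not vanish as $N$ grows when $\rho_x > 0$. The short sketch below (our illustration) encodes this standard identity:

```python
def var_of_mean(sigma2, rho, N):
    """Variance of the average of N equicorrelated measurements:
    (1/N^2) * (N * sigma2 + N * (N - 1) * sigma2 * rho)."""
    return sigma2 * (rho + (1.0 - rho) / N)
```

With $\rho = 0$ the variance decays as $1/N$; with $\rho > 0$ it saturates at $\sigma^2 \rho$, so averaging cannot remove correlated noise, consistent with the degraded discrimination performance described above.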

In this example, the stimuli and the measurements have the same statistical structure on target-absent trials in the two tasks. However, the parameter of interest differs: in the first task, the observer needs to estimate the average stimulus orientation, and in the second, to determine whether a target is present. The distributions of measurements conditioned on these parameters are therefore also different and are differently affected by measurement noise.

The question of how measurement correlations affect decision making and performance does not have a simple answer (Hu, Zylberberg, & Shea-Brown, 2014). Correlations in measurement noise can have a pronounced effect when the stimuli themselves are highly correlated (i.e., when they occupy a small volume in stimulus space). We have illustrated how, in this case, measurement correlations can help separate the distributions of measurements conditioned on a parameter of interest. Similar considerations will be important whenever we try to understand how information can be extracted from the collective responses of neural populations to high-dimensional, highly structured stimuli.

### Appendix A: Derivation of Equations 3.3 and 3.4

Here we present the details of some of the calculations leading to the results presented in the main text. We follow the assumptions made in section 2. The following derivations involve rearrangement of somewhat complicated algebraic expressions. These steps are best checked using a computer algebra system.

We compute the likelihood in equation 3.1 by marginalizing over the stimulus values; the resulting marginal distribution of the measurements is again multivariate normal. The determinants of the covariance matrices appear in the definition of the multivariate normal distribution, and their ratio therefore appears in the likelihood ratio. We note that this determinant does not depend on the set *L*, since all such covariance matrices can be obtained from each other by permuting appropriate rows and columns. Summing over all sets *L* (choices of *n* out of *N* possible locations) yields equations 3.3 and 3.4.
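Marginalizing over gaussian stimuli has a simple consequence that the derivation relies on: if the stimuli have covariance Σ_s and the measurement noise has covariance Σ_m, the marginal covariance of the measurements is Σ_s + Σ_m. This additivity can be verified numerically; the sketch below uses illustrative equicorrelated covariances:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
rho_s, rho_m = 0.8, 0.3            # stimulus and measurement correlations (illustrative)
sigma_s2, sigma_m2 = 1.0, 0.25     # stimulus and measurement variances (illustrative)

def equicorr(var, rho):
    # equicorrelated covariance matrix: var * [(1 - rho) I + rho 11^T]
    return var * ((1 - rho) * np.eye(N) + rho * np.ones((N, N)))

Sigma_s = equicorr(sigma_s2, rho_s)
Sigma_m = equicorr(sigma_m2, rho_m)

n = 100_000
s = rng.multivariate_normal(np.zeros(N), Sigma_s, size=n)      # stimuli
x = s + rng.multivariate_normal(np.zeros(N), Sigma_m, size=n)  # noisy measurements
emp_cov = np.cov(x, rowvar=False)
pred_cov = Sigma_s + Sigma_m
print(np.round(emp_cov - pred_cov, 3))  # entries should be near zero
```

The off-diagonal entries of the marginal covariance combine both correlation sources, ρ_s σ_s² + ρ_m σ_m², which is why stimulus and measurement correlations interact in the likelihood ratio.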

### Appendix B: Asymptotic Analysis of Equation 3.4

Here we present some asymptotic results for the decision variable given in equation 3.4. The main results are obtained in the limit of small measurement noise; equivalent results can be obtained for large external variability. We expand equation 3.4 and retain only the terms with the largest contribution to obtain approximations for the decision variable in different parameter regimes.

#### B.1 Small Measurement Noise and Identical Distractors

In the limiting cases, we obtain the exponents given in equations 3.6 and 3.7 discussed in the text.

#### B.2 Perfect Performance When

We show that an ideal observer performs perfectly in the limit of identical (perfectly correlated) measurement noise. For a fixed number of stimuli, on target-absent trials the exponential term vanishes in this limit, while the prefactor remains bounded. On target-present trials, when stimulus *i* is the target, the exponential term dominates the prefactor. A similar argument works for the summands for which *i* is not a target.
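The intuition can be illustrated with a small simulation: perfectly correlated measurement noise shifts all measurements by the same amount, so the spread of the measurements is noise free, and with identical distractors any appreciable spread signals a target. The spread rule below is a stand-in for the full ideal observer, and the threshold and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials = 6, 4000
sigma_m = 0.5     # measurement noise standard deviation (illustrative)

def accuracy(rho_m):
    cov = sigma_m**2 * ((1 - rho_m) * np.eye(N) + rho_m * np.ones((N, N)))
    correct = 0
    for t in range(trials):
        present = t % 2 == 0
        s = np.full(N, rng.normal(1.0, 1.0))   # identical distractors
        if present:
            s[rng.integers(N)] = 0.0           # target orientation (illustrative)
        x = s + rng.multivariate_normal(np.zeros(N), cov)
        # spread rule: report "present" if the measurements are not all equal
        report_present = (x.max() - x.min()) > 0.05
        correct += report_present == present
    return correct / trials

acc_corr = accuracy(1.0)   # perfectly correlated noise
acc_ind = accuracy(0.0)    # independent noise (same naive rule)
print(f"accuracy with rho_m = 1.0: {acc_corr:.3f}")
print(f"accuracy with rho_m = 0.0: {acc_ind:.3f}")
```

With ρ_m = 1 the spread rule is nearly perfect, failing only when a distractor happens to land very close to the target; with independent noise the same rule is at chance, since noise alone produces spread on every trial.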

#### B.3 Single Target with Increasing Number of Distractors

We still work under the assumption that measurement noise is relatively weak, so that we can use equation B.1. Note that on target-absent trials, the measurements concentrate around *s*, the true value of the (identical) distractors. The two terms in the exponential of equation B.1 can then be compared directly; a similar argument holds on target-present trials.

These estimates hold to leading order in *N*. We abuse notation slightly and use order notation only on the terms that include measurement noise. As stimuli become more dissimilar to the target (i.e., as *s*^2 increases), the corresponding exponential terms decrease, the decision variable becomes more negative, and it is hence easier to infer that a target is absent. However, the noise terms can be both positive and negative, and performance therefore decreases with the number of stimuli. Similarly, we can see that when a target is present, again to leading order in *N*, as measurement noise decreases or *s*^2 increases, the first term in equation B.2 diverges exponentially, and the terms in equation B.3 approach 0 exponentially. As a result, the decision variable increases. However, the noise term increases with *N*, and an increase in the number of stimuli again decreases performance.

If , (i.e., measurement noise is strongly correlated), the first term in the exponential of equation B.1 dominates. Thus, when correlations increase faster than the inverse of the number of distractors, performance increases with the number of distractors.
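The effect of adding distractors under weak measurement correlations can be sketched with a simple heuristic: with independent noise, each additional distractor is one more chance for noise to make a distractor measurement resemble the target, so false alarms accumulate with *N*. The rule below is a hypothetical max-style heuristic, not the ideal observer of the main text, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_m = 0.4   # independent measurement noise sd (illustrative)
s = 1.0         # common distractor value; the target value is 0 (illustrative)

def false_alarm_rate(N, trials=50_000):
    # target-absent trials: N noisy measurements of identical distractors at s
    x = s + sigma_m * rng.standard_normal((trials, N))
    # heuristic rule: report "present" if any measurement lies closer
    # to the target (0) than to the distractor value (s)
    return (np.abs(x).min(axis=1) < s / 2).mean()

set_sizes = (2, 4, 8, 16)
rates = [false_alarm_rate(N) for N in set_sizes]
print(dict(zip(set_sizes, np.round(rates, 3))))
```

Since each measurement independently crosses the criterion with some fixed probability *p*, the false-alarm rate grows as 1 − (1 − *p*)^*N*, which is one way to see why performance degrades with set size when noise is independent.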

#### B.4 Weak External Structure, Arbitrary Number of Targets

##### B.4.1 Special Case:

Here the set *L* has only one element. In this case, equation 3.9 reduces to a much simpler expression that is independent of the measurement correlations. Hence, the decision boundary and the performance of an ideal observer are unaffected by measurement correlations.

#### B.5 Near-perfect Performance

Suppose *T* = 1. From equation 3.10, if *L* is the set of targets, then this expression is approximately zero and the exponential is approximately unity. When *L* is a set not consisting of all targets, then the expression in equation 3.10 has expectation greater than zero, and the corresponding term in equation 3.9 is dominated by the exponential. If *T* = 0, then equation 3.10 will be greater than zero for all sets *L*, and the same argument applies.

#### B.6 Asymptotics for Large *N*

Here we consider the case in which the number of stimuli, *N*, is large. We assume that a fraction *K* of the stimuli are targets, so that there are *KN* targets and (1 − *K*)*N* distractors. Grouping the terms in the exponential of the decision variable given by equation 3.4 and reorganizing them yields equation 3.11.

### Appendix C: Mean Stimulus Orientation: Left or Right Discrimination Task

The observer is presented with *N* stimuli on every trial. For concreteness, we can think of the stimuli as Gabor patches with various orientations. The task is to decide whether the mean orientation of the set is to the left or right of vertical; we denote the right condition by *C* = 1. The observer makes a decision based on noisy measurements of the stimuli. Stimulus orientations are drawn from a multivariate normal distribution with covariance matrix defined in equation 2.2. As in the target detection task, we assume the measurements follow a multivariate normal distribution with mean vector and covariance matrix specified in equation 2.3.

Here *z* is a normalization constant. By symmetry, it is easy to see that the decision boundary in the space of measurements is given by the hyperplane on which the sample mean of the measurements vanishes. An ideal observer therefore bases the decision only on the sample mean.
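Because the ideal observer bases the left/right decision on the sign of the sample mean, its accuracy follows from the variance of that mean, which for equicorrelated stimuli and noise is [σ_s²(1 + (N − 1)ρ_s) + σ_m²(1 + (N − 1)ρ_m)]/N. The sketch below evaluates the resulting accuracy analytically; μ₀ and all other parameter values are illustrative:

```python
from math import erf, sqrt

N = 8
sigma_s2, rho_s = 1.0, 0.2   # stimulus variance and correlation (illustrative)
sigma_m2 = 0.5               # measurement noise variance (illustrative)
mu0 = 0.3                    # hypothetical mean orientation under C = 1

def accuracy(rho_m):
    # variance of the sample mean of the measurements
    var_mean = (sigma_s2 * (1 + (N - 1) * rho_s)
                + sigma_m2 * (1 + (N - 1) * rho_m)) / N
    # P(sample mean > 0 | C = 1) for a sign-of-mean observer
    return 0.5 * (1 + erf(mu0 / sqrt(2 * var_mean)))

accs = {rho: round(accuracy(rho), 3) for rho in (0.0, 0.5, 0.9)}
print(accs)
```

Because ρ_m only enters through the variance of the sample mean, increasing measurement correlations strictly inflates that variance and strictly lowers accuracy, matching the behavior shown in Figure 7A.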

## Acknowledgments

K.J. was supported by NSF award DMS-1122094. W.J.M. was supported by award R01EY020958 from the National Eye Institute and award number W911NF-12-1-0262 from the Army Research Office.

## References

Nolte, L., & Jaarsma, D. (1967). More on the detection of one of *M* orthogonal signals. *Journal of the Acoustical Society of America*, 41, 497–505.

Townsend, J. T. (1974). Issues and models concerning the processing of a finite number of inputs. In B. H. Kantowitz (Ed.), *Human information processing: Tutorials in performance and cognition*. Hillsdale, NJ: Erlbaum.

## Author notes

Samuel Carroll is now at the Department of Mathematics, University of Utah.