Search results for Wei Ji Ma (1–2 of 2)
Neural Computation (2018) 30 (12): 3327–3354.
Published: 01 December 2018
Abstract
The Bayesian model of confidence posits that confidence reflects the observer's posterior probability that the decision is correct. Hangya, Sanders, and Kepecs (2016) have proposed that researchers can test the Bayesian model by deriving qualitative signatures of Bayesian confidence (i.e., patterns that one would expect to see if an observer were Bayesian) and looking for those signatures in human or animal data. We examine two proposed signatures, showing that their derivations contain hidden assumptions that limit their applicability and that they are neither necessary nor sufficient conditions for Bayesian confidence. One signature is an average confidence of 0.75 on trials with neutral evidence. This signature holds only when class-conditioned stimulus distributions do not overlap and when internal noise is very low. Another signature is that as stimulus magnitude increases, confidence increases on correct trials but decreases on incorrect trials. This divergence signature holds only when stimulus distributions do not overlap or when noise is high. Navajas et al. (2017) have proposed an alternative form of this signature; we find no indication that this alternative form is expected under Bayesian confidence. Our observations give us pause about the usefulness of the qualitative signatures of Bayesian confidence. To determine the nature of the computations underlying confidence reports, there may be no shortcut to quantitative model comparison.
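To make the 0.75 signature concrete, here is a minimal simulation sketch. It is not taken from the article: the uniform, non-overlapping class-conditioned distributions on [-1, 0] and [0, 1], the noise level sigma, and the definition of neutral evidence as s = 0 are all illustrative assumptions. In this regime the posterior probability of the chosen category is approximately uniform on [0.5, 1] on neutral-evidence trials, so mean confidence comes out near 0.75.

```python
# Sketch of the 0.75 signature for Bayesian confidence.
# Assumptions (hypothetical, not from the article): categories C = +/-1 with
# uniform, non-overlapping stimulus distributions on [-1, 0] and [0, 1],
# Gaussian measurement noise, and "neutral evidence" meaning stimulus s = 0.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sigma = 0.05       # low internal noise: the regime where the signature holds
n_trials = 100_000

s = np.zeros(n_trials)                          # neutral-evidence trials
x = s + sigma * rng.standard_normal(n_trials)   # noisy internal measurements

# Likelihoods under the uniform class-conditioned distributions:
# p(x | C=+1) = Phi(x/sigma) - Phi((x-1)/sigma), and analogously for C=-1.
lik_pos = norm.cdf(x / sigma) - norm.cdf((x - 1) / sigma)
lik_neg = norm.cdf((x + 1) / sigma) - norm.cdf(x / sigma)
post_pos = lik_pos / (lik_pos + lik_neg)        # posterior p(C=+1 | x)

# Bayesian confidence: posterior probability of the chosen category.
confidence = np.maximum(post_pos, 1 - post_pos)
print(confidence.mean())   # ~0.75: confidence is ~Uniform(0.5, 1) here
```

With overlapping distributions or higher noise, the posterior on neutral trials is no longer uniform on [0.5, 1] and the mean departs from 0.75, which is the limitation the article identifies.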
Neural Computation (2015) 27 (11): 2318–2353.
Published: 01 November 2015
Abstract
Humans and other animals base their decisions on noisy sensory input. Much work has been devoted to understanding the computations that underlie such decisions. The problem has been studied in a variety of tasks and with stimuli of differing complexity. However, how the statistical structure of stimuli, along with perceptual measurement noise, affects perceptual judgments is not well understood. Here we examine how correlations between the components of a stimulus (stimulus correlations), together with correlations in sensory noise, affect decision making. As an example, we consider the task of detecting the presence of a single or multiple targets among distractors. We assume that both the distractors and the observer's measurements of the stimuli are correlated. The computations of an optimal observer in this task are nontrivial yet can be analyzed and understood intuitively. We find that when distractors are strongly correlated, measurement correlations can have a strong impact on performance. When distractor correlations are weak, measurement correlations have little impact unless the number of stimuli is large. Correlations in neural responses to structured stimuli can therefore have a strong impact on perceptual judgments.
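As a rough illustration of the optimal observer's computation, here is a sketch under assumed parameters. The equicorrelated covariance structure, the known target value s_T, the set size N, and all numerical values are hypothetical, not taken from the article. The observer marginalizes over possible target locations and compares the likelihood of the measurements under target-present and target-absent generative models.

```python
# Sketch of an optimal observer for detecting one target among correlated
# distractors, with correlated measurement noise. All parameters are
# illustrative assumptions, not values from the article.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
N = 4                      # number of stimuli (assumed)
s_T = 1.0                  # known target value (assumed)
rho_s, var_s = 0.8, 1.0    # distractor correlation and variance (assumed)
rho_m, var_m = 0.5, 0.25   # measurement-noise correlation and variance (assumed)

def equicorr(n, rho, var):
    """Equicorrelated covariance matrix: var on the diagonal, rho*var off it."""
    return var * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))

Sigma_s = equicorr(N, rho_s, var_s)   # distractor covariance
Sigma_m = equicorr(N, rho_m, var_m)   # measurement-noise covariance

def posterior_present(x, prior=0.5):
    """p(target present | measurements x) for the optimal observer."""
    # Target absent: all N stimuli are correlated distractors.
    lik_absent = multivariate_normal.pdf(x, mean=np.zeros(N), cov=Sigma_s + Sigma_m)
    # Target present at location i (prior 1/N per location): stimulus i is
    # fixed at s_T, so its stimulus variance and covariances drop out.
    lik_present = 0.0
    for i in range(N):
        Sigma_i = Sigma_s.copy()
        Sigma_i[i, :] = 0.0
        Sigma_i[:, i] = 0.0
        mu = np.zeros(N)
        mu[i] = s_T
        lik_present += multivariate_normal.pdf(x, mean=mu, cov=Sigma_i + Sigma_m) / N
    return prior * lik_present / (prior * lik_present + (1 - prior) * lik_absent)

# One simulated target-present trial:
loc = rng.integers(N)
s = rng.multivariate_normal(np.zeros(N), Sigma_s)   # correlated distractors
s[loc] = s_T                                        # insert the target
x = rng.multivariate_normal(s, Sigma_m)             # correlated measurements
print(posterior_present(x))
```

Sweeping rho_s and rho_m in a sketch like this reproduces the qualitative pattern the abstract describes: measurement correlations matter most when distractor correlations are strong, and matter little for weak distractor correlations unless N is large.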