Abstract
The canonical hot hand fallacy result was recently reversed, based largely on a single statistic and a data set that was underpowered for individual-level testing. Here we perform a more robust analysis, testing whether hot hand performance exists across (i) data sets: four different controlled shooting experiments, (ii) time: multiple sessions per individual spread across a six-month gap, and (iii) various (improved) approaches to statistical testing. We find strong evidence of hot hand performance, both across data sets and within individuals across time. Moreover, in a study of beliefs, we find that expert observers can successfully predict which shooters get the hottest.