Abstract
By comparing experimental and propensity-score impact estimates of dropout prevention programs, we examine whether propensity-score methods produce unbiased estimates of program impacts. We find no consistent evidence that such methods replicate experimental impacts in our setting, even when the data available for matching are extensive. Our findings suggest that evaluators who plan to use nonexperimental methods, such as propensity-score matching, need to consider carefully how programs recruit individuals and why individuals enter programs, since unobserved factors may exert powerful influences on outcomes that such methods cannot easily capture.
© 2004 President and Fellows of Harvard College and the Massachusetts Institute of Technology