Daniel F. McCaffrey
Journal Articles
Publisher: Journals Gateway
Education Finance and Policy (2012) 7 (2): 170–202.
Published: 01 April 2012
Abstract
The Project on Incentives in Teaching (POINT) was a three-year study testing the hypothesis that rewarding teachers for improved student scores on standardized tests would cause scores to rise. Results, as described in Springer et al. (2010b), did not confirm this hypothesis. In this article we provide additional information on the POINT study that may be of particular interest to researchers contemplating their own studies of similar policies. Our discussion focuses on the policy environment in which POINT was launched, considerations that affected the design of POINT, and a variety of lessons learned from the implementation of the experiment.
Education Finance and Policy (2009) 4 (4): 439–467.
Published: 01 October 2009
Abstract
This article develops a model for longitudinal student achievement data designed to estimate heterogeneity in teacher effects across students of different achievement levels. The model specifies interactions between teacher effects and students' predicted scores on a test, estimating both the average effect of each teacher and interaction terms indicating whether individual teachers are differentially effective with students of different predicted scores. Using various longitudinal data sources, we find evidence of these interactions of relatively consistent but modest magnitude across contexts, accounting for about 10 percent of the total variation in teacher effects across all students. However, how much the interactions matter in practice depends on the heterogeneity of the groups of students taught by different teachers. Using empirical estimates of student heterogeneity across teachers, we find that the interactions account for about 3–4 percent of the total variation in teacher effects on different classes, with somewhat larger values in middle school mathematics. Our findings suggest that ignoring these interactions is unlikely to introduce appreciable bias in estimated teacher effects for most teachers in most settings. These results should be of interest to policy makers concerned about the validity of value-added teacher effect estimates.
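The interaction structure described in this abstract can be illustrated with a small simulation. All variance components and sample sizes below are arbitrary assumptions chosen for illustration, not the paper's estimates: each teacher gets an average effect plus an interaction slope on students' predicted scores, and the share of effect variance attributable to the interactions is computed directly.

```python
import numpy as np

rng = np.random.default_rng(1)
n_teachers, n_students = 200, 30

# Hypothetical variance components (assumptions for illustration only).
theta = rng.normal(0.0, 1.0, n_teachers)   # each teacher's average effect
gamma = rng.normal(0.0, 0.3, n_teachers)   # teacher-by-achievement interaction slope
pred = rng.normal(0.0, 1.0, (n_teachers, n_students))  # students' predicted scores

# A teacher's effect on a given student = average effect + slope * predicted score.
effects = theta[:, None] + gamma[:, None] * pred

# Share of total effect variance across students attributable to the interactions.
share = (effects - theta[:, None]).var() / effects.var()
```

With these made-up components the interaction share comes out below 10 percent; shrinking the within-class spread of `pred` (more homogeneous classes) shrinks the share further, which is the mechanism behind the smaller 3–4 percent figure for variation across classes.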
Education Finance and Policy (2009) 4 (4): 572–606.
Published: 01 October 2009
Abstract
The utility of value-added estimates of teachers' effects on student test scores depends on whether they can distinguish between high- and low-productivity teachers and predict future teacher performance. This article studies the year-to-year variability in value-added measures for elementary and middle school mathematics teachers from five large Florida school districts. We find year-to-year correlations in value-added measures in the range of 0.2–0.5 for elementary school and 0.3–0.7 for middle school teachers. Much of the variation in measured teacher performance (roughly 30–60 percent) is due to sampling error from “noise” in student test scores. Persistent teacher effects account for about 50 percent of the variation not due to noise for elementary teachers and about 70 percent for middle school teachers. The remaining variance is due to teacher-level time-varying factors, but little of it is explained by observed teacher characteristics. Averaging estimates from two years greatly improves their ability to predict future performance.
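The final point, that averaging two years of estimates improves prediction of future performance, can be sketched with simulated value-added measures. The variance shares below are illustrative assumptions loosely in the spirit of the ranges quoted above, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # simulated teachers

# Assumed variance shares (illustrative only): persistent quality 0.5,
# time-varying teacher factors 0.2, sampling noise from student tests 0.3.
persistent = rng.normal(0.0, np.sqrt(0.5), n)
time_varying = rng.normal(0.0, np.sqrt(0.2), (3, n))
noise = rng.normal(0.0, np.sqrt(0.3), (3, n))

# Three years of value-added estimates per teacher.
years = persistent + time_varying + noise

# Predict year-3 performance from a single year vs. a two-year average.
r_single = np.corrcoef(years[0], years[2])[0, 1]
r_avg = np.corrcoef((years[0] + years[1]) / 2, years[2])[0, 1]
```

Averaging halves the variance of the transitory components while leaving the persistent signal intact, so `r_avg` exceeds `r_single`; under these assumed shares the single-year correlation lands near 0.5, inside the range reported for elementary teachers.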