J. R. Lockwood
Journal Articles
Education Finance and Policy (2013) 8 (4): 459–493.
Published: 01 October 2013
Abstract
We consider the challenges and implications of controlling for school contextual bias when modeling teacher preparation program effects. Because teachers are not randomly distributed across schools, failing to account for contextual factors in achievement models could bias preparation program estimates. Including school fixed effects controls for school environment by relying on differences among student outcomes within the same schools to identify the program effects, but this specification may be unidentified. Using statewide data from Florida, we examine whether the inclusion of school fixed effects is feasible, compare the sensitivity of the estimates to the assumptions underlying the fixed effects, and determine what their inclusion implies about the precision of the preparation program estimates. We discuss the implications of our results for the feasibility, precision, and ranking of programs under the school fixed effects model for policy makers designing teacher preparation program evaluation systems.
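To make the identification issue concrete, here is a minimal sketch of the kind of specification the abstract describes; the notation is assumed for illustration and is not taken from the article:

\[
y_{ist} = \lambda\, y_{i,t-1} + X_{it}\beta + \sum_{p} \delta_p P_{ip} + \phi_s + \varepsilon_{ist},
\]

where \(y_{ist}\) is the achievement of student \(i\) in school \(s\) in year \(t\), \(P_{ip}\) indicates whether the student's teacher graduated from preparation program \(p\), \(\delta_p\) is the program effect of interest, and \(\phi_s\) is the school fixed effect. With \(\phi_s\) included, each \(\delta_p\) is identified only through within-school comparisons, so a school must employ teachers from more than one program for those comparisons to exist; this is the feasibility question the article examines.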
Journal Articles
Education Finance and Policy (2012) 7 (2): 170–202.
Published: 01 April 2012
Abstract
The Project on Incentives in Teaching (POINT) was a three-year study testing the hypothesis that rewarding teachers for improved student scores on standardized tests would cause scores to rise. Results, as described in Springer et al. (2010b), did not confirm this hypothesis. In this article we provide additional information on the POINT study that may be of particular interest to researchers contemplating their own studies of similar policies. Our discussion focuses on the policy environment in which POINT was launched, considerations that affected the design of POINT, and a variety of lessons learned from the implementation of the experiment.
Journal Articles
Education Finance and Policy (2009) 4 (4): 439–467.
Published: 01 October 2009
Abstract
This article develops a model for longitudinal student achievement data designed to estimate heterogeneity in teacher effects across students of different achievement levels. The model specifies interactions between teacher effects and students' predicted scores on a test, estimating both the average effects of individual teachers and interaction terms indicating whether individual teachers are differentially effective with students of different predicted scores. Using various longitudinal data sources, we find evidence of these interactions of relatively consistent but modest magnitude across different contexts, accounting for about 10 percent of the total variation in teacher effects across all students. However, how much the interactions matter in practice depends on the heterogeneity of the groups of students taught by different teachers. Using empirical estimates of the heterogeneity of students across teachers, we find that the interactions account for about 3–4 percent of total variation in teacher effects on different classes, with somewhat larger values in middle school mathematics. Our findings suggest that ignoring these interactions is unlikely to introduce appreciable bias in estimated teacher effects for most teachers in most settings. The results of this study should be of interest to policy makers concerned about the validity of value-added teacher effect estimates.
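A minimal sketch of a model with teacher-by-student interactions of the kind the abstract describes, with hypothetical notation chosen for illustration:

\[
y_{it} = \mu + \theta_{j(i)} + \gamma_{j(i)}\big(\tilde{y}_{it} - \bar{\tilde{y}}\big) + \varepsilon_{it},
\]

where \(\tilde{y}_{it}\) is student \(i\)'s predicted score, \(\theta_j\) is teacher \(j\)'s average effect, and the teacher-specific slope \(\gamma_j\) captures whether teacher \(j\) is differentially effective with students of low or high predicted scores. Under this reading, the abstract's "about 10 percent" figure corresponds to the share of total teacher-effect variation across students attributable to the \(\gamma_j\) terms.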
Journal Articles
Education Finance and Policy (2009) 4 (4): 572–606.
Published: 01 October 2009
Abstract
The utility of value-added estimates of teachers' effects on student test scores depends on whether they can distinguish between high- and low-productivity teachers and predict future teacher performance. This article studies the year-to-year variability in value-added measures for elementary and middle school mathematics teachers from five large Florida school districts. We find year-to-year correlations in value-added measures in the range of 0.2–0.5 for elementary school and 0.3–0.7 for middle school teachers. Much of the variation in measured teacher performance (roughly 30–60 percent) is due to sampling error from “noise” in student test scores. Persistent teacher effects account for about 50 percent of the variation not due to noise for elementary teachers and about 70 percent for middle school teachers. The remaining variance is due to teacher-level time-varying factors, but little of it is explained by observed teacher characteristics. Averaging estimates from two years greatly improves their ability to predict future performance.
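The decomposition behind these figures can be sketched as follows, under the standard assumption that the components are mutually independent (notation assumed for illustration, not taken from the article):

\[
\hat{\theta}_{jt} = \theta_j + \nu_{jt} + e_{jt},
\]

where \(\theta_j\) is teacher \(j\)'s persistent effect, \(\nu_{jt}\) is a time-varying teacher component, and \(e_{jt}\) is sampling error from noise in student test scores. The year-to-year correlation of the estimates is then \(\sigma_\theta^2 / (\sigma_\theta^2 + \sigma_\nu^2 + \sigma_e^2)\), and averaging estimates from two years roughly halves the variance contributed by the transitory components, giving \(\sigma_\theta^2 + (\sigma_\nu^2 + \sigma_e^2)/2\), which is why the two-year average predicts future performance better.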