David Blazar
Journal Articles
Publisher: Journals Gateway
Education Finance and Policy (2024) 19 (3): 492–523.
Published: 02 July 2024
Abstract
Instructional coaching is an attractive alternative to one-size-fits-all teacher training and development in part because it is purposefully differentiated: Programming is aligned to individual teachers’ needs and implemented by an individual coach. But how much of the benefit of coaching as an instructional improvement model depends on the specific coach with whom a teacher works? Collaborating with a national teacher training and development organization, TNTP, we find substantial variability in effectiveness across coaches in terms of changes in preservice teachers’ instructional practice (roughly 0.25 to 0.3 standard deviations in our preferred sample and models). The magnitude of this coach effectiveness heterogeneity is quite similar to the average coaching program effects on teaching practice identified in other research. Through a set of alternative model specifications and permutation tests, we rule out the possibility that our estimates of coach effectiveness heterogeneity are driven by nonrandom sorting of coaches to teachers, at least on the observable characteristics available in our data, as well as the possibility that these estimates are simply statistical noise. These findings suggest that identifying, recruiting, and supporting highly skilled coaches will be key to scaling instructional coaching programs.
Includes: Supplementary data
Education Finance and Policy (2020) 15 (3): 397–427.
Published: 01 June 2020
Abstract
Teacher evaluation reform has been among the most controversial education reforms in recent years. It is also one of the costliest in terms of the time teachers and principals must spend on classroom observations. We conducted a randomized field trial at four sites to evaluate whether substituting teacher-collected videos for in-person observations could improve the value of teacher observations for teachers, administrators, or students. Relative to teachers in the control group, who participated in standard in-person observations, teachers in the video-based treatment group reported that post-observation meetings were more “supportive,” and they were better able to identify a specific practice they changed afterward. Treatment principals were able to shift their observation work to noninstructional times. The program also substantially increased teacher retention. Nevertheless, the intervention did not improve students’ academic achievement or self-reported classroom experiences, either in the year of the intervention or for the next cohort of students. Following from the literature on observation and feedback cycles in low-stakes settings, we hypothesize that to improve student outcomes, schools may need to pair video feedback with more specific supports for desired changes in practice.
Education Finance and Policy (2018) 13 (3): 281–309.
Published: 01 July 2018
Abstract
There is growing interest among researchers, policy makers, and practitioners in identifying teachers who are skilled at improving student outcomes beyond test scores. However, questions remain about the validity of these teacher effect estimates. Leveraging the random assignment of teachers to classes, I find that teachers have causal effects on their students’ self-reported behavior in class, self-efficacy in math, and happiness in class that are similar in magnitude to their effects on math test scores. Weak correlations between teacher effects on different student outcomes indicate that these measures capture unique skills that teachers bring to the classroom. Teacher effects calculated in nonexperimental data are related to these same outcomes following random assignment, revealing that they contain important information about teachers. However, for some nonexperimental teacher effect estimates, large and potentially important degrees of bias remain. These results suggest that researchers and policy makers should proceed with caution when using these measures. They likely are more appropriate for low-stakes decisions—such as matching teachers to professional development—than for high-stakes personnel decisions and accountability.