Abstract

Given the significant growth and geographic expansion of private school choice programs over the past two decades, it is important to examine how traditional public schools respond to the sudden injection of competition for students and resources. Prior studies of this nature have been limited to Florida and Milwaukee; this paper uses multiple analytic strategies to examine the competitive impacts of the Louisiana Scholarship Program (LSP) on the achievement of students in affected public schools. Serving 4,954 students in its first year of statewide expansion, this targeted school voucher program provides public funds for low-income students in low-performing public schools to enroll in participating private schools across the state of Louisiana. Using (1) a school fixed effects approach and (2) a regression discontinuity framework, this competitive effects analysis reveals neutral to positive impacts that are small in magnitude. Policy implications are discussed.

1.  Introduction

Private school choice programs, which provide families with public funds to attend private schools, currently operate in twenty-six states plus the District of Columbia (Schultz et al. 2017). While programs date back to 1869 in Vermont, much of the growth in private school choice programs has been recent. The total number of programs in operation nationwide increased by 35 percent in one year alone, from twenty to twenty-seven programs, leading the Wall Street Journal to call 2011 the “year of school choice” (Editorial Board 2011). Recent estimates show there are currently fifty-two programs in operation, serving over 442,000 children (Schultz et al. 2017). Choice proponents argue that such growth could improve the education system as a whole by introducing positive competition between public and private schools. The extent to which we might expect positive competitive effects depends, however, on how traditional public schools respond to the pressure to attract and retain students. In this paper, we exploit variation in the geographic location of private schools in Louisiana to estimate the competitive impact of a private school choice program on public school math and English language arts (ELA) achievement.

The Louisiana Scholarship Program (LSP) is a targeted school voucher program that provides public funds to low-income students in low-performing public schools to enroll in participating religious and nonreligious private schools. Initially piloted in New Orleans in 2008, Act 2 of the 2012 Regular Session expanded the LSP statewide, allowing thousands of public school students to transfer out of their residentially assigned public schools and into private schools across the state of Louisiana. Voucher eligibility was restricted to students with family incomes at or below 250 percent of the federal poverty guidelines. In addition, eligible students had to either attend a public school that received a “C,” “D,” or “F” grade in October 2011 or be applying for entry into kindergarten. In the 2012–13 school year, 9,831 eligible Louisiana students applied for an LSP voucher. Ultimately 4,954 students from low-performing public schools used these vouchers to enroll in private schools in the 2012–13 school year. All of these students were from low-income families and over 80 percent were African American. The value of the scholarship varied by school, depending on the tuition and fees at each participating private school, but it could not exceed 90 percent of the total state and local funding in that student's home school district. The average value of an LSP scholarship in 2012–13 was $5,245, which was approximately $3,000 less than average expenditures in C-, D-, or F-graded public schools that year.

Although all Louisiana public schools experienced some degree of private school competition prior to 2012–13, the statewide expansion of the program constituted a policy shock that arguably increased the competitive pressure experienced by all public schools. For those public schools graded C, D, or F, this shock would have been especially salient, as their students suddenly became eligible to transfer to a private school alternative at state expense. Using an identification strategy that relies on this distinction between eligible and ineligible public schools, we exploit the timing of the voucher policy in Louisiana to estimate the public school response to private school competition. We take advantage of the fact that school performance in the 2010–11 school year was unaffected by the program, which allows us to use this as the baseline year. Letter grades were released in fall 2011 summarizing school performance for the prior year. The legislation expanding the program then passed in spring 2012. We do not use the 2011–12 school year as the outcome year, out of concern that school administrators may have already been talking about potential competitive pressure from the program as policy makers in the Louisiana legislature were deliberating the details of the bill over the course of the regular session, which convened on 12 March 2012 and adjourned on 4 June 2012. Outcomes are therefore examined in 2012–13, which is the first year eligible students across the state were permitted to enroll in a private school with a voucher. The observed impacts are small, but never negative. Two out of four competition measures (density and diversity measures) reveal a positive, statistically significant impact on public school math scores and there are no statistically significant impacts on ELA scores.

The remainder of this paper proceeds as follows. First, we put the LSP in context by presenting statistics on the set of all private school choice programs that currently exist in the United States. We then describe the theoretical framework underlying private school programs and follow with a summary of the literature on the competitive impacts of private school choice programs. Next, we describe the data and research design used in this analysis, followed by a presentation of the results. The paper concludes with a summary of the main findings and a discussion of the policy implications.

2.  Private School Choice Programs in the United States

Private school choice programs generally provide public funds either directly or indirectly to students to attend private schools of their choice. Broadly speaking, private school choice programs are differentiated by the way in which government funds are transferred to families and the restrictions placed on how these funds can be spent.

Vouchers and Education Savings Accounts (ESAs), for example, provide government resources directly to families. ESAs, available in Arizona, Florida, Mississippi, North Carolina, and Tennessee, allow eligible families to access individual savings accounts populated with public funds. Depending on the program's restrictions, families can use these funds to purchase supplemental education resources and services, such as tutoring, or to enroll directly in private schools. Private school vouchers—the focus of our study—provide government funds to cover all or a portion of private school tuition via a voucher or scholarship. Unlike ESA funds, however, voucher funds are redeemable only for tuition at a participating private school, rather than being deposited in an account the family controls.

Other programs provide public resources indirectly through tax credits or tax deductions. Tax credit scholarships operate very similarly to private school voucher programs, in that families are provided with scholarships to attend specific private schools rather than readily accessible funds with which to purchase private schooling. Unlike vouchers, however, tax credit scholarship programs raise funds indirectly through voluntary donations by taxpayers who, in turn, receive full or partial tax credits for their donations. Finally, tax laws in Alabama, Iowa, Illinois, Indiana, Louisiana, Minnesota, South Carolina, and Wisconsin allow individual families to deduct all or a portion of their spending on private education and supplemental education expenses each fiscal year.

Today, fifty-two private school choice programs have been enacted in twenty-six states plus the District of Columbia (Schultz et al. 2017). In absolute terms, the two largest voucher programs are Florida's John M. McKay Scholarship for Students with Disabilities Program and Indiana's Choice Scholarship Program. Even those programs' enrollments pale in comparison to that of the largest tax credit scholarship program—Florida's Tax Credit Scholarship program—which enrolled 98,889 students in 2016–17, representing more than two percent of all children in that state. Twenty-three of these programs, including the LSP, are means-tested, requiring applicants' family income to fall below a given threshold in order for a student to be eligible to participate. With 7,362 participants in 2014–15, the LSP was the tenth largest means-tested program in the country in terms of the percentage of all school-aged children in the state served. In this regard, an analysis of the LSP offers useful insight into similar programs across the nation.

3.  Theoretical Framework

The theory of action behind market-based school choice programs argues that expanded choice can benefit society through two channels. First, school choice will directly benefit participants by allowing them to seek out environments with offerings that best fit their educational needs and interests (Friedman 1962; Chubb and Moe 1990). Second, choice will improve the education system as choice-induced competition among schools to attract and retain students produces “a rising tide” of school improvement (Hoxby 2001). This paper focuses on the latter channel, typically referred to as the competitive effects of school choice.

Several assumptions must be satisfied for the positive competitive effects argument to hold. First, for an education marketplace to function as theorized, families must have valid and reliable information about their school options so that they select a high-quality school that will be a good fit for their child's needs (Schneider et al. 1998; Schneider, Teske, and Marschall 2000; Van Dunk and Dickman 2002). Second, schools must respond to increased competition in ways that improve their students’ academic achievement. This requires that private and public school providers are able to identify specific characteristics of high-performing schools so they can judge which responses to competition are associated with school success. For example, Bagley (2006) identifies five categories of operational responses by which schools might potentially respond to competition, including substantive changes to curriculum or facilities, in addition to structural changes to school governance. Given this variety of ways in which schools could potentially respond to competition, it is particularly important that school leaders are informed about the factors that are related to school success so they can learn from their competitors. In addition, school leaders must be able to access those resources associated with their competitors’ success and have the flexibility to implement new programs. Even if schools have a good idea of which programs they could implement to improve student academics, leaders often face legal, political, or economic constraints that prevent them from implementing these changes (Anzia and Moe 2013; Egalite et al. 2014).

Given the potential for any or all of these assumptions to fail in practice, some have questioned whether market-based school reforms could have unanticipated consequences, such as diminished resources, racial and economic segregation, and suboptimal academic experiences for the students who remain in public schools (Lankford and Wyckoff 2001; Brunner, Imazeki, and Ross 2010; Altonji, Huang, and Taber 2015). For example, national and state media outlets regularly run opinion columns in which prominent politicians, teachers’ union leaders, and activists accuse private school vouchers of siphoning funds from public schools, arguing that private school choice programs remove financial resources from those public schools that are most in need of revenue in order to improve (McCall 2014; Rich 2014; Schrier 2014).

Moreover, even if these assumptions do not fail, it is still important to acknowledge the empirical challenges that researchers face in trying to isolate pure choice-induced competitive effects. Specifically, it is quite possible for measures of public school performance to improve even in the absence of any school responses to private school competition, which might lead researchers to misattribute the cause of these improvements. For example, average achievement would increase in public schools if academically struggling students leave in large numbers to take advantage of private school choice. In this scenario, we might inappropriately attribute the observed increase in achievement to competition when schools may in fact see the program as a release valve for their more academically challenged students. Any analysis attempting to identify the causal impact of the competitive pressures of private school choice programs must carefully consider such possibilities when determining effects.

In light of these considerations, there are three different results that might occur in the wake of the establishment of private school competition: a positive, negative, or neutral effect on student achievement. Each of these potential results could be driven by a multitude of different school and parent responses to the program that might serve as the mechanisms driving these results. For example, positive impacts might be observed if vouchers provide public schools with a financial incentive to improve their performance. School leaders might work harder to encourage innovation, add or improve school programs, and organize staffing and curricula in a manner that is maximally responsive to student needs. Similarly, teachers and other staff members might exert more effort to tutor students or provide additional assistance where necessary because they are eager to see their school succeed. Furthermore, a minor reduction in student enrollment might ease pressure on overcrowded and under-resourced schools.

A negative impact might be observed if competition from a private school choice program negatively influences teachers’ job satisfaction by removing financial resources that could have otherwise been used to support their instruction, by hurting relations between school staff members and parents, and by negatively impacting teachers’ quality of life by requiring them to work longer hours for the same pay (Ladd and Fiske 2003). Further, such competition might force traditional public schools to offer an overly diverse and thematically incoherent set of courses to appeal to a broad set of student interests (Fiske and Ladd 2000), which may come at the expense of deep instruction in core areas. If public schools respond to decreases in their financial resources by limiting support staff (either instructional or administrative paraprofessionals, for example), this might result in a general lowering of schoolwide academic performance. This could be exacerbated by compositional and resource changes if the highest-achieving and most motivated families were to exit the public schools en masse (Lubienski, Gulosino, and Weitzel 2009). If this were to happen, private school vouchers might rob the public schools of academic and social capital—the positive peer effects of high-achieving classmates and the influence of motivated families who would push for overall school improvements—resulting in a downward spiral for public school performance. Thus, we might expect to see lower average scores and reduced parental involvement in those public schools that experienced the greatest competition (Epple and Romano 1998; McMillan 2000; Ladd 2002).

A neutral impact might be observed if the threat from competition is trivial or schools simply respond with empty, symbolic gestures (Hess 2002; Sullivan, Campbell, and Kisida 2008), focusing on promotional activities and marketing efforts (Loeb, Valant, and Kasman 2011; Jabbar 2015) in lieu of improving academic programming; in such a case, the impact of the choice program will not be detectable in students’ academic outcomes. We might anticipate this scenario's occurrence if the private school voucher program is small in scale, underfunded, or politically unstable. An equally important explanation might be that schools are already maximizing performance given existing financial and physical resources, as well as the human capital available to them.

In sum, it is unclear a priori whether we should anticipate a positive, neutral, or negative impact on student test scores because of the different teacher, parent, and student responses at play and the assumptions required for Hoxby's “rising tide” hypothesis to play out.

4.  Existing Research on the Competitive Effects of Private School Choice

There is a robust empirical literature examining the competitive responses by traditional public schools to private school choice programs. In general, the nineteen published studies on this topic indicate neutral to positive impacts on public school student achievement following the creation or expansion of private school choice programs. What follows is a brief summary of this research. A more detailed description is available in a separate online appendix that can be accessed on Education Finance and Policy's Web site at https://doi.org/10.1162/edfp_a_00286.

The Florida Opportunity Scholarship Program was established in 1999 as part of the reform program known as the A+ Plan. The program, which ran from June 1999 until the Florida Supreme Court ruled it unconstitutional in January 2006, offered school vouchers to students attending public schools that were designated as failing twice in a four-year period. In total, seven studies have examined the competitive effects of this program using receipt of an F grade to identify voucher-threatened schools, all of which found positive competitive impacts on affected traditional public schools (Greene 2001; Greene and Winters 2004; Figlio and Rouse 2006; West and Peterson 2006; Forster 2008a; Chakrabarti 2013; Rouse et al. 2013). A second program, the Florida Tax Credit Scholarship Program (established in 2001), is a means-tested program providing vouchers to students from low-income families. Figlio and Hart (2014) found that increases in competition as a result of this tax credit program were associated with improvements in student test scores across a variety of competition measures. Finally, the John M. McKay Scholarships for Students with Disabilities Program, established in 1999, currently serves over 30,000 students. Winters and Greene (2011) report increased exposure to this voucher program was associated with substantial improvements in the test scores of students with disabilities who remain in the public school system.

Another highly studied private school choice program is the Milwaukee Parental Choice Program (MPCP), the oldest voucher program currently in operation. Established in 1990, the MPCP provides vouchers to low- and middle-income families to attend private schools at state expense. Three published studies of the competitive effects of the MPCP report positive impacts on student achievement (Hoxby 2003; Carnoy et al. 2007; Greene and Marsh 2009) and the remaining two studies report neutral to positive results (Greene and Forster 2002; Chakrabarti 2008).

Meanwhile, studies of competition effects from school voucher or tuition tax credit programs have also been conducted in Ohio (Forster 2008b; Carr 2011), Texas (Greene and Forster 2002; Gray, Merrifield, and Adzima 2014), and Washington, DC (Greene and Winters 2007). Of these nineteen total studies, only one—an analysis of a voucher program in Washington, DC—reports no impacts across all subjects (Greene and Winters 2007).

Contributions

This paper makes two primary contributions to the existing literature on the competitive effects of private school choice programs. First, we are examining this question in a new educational and policy environment: Louisiana. Although private school choice programs are spreading across the nation, the existing private school choice competitive effects literature is largely limited to programs in Florida and Milwaukee, Wisconsin. Although certain core predictions of market theory should play out across all private choice environments, there is likely to be variation across programs due to differences in education contexts and choice program design. For example, impacts may be less concentrated in a statewide program with capped enrollment, like the LSP, than in a metropolitan-based program like Wisconsin's MPCP.

On the other hand, statewide programs often differ from one another in important ways, which could also impact competitive effects. Studies of Florida's voucher program, for example, are unable to disentangle the accountability effects of the A–F school letter grading policy from the competitive effects of the voucher threat for consistently low-performing schools because both policies were implemented at the same time. Fortunately for our research design, a robust public school accountability system had been in place in Louisiana for several years prior to the statewide expansion of the LSP, allowing for a cleaner separation of the competitive effects from accountability effects.

Another important contextual factor is the nature of program rules, such as the requirement that participating private schools be judged according to LSP student achievement on criterion-referenced exams aligned to state standards. This requirement could influence the types of private school competitors opting to participate in the market. Indeed, research indicates that only a third of Louisiana private schools opted to participate in the program (Kisida, Wolf, and Rhinesmith 2015) and these schools tended to have lower enrollment and lower tuitions than nonparticipants (Sude, DeAngelis, and Wolf 2018).

Each of the design elements described in this section could impact both the presence and magnitude of competitive effects. The first important contribution of this study, therefore, is that it allows us to determine the extent to which competitive effects vary across programs.

Second, we test for the presence of competitive responses by public schools to private school choice using multiple strategies. Previous competitive effects studies have used different methods to measure competition, yet there exists no consensus either on a single preferred measure or group of measures across studies. Our study is unique in that it uses multiple measures of competition as well as two analytical strategies to identify the impacts of competition in Louisiana. Specifically, by applying both a school fixed effects model and a regression discontinuity design, this study takes full advantage of the policy changes that resulted in the introduction of the voucher program across a diverse state, comparing pre-program trends to achievement outcomes after the introduction of the policy, as well as zeroing in on the average treatment effect at the margin separating B and C schools. This use of multiple measures and identification strategies helps to provide a more complete picture of how the LSP's competitive pressures may be affecting traditional public schools.

In sum, this study moves the field forward in two ways: it uses multiple analytic approaches to analyze public school responses to private school competition net of the stigma effect of a concurrently implemented accountability policy, and it provides an analysis of competitive effects in a previously unstudied context, allowing researchers and policy makers to gain much-needed insight into the relative influence of different features of choice program design.

5.  Research Methodology

Data

The data for this analysis come from several sources. The Louisiana Department of Education (LDOE) provided restricted-use student-level achievement data for Louisiana public school students in grades 3 through 8 on the state's standardized assessments between 2010–11 and 2012–13. We obtained publicly available school-level performance scores and letter grades for 2010–11 from the LDOE Web site. Street addresses, latitude, and longitude for all public schools in Louisiana in 2010–11 were retrieved from the National Center for Education Statistics’ Public Elementary/Secondary School Universe Survey. Finally, private school street addresses and information on religious orientation were retrieved from the National Center for Education Statistics’ Private School Universe Survey, 2011–12.

Sample Selection

We generate our analysis sample by applying a series of filters to the universe of public schools appearing in the NCES data in 2010–11. The first filter keeps only those public schools that could be successfully mapped using ArcGIS software (approximately 90 percent of schools). The second filter requires each school to have a minimum of three students taking the state test—the Louisiana Educational Assessment Program (LEAP) or integrated Louisiana Educational Assessment Program (iLEAP)—in grades 3 through 8, reducing the sample from 1,027 to 981 schools.1 The third and fourth filters exclude charter schools, which already experience competition for enrollment and thus are not relevant for this study, and schools in New Orleans, where a pilot version of the LSP was already operating. This filtering process reduces the final sample to 781,733 students in 939 public schools, of which 676 were voucher-eligible due to their receiving a C, D, or F grade at baseline.

Empirical Approach

Competition Measures

We use a set of geocoded competition measures to capture variation in the level of private school competition experienced by public schools. Specifically, we use measures capturing four dimensions of competition introduced earlier: distance, density, diversity, and concentration.

First, for our distance measure, we calculate the crow's-flight distance from each eligible public school to the nearest private school that was in existence before the announcement of the program. This measure has been previously used in studies of the competition effects of the Florida Tax Credit Scholarship Program and the DC Opportunity Scholarship Program (Greene and Winters 2007; Figlio and Hart 2014). To ease interpretation, we multiply the distance variable by −1 so that a positive coefficient on the distance variable indicates that closer competitors are associated with better student outcomes.
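The crow's-flight calculation can be sketched in a few lines. The haversine formula, function names, and coordinates below are illustrative only; the paper's measures were built with ArcGIS from NCES address data.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8

def crow_flight_miles(lat1, lon1, lat2, lon2):
    """Great-circle ("crow's-flight") distance in miles via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def distance_to_nearest_private(public_latlon, private_latlons):
    """Miles from one public school to its nearest pre-existing private competitor."""
    lat, lon = public_latlon
    return min(crow_flight_miles(lat, lon, p_lat, p_lon)
               for p_lat, p_lon in private_latlons)
```

In the analysis, this distance would then be multiplied by −1 before entering the regression, as described above.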

Our second measure of competition quantifies the degree of competition faced by a school by counting the number of private competitors within radii of five and ten miles. Such measures have been previously used in studies of competition effects of the MPCP, the Florida Tax Credit Scholarship Program, and Florida's McKay Scholarship Program for Students with Disabilities (Carnoy et al. 2007; Greene and Marsh 2009; Winters and Greene 2011; Figlio and Hart 2014).

Our third measure quantifies competition by measuring the variety of schooling options available to students. Our diversity measure counts the number of different types of local private schools that are close to a given public school, with private school type defined by religious affiliation. Thus, a given public school might have a value of 6 on the density measure, but if all six schools were Roman Catholic, it would only score a 1 on the diversity measure. This method has been previously used in a study of the Florida Tax Credit Scholarship Program (Figlio and Hart 2014).
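The density and diversity counts can be sketched together as follows. The school records, coordinates, and helper function here are hypothetical; the actual measures were constructed in GIS from NCES survey addresses.

```python
from math import radians, sin, cos, asin, sqrt

def miles_between(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance in miles
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(a))

def density_and_diversity(public_latlon, private_schools, radius_miles):
    """Density: count of private schools within the radius.
    Diversity: number of distinct religious types among those schools."""
    lat, lon = public_latlon
    nearby = [p for p in private_schools
              if miles_between(lat, lon, p["lat"], p["lon"]) <= radius_miles]
    return len(nearby), len({p["type"] for p in nearby})

# Hypothetical example: three nearby schools of two religious types,
# plus one school well outside the five-mile radius
schools = [
    {"lat": 30.01, "lon": -91.00, "type": "Roman Catholic"},
    {"lat": 30.02, "lon": -91.01, "type": "Roman Catholic"},
    {"lat": 30.00, "lon": -91.02, "type": "Lutheran"},
    {"lat": 31.50, "lon": -92.50, "type": "Baptist"},  # too far away
]
density, diversity = density_and_diversity((30.0, -91.0), schools, radius_miles=5)
# density = 3, diversity = 2
```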

Our final competition measure uses a modified Herfindahl index to capture market concentration. As described by Figlio and Hart (2014), this index is generated by summing the squared market shares held by each private school religious type within a given public school radius. Suppose, for instance, there are five private schools that fall within a ten-mile radius of a given public school—four of these are Catholic schools and one is a Lutheran school. The market share for each school type $r$ is calculated as $\text{Count}_r / \sum_{r \in R} \text{Count}_r$. Catholic school market share, therefore, is .80 (i.e., 4 of 5) and Lutheran market share is .20 (i.e., 1 of 5). The Herfindahl index is the sum of the squares of the market shares held by each school type—in this case $(.80)^2 + (.20)^2 = .68$. Lower values of the Herfindahl index are indicative of increased competitive pressure, as a smaller share of the private school market is concentrated in the hands of just one particular religious type. To ease interpretation of results, we use 1 minus the Herfindahl index so that a positive coefficient on this variable would mean increased competition is associated with higher student outcomes and a negative coefficient would mean increased competition is associated with lower student outcomes.
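A minimal sketch of the concentration measure, reproducing the worked example above (the function name is ours, not the paper's):

```python
from collections import Counter

def modified_herfindahl(religious_types):
    """Sum of squared market shares held by each private school religious
    type within a given public school's radius."""
    counts = Counter(religious_types)
    total = sum(counts.values())
    return sum((n / total) ** 2 for n in counts.values())

# Worked example from the text: four Catholic schools and one Lutheran school
h = modified_herfindahl(["Catholic"] * 4 + ["Lutheran"])  # ~ .68
competition = 1 - h  # the paper's transformation: higher values = more competition
```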

In general, for those public schools not matched to any private school within a given radius, it is appropriate to assign a zero for all of the competition measures described above except the concentration measure, where a zero would instead imply a perfectly competitive market. As such, we drop public schools not matched to any private school from analyses using the modified Herfindahl index. In the results tables presented later in this paper, the sample size is therefore always smaller for those regressions measuring competition with the concentration index.

Before proceeding, it is important to note a number of caveats for the analyses presented below. First, in order to avoid reverse causation bias, all four geocoded variables are generated using data from before the program was announced. It is also important to note that some of these measures are based on private school counts that weight all schools equally, regardless of school size. Such measures were deliberately chosen because one might expect that public school administrators are more likely to be aware of the existence of neighboring private schools than to be knowledgeable about the relative size of different competitors, such as knowing the number of enrollment slots that would be made available to students using a voucher. Finally, a potential criticism of these geocoded competition measures is that they suffer from endogeneity bias because the locations where public schools demonstrate poor performance might be attractive to choice schools with a mission to enroll underserved students. This is more likely to be a problem in studies of competitive effects of charter schools, however, given evidence of the endogeneity of charter school location (Glomm, Harris, and Lo 2005). In Louisiana, the private schools in question—mostly Catholic schools—existed for many years prior to the creation of the voucher program. Indeed, many of these schools were established in response to Catholic doctrine, which dictates that Catholic children should be educated in a Catholic school (Herbermann 1912) and not in response to unsatisfactory public school performance. Thus, we are not as concerned with potential bias resulting from private school location as we might be in the charter school context.

Table 1 summarizes our four competition measures across both the five- and ten-mile radii. The average public school is 6.39 miles from a private competitor, with a standard deviation of just over eight miles. Within a five-mile radius, public schools in Louisiana typically have five private competitors. On average, approximately two religious denominational types are represented and the mean value for the Herfindahl index is .56. The mean values for this set of variables are predictably larger within a ten-mile radius—the average school has eleven private competitors and approximately three religious denominational types are represented. The modified Herfindahl index, meanwhile, has an average value of .50. Just one school has the maximum density value of 100 and a further thirteen schools have density values greater than or equal to 90; all of these schools are located in Jefferson Parish.

Table 1.
Descriptive Statistics of the Competition Measures Used in the School Fixed Effects Analysis
                     Mean    Standard Deviation    Minimum    Maximum
Distance             6.39    8.23                  .04        49.85
5-mile radius
   Density           5.07    9.10                             59
   Diversity         1.93    2.26
   Concentration     .56     .30                   .12
10-mile radius
   Density           11.46   19.73                            100
   Diversity         2.86    2.58
   Concentration     .50     .28

Notes: Distance is the number of miles to nearest private school competitor; Density is the number of local private schools falling within a given radius; Diversity is the number of religious denominational types represented; Concentration is calculated as a modified Herfindahl Index.

Source: Public school addresses from the National Center for Education Statistics, Common Core of Data, Public Elementary/Secondary School Universe Survey, 2010–11. Private school addresses from the Private School Universe Survey, 2011–12.
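The four geocoded competition measures can be illustrated with a short sketch. This is hypothetical Python with made-up coordinates and denominations: distances are planar stand-ins for geodesic miles, and because the paper does not spell out its modified Herfindahl formula, the sum-of-squared-denomination-shares version below is an assumption.

```python
import math

# Hypothetical coordinates in miles on a flat plane (real work would use
# geodesic distances between geocoded school addresses).
public_school = (0.0, 0.0)
private_schools = [
    # (x, y, denomination) -- illustrative data only
    (1.0, 2.0, "Catholic"),
    (3.0, 4.0, "Catholic"),
    (4.0, 0.0, "Baptist"),
    (8.0, 6.0, "Nonsectarian"),
]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def competition_measures(pub, priv, radius):
    """Distance, density, diversity, and a Herfindahl-style concentration
    measure within `radius` miles of a public school."""
    distance = min(dist(pub, (x, y)) for x, y, _ in priv)
    local = [d for x, y, d in priv if dist(pub, (x, y)) <= radius]
    density = len(local)                 # count of nearby private schools
    diversity = len(set(local))          # number of denominational types
    # Assumed form of the "modified Herfindahl": sum of squared shares of
    # each denomination among local competitors.
    shares = [local.count(d) / density for d in set(local)] if density else []
    concentration = sum(s * s for s in shares) if density else None
    return distance, density, diversity, concentration

print(competition_measures(public_school, private_schools, 5.0))
```

Schools are weighted equally regardless of enrollment, matching the paper's count-based measures.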

Model

We start by utilizing a school fixed effects model to estimate the effect of private school competition on public school performance, building upon the model estimated by Figlio and Hart (2014). The model takes the form:
Y_{ist} = α_s + β_1(C_s × P_t × CDF_s) + β_2(P_t × C_s) + β_3(P_t × CDF_s) + γX_{it} + μS_{st} + δT_t + ε_{ist},
(1)
where Yist is the standardized math or reading score for student i in school s in year t; αs is a school fixed effect; Cs is the measure of pre-policy competitive pressure facing school s; Pt is an indicator variable identifying the post-policy year; CDFs is an indicator variable identifying those schools that became voucher-eligible because they received a C, D, or F grade from the state in 2011; Xit is a vector of student demographic control variables including gender, race, special education status, an indicator for limited English proficiency (LEP), and eligibility for free or reduced-price lunch (FRPL) for student i in year t; Sst is a vector of time-varying school characteristics (shares of students of each race and gender, the share FRPL-eligible, and the shares classified as LEP or special education); and Tt is a set of dummy variables indicating year. The β1 coefficient on the three-way-interaction of competition measures, post-policy year indicator, and a school's C, D, or F grade is the parameter of interest. Standard errors are clustered at the school level.
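A stripped-down sketch of the estimation logic in equation 1, under stated assumptions: the data are simulated, only the three-way interaction enters as a regressor (the covariate vectors, two-way interactions, and year dummies are omitted), and the school fixed effect is absorbed by within-school demeaning. This is illustrative Python, not the authors' code.

```python
import random
from collections import defaultdict

random.seed(0)

# Simulated two-period panel: schools with fixed pre-policy competition C_s,
# a post-policy indicator P_t, and a voucher-eligibility flag CDF_s. The
# true competitive effect on scores is set to 0.05 per unit of competition.
TRUE_BETA1 = 0.05
data = []  # rows: (school, score, interaction C_s * P_t * CDF_s)
for s in range(200):
    c = random.uniform(0, 10)        # competition measure C_s
    cdf = 1 if s % 2 == 0 else 0     # C/D/F-graded (voucher-eligible) school
    alpha = random.gauss(0, 1)       # school fixed effect
    for p in (0, 1):                 # pre / post policy year
        y = alpha + TRUE_BETA1 * c * p * cdf + random.gauss(0, 0.1)
        data.append((s, y, c * p * cdf))

# Within-school demeaning absorbs the fixed effect alpha_s; a one-regressor
# OLS on the demeaned interaction then recovers beta_1.
groups = defaultdict(list)
for s, y, x in data:
    groups[s].append((y, x))

num = den = 0.0
for rows in groups.values():
    ybar = sum(y for y, _ in rows) / len(rows)
    xbar = sum(x for _, x in rows) / len(rows)
    for y, x in rows:
        num += (x - xbar) * (y - ybar)
        den += (x - xbar) ** 2
beta1_hat = num / den
print(round(beta1_hat, 3))
```

In practice a panel estimator with the full covariate set and school-clustered standard errors would replace this hand-rolled version; the demeaning step is the mechanism by which α_s drops out.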

We start by running this model excluding all charter schools and all schools in New Orleans. As a robustness check, we then repeat the analysis, keeping these schools in the analysis sample to see if their inclusion results in significant changes to the findings. The primary concern with including charter schools is that such schools are not experiencing a competitive “shock” in the same way that traditional public schools are, given that the theory of action behind charter school authorization in general is to stimulate innovation and improvement in student achievement by setting up autonomous but highly accountable schools that must attract students in order to stay open. The concern with including schools in New Orleans, meanwhile, is that the school system in that city experienced a dramatic overhaul in the wake of Hurricane Katrina that resulted in the creation of a unique, reform-driven educational environment built on accountability, choice, and competition (Jabbar 2015). Schools graded D and F in New Orleans are under intense threat of closure, which is likely to be correlated with the outcome variable. Interacting the four competition measures with an indicator for C-, D-, or F-graded schools is one step toward addressing this potential confound but does not entirely address the problem because there are more private schools in New Orleans than in other parts of the state. Thus, the distance, density, diversity, and concentration measures are correlated with schools being in New Orleans, a city that has experienced significant growth in test scores in recent years (Harris 2015). The final element of the three-way interaction goes a long way toward addressing this confounding factor, however. 
By comparing schools' performance before and after the LSP policy change, the model takes advantage of program rules to isolate changes in achievement that are directly related to the policy's implementation, similar to the approach used by Figlio and Hart (2014). To probe the validity of this approach, we run a placebo test that moves the "post-policy" year one year earlier in the data (i.e., switching the outcome year from 2012–13 to 2011–12), when we would not expect to find any significant effects. Results for these robustness checks are available in the online appendix.
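The placebo recoding amounts to redefining the post-policy indicator, which can be sketched as follows (illustrative Python; the year labels mirror those in the text):

```python
# Placebo recoding of the post-policy indicator: treat 2011-12 as the "post"
# year instead of 2012-13. A significant "effect" estimated under this
# coding would point to differential pre-trends rather than a response to
# the LSP.
def post_indicator(school_year, placebo=False):
    post_year = "2011-12" if placebo else "2012-13"
    return 1 if school_year == post_year else 0

print(post_indicator("2012-13"), post_indicator("2012-13", placebo=True))
```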

A second identification concern arises from Louisiana's application for a waiver from the Elementary and Secondary Education Act, which won approval from the U.S. Department of Education in May 2012. This waiver granted the state flexibility from some of No Child Left Behind's (NCLB) accountability sanctions. In particular, the waiver allowed Louisiana to give districts and schools increased flexibility in how to spend federal education funding of approximately $375 million per year. In return, the state of Louisiana agreed to institute a rigorous accountability system and adopt the Common Core State Standards and aligned assessments. Although there is little empirical evidence available yet on the productivity impact of NCLB waivers upon which to draw, it is unlikely to be a major confounding concern for this analysis because both sets of schools being compared in this study (A- and B-graded schools, compared with C-, D-, and F-graded schools) would have been subject to the same accountability pressure associated with Louisiana's NCLB waiver.

Describing the Analysis Sample

Table 2 presents descriptive statistics for the overall sample and separately for each group of schools graded C through F. At 49 percent female, the sample is approximately evenly split between the sexes. Eight percent of students have special educational needs, 2 percent are classified as LEP, and over two thirds (68 percent) qualify for FRPL, a commonly used proxy for poverty. In terms of race or ethnicity, the majority (51 percent) of students are white, 42 percent are black, 4 percent are Hispanic, and 4 percent are classified as "other race." Lagged student achievement is close to the state average, at just 2 percent of a standard deviation above the mean in both ELA and math.

Table 2.
Descriptive Characteristics of the Analysis Sample Used in the School Fixed Effects Analysis
                                         Mean    Standard Deviation    Minimum    Maximum
Full Sample     
Female 0.49 0.50 0.00 1.00 
Special education 0.08 0.27 0.00 1.00 
Limited English proficiency 0.02 0.14 0.00 1.00 
Free or reduced-price lunch eligibility 0.68 0.47 0.00 1.00 
White 0.51 0.50 0.00 1.00 
Black 0.42 0.49 0.00 1.00 
Hispanic 0.04 0.20 0.00 1.00 
Other race 0.04 0.19 0.00 1.00 
State-standardized ELA achievement, lagged 0.02 0.70 −3.35 2.79 
State-standardized math achievement, lagged 0.02 0.78 −3.58 2.51 
C-Schools     
Female 0.49 0.50 0.00 1.00 
Special education 0.08 0.27 0.00 1.00 
Limited English proficiency 0.02 0.14 0.00 1.00 
Free or reduced-price lunch eligibility 0.67 0.47 0.00 1.00 
White 0.59 0.49 
Black 0.33 0.47 0.00 1.00 
Hispanic 0.04 0.20 0.00 1.00 
Other race 0.04 0.19 0.00 1.00 
State-standardized ELA achievement, lagged 0.04 0.65 −3.33 2.79 
State-standardized math achievement, lagged 0.03 0.72 −3.58 2.51 
D-Schools     
Female 0.49 0.50 0.00 1.00 
Special education 0.08 0.28 0.00 1.00 
Limited English proficiency 0.03 0.16 0.00 1.00 
Free or reduced-price lunch eligibility 0.86 0.34 0.00 1.00 
White 0.26 0.44 0.00 1.00 
Black 0.67 0.47 0.00 1.00 
Hispanic 0.05 0.21 0.00 1.00 
Other race 0.03 0.18 0.00 1.00 
State-standardized ELA achievement, lagged −0.23 0.68 −3.35 2.74 
State-standardized math achievement, lagged −0.25 0.72 −3.58 2.51 
F-Schools     
Female 0.46 0.50 0.00 1.00 
Special education 0.09 0.28 0.00 1.00 
Limited English proficiency 0.01 0.09 0.00 1.00 
Free or reduced-price lunch eligibility 0.95 0.22 0.00 1.00 
White 0.06 0.23 0.00 1.00 
Black 0.92 0.27 0.00 1.00 
Hispanic 0.01 0.11 0.00 1.00 
Other race 0.01 0.11 0.00 1.00 
State-standardized ELA achievement, lagged −0.57 0.76 −3.35 2.69 
State-standardized math achievement, lagged −0.60 0.74 −3.58 2.51 

Notes: n = 781,733; ELA = English Language Arts.

6.  Results

The estimates reported in table 3 represent the estimated β1 coefficient on the three-way interaction between competition measure, post-policy year, and C, D, or F school grade. The top panel displays the main results for all eligible public schools graded C through F in Louisiana. Within a ten-mile radius, competition is shown to have a statistically significant positive impact in math for two of the four measures used—density and diversity. As the radius narrows to five miles, we continue to observe statistically significant positive impacts in math with the density and diversity measures. There are no statistically significant impacts on ELA performance.

Table 3.
The Impact of Louisiana Scholarship Program Competition on Traditional Public School Achievement, Fixed Effects Results
                                       10-Mile Radius                                              5-Mile Radius
Distance (r) (1)   Density (2)   Diversity (3)   Concentration (r) (4)   Density (5)   Diversity (6)   Concentration (r) (7)
Main Results C through F Schools 
ELA −.10 .02 .12 −.08 .00 .07 −.21 
 (.07) (.03) (.22) (2.51) (.06) (.31) (1.92) 
Observations 781,703 781,703 781,703 639,533 781,703 781,703 532,363 
Unique schools 939 939 939 721 939 939 587 
Adj. R-squared .27 .27 .27 .28 .27 .27 .29 
Math .05 .11*** .92*** 3.05 .23*** 1.18*** 3.96 
 (.10) (.04) (.30) (3.08) (.08) (.41) (3.45) 
Observations 781,733 781,733 781,733 639,562 781,733 781,733 532,386 
Unique schools 939 939 939 721 939 939 587 
Adj. R2 .25 .25 .25 .26 .25 .25 .28 
C Schools Only 
ELA −.15** .00 −.14 −1.96 −.08 −.39 −1.48 
 (.07) (.04) (.23) (2.71) (.07) (.35) (2.02) 
Observations 499,695 499,695 499,695 398,588 499,695 499,695 316,119 
Unique schools 560 560 560 411 560 560 312 
Adj. R2 .23 .23 .23 .24 .23 .23 .25 
Math .04 .08 .62* −1.45 .13 .58 2.46 
 (.10) (.05) (.33) (3.45) (.10) (.48) (4.00) 
Observations 499,719 499,719 499,719 398,609 499,719 499,719 316,136 
Unique schools 560 560 560 411 560 560 312 
Adj. R2 .21 .21 .21 .22 .21 .21 .24 
D Schools Only 
ELA −.05 .03 .37 1.79 .03 .41 −.53 
 (.10) (.03) (.27) (2.87) (.07) (.38) (2.62) 
Observations 512,648 512,648 512,648 439,607 512,648 512,648 372,420 
Unique schools 607 607 607 490 607 607 406 
Adj. R-squared .30 .30 .30 .30 .30 .30 .31 
Math .01 .12*** 1.11*** 6.67* .26*** 1.46*** 2.51 
 (.13) (.04) (.35) (3.47) (.09) (.48) (3.76) 
Observations 512,664 512,664 512,664 439,624 512,664 512,664 372,433 
Unique schools 607 607 607 490 607 607 406 
Adj. R2 .28 .28 .28 .29 .28 .28 .30 
F Schools Only 
ELA .04 −.15 −.79 −7.92 −.20 −1.27 6.57 
 (.36) (.10) (.81) (9.92) (.22) (1.11) (5.84) 
Observations 263,146 263,146 263,146 230,092 263,146 263,146 187,526 
Unique schools 298 298 298 248 298 298 197 
Adj. R2 .29 .29 .29 .30 .29 .29 .33 
Math 2.19*** .08 .80 −4.57 .26 1.04 29.81*** 
 (.74) (.13) (1.02) (8.80) (.27) (1.25) (9.67) 
Observations 263,160 263,160 263,160 230,105 263,160 263,160 187,537 
Unique schools 298 298 298 248 298 298 197 
Adj. R-squared .27 .27 .27 .28 .27 .27 .30 
Placebo Test 
ELA .07 −.04 −.18 .63 −.03 −.23 .19 
 (.07) (.03) (.23) (2.44) (.05) (.32) (2.22) 
Observations 781,703 781,703 781,703 639,533 781,703 781,703 532,363 
Unique schools 939 939 939 721 939 939 587 
Adj. R2 .27 .27 .27 .28 .27 .27 .27 
Math −.01 −.05 −.49 −3.11 −.10 −.65* −1.62 
 (.08) (.03) (.30) (3.17) (.07) (.39) (3.33) 
Observations 781,733 781,733 781,733 639,562 781,733 781,733 532,386 
Unique schools 939 939 939 721 939 939 587 
Adj. R2 .25 .25 .25 .26 .25 .25 .27 

Notes: The dependent variable is the standardized math or English language arts (ELA) score. Standard errors clustered by school in parentheses. Variables followed by (r) are reverse-coded to ease interpretation; each cell represents the coefficient estimate on the interaction between the competition measure, being a C, D, or F school in October 2011, and a post-policy indicator. Coefficients are multiplied by 100 for interpretability. Controls at the student level include indicators for gender, race, subsidized lunch eligibility, limited English proficiency (LEP), and special education. Controls at the school level include percent male, percent of each race, percent of student body eligible for subsidized lunch, percent LEP, and percent special education. Models also include school and year fixed effects; New Orleans’ schools and charter schools are excluded from this sample. Adj. = adjusted.

*p < 0.10; **p < 0.05; ***p < 0.01.

Looking first at the density measure, each additional private school located within a ten-mile radius of a given public school is associated with a .0011 standard deviation (SD) increase in math performance.2 Adding nine nearby private schools (roughly a 1 SD increase) increases test scores by .0100 SD. As we might expect, the competitive effect is stronger as the radius narrows. Within a five-mile radius, each additional private school is associated with a .0023 SD increase in math performance. In terms of diversity, the addition of one private religious school within a ten-mile radius is associated with a .0092 SD increase in math performance. Adding two additional types of nearby private schools (approximately a 1 SD increase) is associated with an increase of .0208 of a standard deviation. Again, that effect size grows larger when we narrow the radius within which we measure competition. Within a five-mile radius, the addition of one private religious school is associated with a .0118 SD increase in math performance.
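The conversion from the reported coefficients to effect sizes can be checked mechanically. Per the table 3 notes, coefficients are multiplied by 100, so dividing by 100 gives the per-unit effect in student-level standard deviations; multiplying that by the competition measure's own standard deviation from table 1 gives the effect of a 1 SD increase in competition. A minimal sketch:

```python
def per_unit_effect(table3_coef):
    # Table 3 coefficients are multiplied by 100 for readability, so the
    # per-unit effect in student-level SD is the coefficient divided by 100.
    return table3_coef / 100

def per_sd_effect(table3_coef, measure_sd):
    # Effect of a one-standard-deviation increase in the competition
    # measure, using that measure's SD from table 1.
    return per_unit_effect(table3_coef) * measure_sd

# Density within ten miles: coefficient .11 -> .0011 SD per private school.
print(per_unit_effect(0.11))
# F-school concentration within five miles: coefficient 29.81 with a measure
# SD of .30 (table 1) -> roughly a .0894 SD increase in math achievement.
print(per_sd_effect(29.81, 0.30))
```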

In sum, our results indicate modest positive competitive effects on math scores, at best, resulting from the 2012–13 expansion of the LSP statewide, with a one-unit increase in competition associated with a null to .0118 SD increase in math performance. Although modest in magnitude, the estimated effects are consistent with the prior literature on the competitive effects of private school choice programs.3

We next examine the extent to which effects varied with public school quality, as measured by the state's school accountability letter grading system. The next three panels in table 3 each compare only a segment of all voucher-eligible public schools. The second panel of table 3 compares A- and B-graded public schools with just C-graded schools, which produced around 18 percent of the total voucher winners (not including students coming from charter schools or schools within New Orleans, which are excluded from this analysis). Given the small number of voucher users previously attending C-graded public schools, we do not expect to find strong evidence of a competitive response by this group of schools.4 Indeed, we find mostly null impacts on math scores, with the exception of a marginally significant .0062 SD increase associated with the diversity measure in a ten-mile radius. Effects for ELA are similarly null for three of the four competition measures, regardless of the radius examined. We do find that student ELA achievement is negatively associated with competition using the distance measure, with an effect size of −.0015 SD, but it is unclear how to interpret findings for this small subgroup.

The third panel in table 3 compares D-graded public schools with A and B schools. With approximately three quarters of voucher-winners in our sample previously attending “D”-graded public schools, we expect to find a competitive response to the LSP, if there were one. Indeed, the math results observed for this group of schools are statistically significant and positive for three out of four measures—density, diversity, and concentration. The ELA results, meanwhile, are consistently not statistically significant. In sum, competitive effects for D schools—those schools which experienced the largest departure of students via the LSP—range from null to 7 percent of a standard deviation, with effects exclusively concentrated in math.

Finally, effects for F schools indicate statistically significant, positive effects on students’ math achievement for two of the four competition measures examined, and exclusively null effects for ELA achievement. Moreover, we observe the largest statistically significant competitive effects for math in this group of schools. For every mile the nearest private school moves closer, public school student math score performance in the period after the enactment of the LSP policy increases by .0219 SD. The magnitude of this positive and statistically significant effect size is consistent in both models—the one relying on a ten-mile radius and the one relying on a five-mile radius. The largest result, by far, is associated with the concentration measure of competition. A one-unit increase in the Herfindahl index in a five-mile radius is associated with a .2981 SD increase in student math achievement. Translating this finding to the effect size associated with a 1 SD increase in the concentration measure reveals a .0894 SD rise in scores. These effects are noticeably larger than those observed for C- and D-graded schools, which is somewhat surprising given that only 8 percent of voucher winners came from F-graded public schools (representing approximately five applicants and two winners per school). Overall, the results for D- and F-graded schools in Louisiana suggest that those schools in the sample experiencing the greatest loss of students to the LSP responded with a modest yet positive increase in students’ math outcomes.

The final panel of table 3 displays the results of a placebo test we conduct that changes the “post-policy year” indicator from 2013 to 2011, before the LSP expansion was actually enacted. We expect to find null results associated with the competition measures in this model as it examines a time period prior to the LSP's statewide expansion and, with one exception, that appears to be the case. The only statistically significant coefficient reported from the sixteen regressions represented in the final panel of table 3 is a negative coefficient of −.0065 SD associated with the diversity measure in a five-mile radius, which is marginally significant at p < 0.10. Overall, given that none of the coefficients in the final panel is statistically significant at conventional levels (p < 0.05), this builds confidence in the validity of the empirical model used.

Robustness Checks

There are three important follow-up analyses to consider. First, we search for evidence to support the hypothesis that traditional public schools might be using the voucher program as a release valve to counsel out voucher-eligible, low-performing students. If this were the case, the test score improvements we observe might be driven by compositional changes in the student body, rather than real school improvements brought about by the competitive threat. Among schools in the analysis sample that lost at least one student to the LSP, the median number of voucher winners was two, the mean was 2.77, and the standard deviation was 2.11. Thus, the number of departing students per school is quite small. Even at the extreme, the maximum number of voucher winners in a single school in the analysis sample was ten, which we suspect is too small a number to greatly influence overall school achievement scores.

We also search for evidence to support the reverse “cream-skimming hypothesis,” which proposes that the most disadvantaged students would actually end up departing or being counselled out of the traditional public school once a feasible and affordable alternative schooling option becomes available. An examination of the characteristics of voucher winners demonstrates some differences in the demographic characteristics and prior test scores of voucher winners, which we detail in the online appendix. Nevertheless, the small number of student transfers by school gives us some assurance that the positive effects we observe are not entirely driven by changes in the types of students left behind in public schools.

For the second robustness check, we add charter schools to the analysis sample. Even though charter schools already operate in a competitive marketplace, it is possible that the voucher program's expansion served as a mild competitive shock by introducing a new set of private school competitors that would previously have been financially unattainable for the majority of students they serve. Our analyses (available in the online appendix) appear to confirm this intuition. The coefficients associated with the density and diversity competition measures are statistically significant, yet smaller in magnitude than those reported in previous models. As before, that effect size is larger when we narrow the radius within which we measure competition.

For the third robustness check, we add both charter schools and schools in New Orleans to the analysis sample. This check requires two changes to the analysis sample because the majority (84 percent) of public schools in New Orleans are charter schools, not traditional public schools (Arce-Trigatti et al. 2015). Thus, we repeat the statewide analysis described in equation 1, this time including all charter schools and all public schools (traditional and charter) in New Orleans for which we have data. We find a small, positive, statistically significant effect on students’ ELA achievement using the density measure in a ten-mile radius and null effects otherwise. When we look at math performance, the coefficients associated with the impact of competition density and diversity remain statistically significant and similar in magnitude to those reported in the primary analysis, which excluded New Orleans and charter schools. A more detailed presentation is available in the online appendix.

Secondary Empirical Approach: Regression Discontinuity Design

Given that the LSP's design limited participation to students in public schools receiving a C, D, or F grade at baseline, it is possible to run a secondary analysis to confirm the findings reported above. Specifically, we use an alternative identification strategy, a regression discontinuity (RD) design, that trades reduced overall statistical power for stronger internal validity at the C/B-grade margin, to see whether the main results can be replicated. If the estimates obtained from the RD analysis are largely consistent with the estimates for C-graded schools in the primary analysis, we can be more confident in the validity of the primary results. Although useful for validating our primary findings, this secondary approach cannot replace our preferred identification strategy because it only estimates marginal effects concentrated at the C/B cutoff, where competitive effects may be harder to detect and where the smaller sample size limits statistical power.

The LSP is an ideal setting for an RD analysis because school exposure to competition from the LSP depends upon ratings from Louisiana's letter grade system for public schools, part of the school and district accountability system. Letter grades are determined by a continuous measure known as the school performance score, an index of proficiency status in ELA, math, science, and social studies, together with expected normative student longitudinal growth. Intervals along the school performance score continuum map to a given letter grade. Low-income students wishing to participate in the LSP were required to have attended a public school that received a letter grade of C, D, or F in the prior school year. It is reasonable to expect that schools scoring at the lowest threshold for receiving a B grade do not differ in substantial ways from those scoring at the highest threshold for receiving a C grade, allowing for direct comparisons between the two groups.

One of the primary differences between these two groups of schools is that only those schools that received a C grade or lower in October 2011 were directly exposed to vouchers for their low-income students. A subset of "high-C" schools, therefore, constitutes the treatment group for the RD competitive effects analysis of the LSP. It is important to note that, in contrast to the primary analysis, high-C schools in the RD are assumed to experience the threat of competition even if there are no private schools nearby. The RD model rests on the argument that schools receiving low B grades had a school performance score close to that of the C schools but were not directly treated by the program because they fell just above the cutoff point. A subset of "low-B" schools, therefore, constitutes the control group. Another important consideration is that the RD relies on the assumption that any observed differences between these two groups are attributable to the threat of competition; it is also possible that stigma plays a role, which would positively bias the RD results. For example, Figlio and Rouse (2006) show that Florida schools' responses to the Opportunity Scholarship program were attributable more to the stigma associated with the receipt of an F grade than to the voucher threat. Although a C grade does not carry the same stigma as an F grade, we acknowledge this possibility.

RD Sample Selection

To generate the RD analysis sample, we start with the universe of students in the state's testing file in 2010–11 and merge this information with the state's school performance score file for 2010–11, generating an initial sample of 307,772 students. Next, we apply a series of filters to pare down the initial sample to our final analytical sample. The first filter keeps those students who took the state test (the LEAP or iLEAP), as opposed to an alternative assessment such as those used by students with special educational needs, reducing the sample to 298,868. The second filter excludes those students who won the LSP voucher lottery, reducing the sample to 297,766. This ensures that the sample captures those public school students who remained in the public school system. The third and fourth filters exclude students attending charter schools, which already experience competition for enrollment and thus are not relevant for this study, and students attending New Orleans public schools, where a pilot version of the LSP was already operating.5 This leaves 276,616 students, of whom 64,952 attended a B-graded school and 88,923 attended a C-graded school in October 2011. The final analysis sample is drawn from these 153,875 students, depending on the bandwidth selected for the RD analysis, which is explained in greater detail in the next section.
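The filter sequence can be sketched as a chain of predicates over student records. This is illustrative Python with hypothetical field names, not the state's actual data layout:

```python
# Sequence of filters used to build the RD sample, applied to hypothetical
# student records (all field names here are illustrative assumptions).
def build_rd_sample(students):
    kept = [s for s in students if s["took_state_test"]]      # LEAP/iLEAP only
    kept = [s for s in kept if not s["won_voucher_lottery"]]  # stayers only
    kept = [s for s in kept if not s["charter_school"]]       # no charters
    kept = [s for s in kept if not s["new_orleans"]]          # no NOLA pilot
    # B- and C-graded schools form the pool from which the bandwidth-specific
    # analysis sample is later drawn.
    return [s for s in kept if s["school_grade"] in ("B", "C")]

students = [
    {"took_state_test": True, "won_voucher_lottery": False,
     "charter_school": False, "new_orleans": False, "school_grade": "C"},
    {"took_state_test": True, "won_voucher_lottery": True,
     "charter_school": False, "new_orleans": False, "school_grade": "C"},
    {"took_state_test": True, "won_voucher_lottery": False,
     "charter_school": False, "new_orleans": False, "school_grade": "A"},
]
print(len(build_rd_sample(students)))
```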

RD Research Design

To estimate the competitive impact of the LSP, a reduced-form regression is used,
A_{ijt} = τ + θD_{ijt} + ρP_{jt} + ω_1 X_{ijt} + ω_2 S_{jt} + ζ_{jt},
(2)
where A_{ijt} is the average achievement of student i in school j in year t; D_{ijt} is an indicator for attending a school that experienced the threat of competition, that is, a binary variable equal to one if the school performance score (SPS) of the school attended is 105 or lower and zero otherwise;6 P_{jt} is the SPS used to assign school grades; X_{ijt} is a vector of student-level covariates including lagged achievement, gender, race, LEP status, FRPL eligibility, special education status, and grade; and S_{jt} is a vector of school-level covariates including the school's percentages of female, black, Hispanic, special education, and LEP students, and the percentage qualifying for FRPL. Finally, ζ_{jt} is an idiosyncratic error term that accounts for clustering of students within schools. A quartic polynomial in P_{jt} is included to control for the functional form of the SPS. The estimated impact of competition from private schools through the LSP, θ, can be interpreted as causal under the assumption that, conditional on the school performance score, the assignment of grades is uncorrelated with the error term ζ_{jt}.
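A minimal sketch of the reduced-form estimation in equation 2, under stated assumptions: simulated data, a linear rather than quartic control in the running variable, and no clustering adjustment. This is illustrative Python, not the authors' code.

```python
import random

random.seed(1)

CUTOFF = 105  # SPS at the B/C boundary

# Simulated students: achievement varies smoothly with the school
# performance score (SPS), plus a discontinuous treatment effect theta
# for schools at or below the cutoff.
THETA = 0.08
rows = []
for _ in range(5000):
    sps = random.uniform(95, 115)
    d = 1 if sps <= CUTOFF else 0   # exposed to voucher competition
    a = 0.02 * (sps - CUTOFF) + THETA * d + random.gauss(0, 0.3)
    rows.append((a, d, sps - CUTOFF))

def solve3(m, v):
    # Gauss-Jordan elimination for a 3x3 linear system.
    for i in range(3):
        piv = m[i][i]
        m[i] = [x / piv for x in m[i]]
        v[i] /= piv
        for j in range(3):
            if j != i:
                f = m[j][i]
                m[j] = [a - f * b for a, b in zip(m[j], m[i])]
                v[j] -= f * v[i]
    return v

# Least squares for A = tau + theta*D + rho*(SPS - cutoff) via the normal
# equations. (The paper uses a quartic in the SPS and school-clustered
# standard errors; the linear control keeps this sketch short.)
X = [(1.0, float(d), p) for _, d, p in rows]
y = [a for a, _, _ in rows]
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
tau, theta_hat, rho = solve3(XtX, Xty)
print(round(theta_hat, 2))
```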

The strength of the RD is that it does not incorporate all eligible public schools—only a narrow set of schools above and below the 105-point SPS cutoff that distinguishes C schools from B schools. The more similar the SPS of the B and C schools on either side of this cutoff, the more similar one might expect these schools to be in both observable and unobservable ways, strengthening the internal validity of the analysis. In selecting the width of the "window" of observations to be used for the RD, we start with the smallest feasible bandwidth, which is one point above and one point below the B/C cutoff. We also experiment with wider bandwidths of five and ten points above and below the cutoff, which allow for the creation of larger analytical samples.
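Selecting the window amounts to keeping only schools whose SPS falls within h points of the cutoff, for h of 1, 5, or 10. A sketch, with hypothetical SPS values:

```python
import numpy as np

CUTOFF = 105.0  # SPS boundary between B and C grades

def in_window(sps: np.ndarray, h: float) -> np.ndarray:
    """Boolean mask selecting schools within h SPS points of the cutoff."""
    return np.abs(sps - CUTOFF) <= h

# Hypothetical school performance scores
sps = np.array([96.0, 100.5, 104.2, 105.0, 106.0, 110.0, 114.9])
# Count of schools retained at each of the three bandwidths
counts = [int(in_window(sps, h).sum()) for h in (1.0, 5.0, 10.0)]
```

Wider windows admit more schools (and students), increasing statistical power at the cost of comparing less-similar schools.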

Results of RD Analysis

Table 4 presents the results of student-level regressions in the form of equation 2. Standardized scores in ELA and math are regressed on the school performance score, an indicator for experiencing the threat of competition from the LSP (i.e., attending a high-C school), and student- and school-level demographic control variables. Standard errors are clustered at the school level. The first two columns display the results of the first placebo test in which we examine test scores from 2010–11, before the introduction of the LSP. We expect to find no significant differences in student outcomes, which is confirmed by the data. When we examine test scores from 2012–13—when the competitive threat was present—there is still no difference in test scores between high-C and low-B schools, conditional on the school performance score. As a robustness check and to maximize the power of the RD, we also increase the size of the bandwidth to five and ten points above and below the cutoff to see if the inclusion of more observations alters the results. We observe no noticeable changes.

Table 4.
The Impact of Louisiana Scholarship Program Competition on Traditional Public School Achievement, Regression Discontinuity Results
                              2010–11 (Baseline)   2012–13
                              ELA       Math       ELA                             Math
Bandwidth                     1 point   1 point    1 point   5 points  10 points   1 point   5 points  10 points
                              (1)       (2)        (3)       (4)       (5)         (6)       (7)       (8)
High-C Schools                −.05      −.02       .03       −.01      −.01        −.00      −.02      −.01
                              (.03)     (.02)      (.03)     (.02)     (.02)       (.04)     (.04)     (.03)
Schools                       48        48         44        201       382         44        201       382
Observations                  10,438    10,435     8,612     39,205    70,863      8,610     39,202    70,859
Placebo Test: A v. B Schools
High-B Schools                                     −.08      −.05      −.03        −.08      −.06      −.03
                                                   (.16)     (.05)     (.03)       (.21)     (.06)     (.05)
Schools                                            17        64        147         17        64        147
Observations                                       3,511     12,441    25,410      3,513     12,443    25,413
Placebo Test: C v. D Schools
High-D Schools                                     .04       −.02      −.02        −.09      −.02      −.02
                                                   (.06)     (.03)     (.02)       (.06)     (.04)     (.03)
Schools                                            47        185       381         47        185       381
Observations                                       8,269     29,667    63,803      8,268     29,669    63,802
Placebo Test: D v. F Schools
High-F Schools                                     −.02      .10       .03         .18       .07       −.06
                                                   (.09)     (.07)     (.05)       (.14)     (.11)     (.08)
Schools                                            16        53        134         16        53        134
Observations                                       2,819     6,739     20,234      2,820     6,744     20,244

Notes: Estimates presented in standard deviation units. Robust standard errors clustered at the school level in parentheses. All models include controls for lagged achievement, gender, race, limited English proficiency, free or reduced-price lunch eligibility, special education status, student grade, and school average measures of each of the demographic variables; charter schools and New Orleans schools are excluded from this sample. ELA = English language arts.

Table 4 also displays the results of additional placebo tests, conducted by running the RD with different school letter grade combinations. Instead of comparing high-C and low-B schools, we compare all other grade band combinations (i.e., A vs. B schools, C vs. D schools, and D vs. F schools). None of the estimates in any of these models is statistically significant, increasing confidence in the validity of this approach. A more detailed analysis, along with additional robustness checks, is available in the online appendix.
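The placebo logic reuses the same jump estimator at grade boundaries where the competitive threat does not change, expecting estimates near zero. A sketch on simulated data (the placebo cutoff value of 125 is hypothetical; the actual SPS thresholds for the other grade bands are not given in this excerpt, and the polynomial is simplified to a local linear control):

```python
import numpy as np

def jump_estimate(sps, y, cutoff, h=10.0):
    """OLS estimate of the discontinuity in y at `cutoff`, using
    observations within h points and a linear control in the running variable."""
    m = np.abs(sps - cutoff) <= h
    s, yy = sps[m] - cutoff, y[m]
    d = (s <= 0).astype(float)                   # below-cutoff indicator
    X = np.column_stack([np.ones_like(s), d, s])
    beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
    return beta[1]

rng = np.random.default_rng(1)
sps = rng.uniform(80.0, 140.0, size=60_000)
# True discontinuity (0.10 SD, assumed for illustration) only at the B/C cutoff of 105
y = 0.10 * (sps <= 105.0) + 0.01 * (sps - 105.0) + rng.normal(0.0, 0.3, size=sps.size)

real = jump_estimate(sps, y, 105.0)      # recovers a jump near 0.10
placebo = jump_estimate(sps, y, 125.0)   # hypothetical placebo cutoff: near zero
```

A battery of null placebo estimates, as in table 4, supports the claim that the design is not mechanically generating spurious jumps.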

7.  Conclusion

The results presented in this paper show that public school performance in Louisiana was either unaffected or modestly improved as a result of competition. Two empirical specifications are used—a school fixed effects model utilizing information about private school competition from geocoded measures of distance, density, diversity, and concentration of competition, and a regression discontinuity design that tests whether students in high-C schools who are exposed to competition from the LSP realize greater performance gains than their peers in low-B public schools (which are similar in many respects but are unaffected by competition from the program). The overall results from the school fixed effects analysis reveal null impacts in ELA and positive impacts in two out of four specifications in math. The RD estimates of a subsample of C- and B-graded schools indicate null effects across both math and ELA. In general, although some models reported null effects across the board and the ELA findings were almost always insignificant, positive impacts were observed in math. Moreover, the lowest-graded public schools experienced the largest response to the injection of competition.

The policy implications of this research concern private school participation in publicly funded choice programs. That is, to encourage positive competitive effects, policy makers may need to concern themselves with questions of how to promote and maintain a high-quality, supply-side response in the context of statewide private school choice programs, such as the LSP (Egalite 2014; Kisida, Wolf, and Rhinesmith 2015). For instance, to attract high-quality private schools to participate in such a program, policy makers need to consider where to set the voucher value, whether or not to allow private schools to impose admissions restrictions on applications, and how to balance mandates for testing and reporting requirements with the desire for private schools to preserve autonomy.

There are a number of limitations that the reader should consider with regard to this quantitative analysis. First, this study examines the extent to which variation in empirical measures of private-school competition is associated with student ELA and math performance on Louisiana state standardized tests. Although performance on these assessments is an important proxy for student academics, standardized test scores are not the only outcomes that families, practitioners, and policy makers care about. Indeed, public schools may respond to the competitive pressure of the LSP in other ways that are not captured by these test score gains and thus would not show up in this type of analysis. For example, it could be the case that public schools focused on science, art, music, sports, or social studies as a mechanism to retain students who would be eligible for the voucher program. Alternatively, they could have responded to family preferences by offering more diverse electives, for example, or by conducting renovations on school facilities or taking active steps to better market their school (Holley, Egalite, and Lueken 2013). Our study's focus on standardized test outcomes could easily be missing competitive effects that manifest on these other, highly relevant variables.

Moreover, our analysis is limited to short-run impacts, designed to identify competitive effects arising in response to the LSP's first year of statewide implementation. It may take time, however, for public schools’ responses to increased competition to translate into observed changes in student test scores. It is also possible that the effects of competition might be felt in long-term outcomes (such as improvements in student graduation rates or college enrollment) if public schools respond to the competitive threat by sharpening their focus on attainment goals for students and cultivating an environment that prepares students for long-term success. While we focus on the 2012–13 school year because of the dramatic expansion in potential private school pressure, this focus on short-run outcomes may fail to adequately capture competitive responses.

A third limitation of our study is that it cannot identify specifically how public school administrators responded to the increased competitive threat caused by the LSP's statewide expansion, which may be relevant if those actions need a longer time horizon to bring about change. As such, the results presented here may underrepresent the true effect of competition if district- and school-level administrators were slow to respond to the program. It is also plausible that public school district responses were muted in the first year of the LSP's statewide expansion by pending litigation against the program, which called into question the program's longevity. Moreover, although student departures to private schools are arguably most salient at the school level, it is unclear how much power a traditional public school principal really has to respond to competition (Sullivan, Campbell, and Kisida 2008). In many schools, budget setting, policy development, and hiring decisions are made at the district level, leaving the principal with few assets to utilize in ways that might measurably impact student performance in a short time frame. While district- and school-level administrators may respond more actively over time to the competitive pressures created by the LSP, our current analysis cannot speak to such actions.

Finally, this voucher program is means-tested, meaning it is designed to target low-income students, not the universe of public school students. This feature of program design may significantly shape public schools' perception of the program. It is even possible that public schools could be supportive of a program that attracts their poorest and possibly hardest-to-educate students. Instead of viewing this targeted voucher program as a threat to which they must respond, public schools may actually view it as a release valve and welcome the program as a positive outlet to which they can direct struggling students. A descriptive analysis of departing students offers suggestive evidence that they are more disadvantaged and lower-performing than their peers at the sending public school. As the program expands and the number of students departing traditional public schools grows, this will be an important area for future research.

The findings from this competitive effects analysis indicate that public school performance in ELA and math was either unaffected or modestly improved in response to competition from the LSP. The primary contribution of this study is that it analyzes competitive effects in a new context, using multiple identification strategies. The results presented here are consistent with the hypothesis that voucher programs such as the LSP increase student performance by "lifting all boats" (Friedman 1962; Hoxby 2003). The estimated impacts in the public schools exposed to the competitive threat of the LSP range from negligible to modestly positive. As large-scale school voucher programs continue to expand across the country, policy makers who are hopeful about the potential for market-based reforms to improve student outcomes across all sectors should be heartened by these findings.

Notes

1. Schools screened out by this filter are high schools, alternative schools, and correctional institutes.

2. Coefficients in the table are multiplied by 100 to assist with the interpretation of very small effects.

3. For example, Figlio and Hart's (2014) analysis of the Florida Tax Credit Scholarship Program reported increases of .0008 SD for each additional type of nearby private school and .0015 SD for every mile the nearest private school moves closer.

4. This distribution is likely explained by the LSP matching algorithm, which prioritized students attending lower-performing public schools.

5. The sample size reduction associated with excluding New Orleans schools is small because the majority of New Orleans schools were already excluded by the charter school screen.

6. The reader should note that none of the spatial measures of competition used thus far is reflected in this indicator.

Acknowledgments

For helpful comments, feedback, and assistance, we thank Kate Dougherty, Patrick Wolf, Jay Greene, Robert Costrell, Doug Harris, as well as seminar participants at the annual meetings of the Association for Public Policy Analysis and Management, the Association for Education Finance and Policy, and the American Educational Research Association. We also thank the Louisiana Department of Education for their cooperation and assistance with providing the necessary data to conduct these analyses.

REFERENCES

Altonji, Joseph G., Ching-I Huang, and Christopher R. Taber. 2015. Estimating the cream skimming effect of school choice. Journal of Political Economy 123(2): 266–324.

Anzia, Sarah F., and Terry M. Moe. 2013. Collective bargaining, transfer rights, and disadvantaged schools. Educational Evaluation and Policy Analysis 36(1): 83–111.

Arce-Trigatti, Paula, Douglas N. Harris, Huriya Jabbar, and Jane Arnold Lincove. 2015. Many options in the New Orleans choice system. Education Next 15(4): 25–33.

Bagley, Carl. 2006. School choice and competition: A public-market in education revisited. Oxford Review of Education 32(3): 347–362.

Brunner, Eric J., Jennifer Imazeki, and Stephen L. Ross. 2010. Universal vouchers and racial and ethnic segregation. The Review of Economics and Statistics 92(4): 912–927.

Carnoy, Martin, Frank Adamson, Amita Chudgar, Thomas F. Luschei, and John F. Witte. 2007. Vouchers and public school performance: A case study of the Milwaukee Parental Choice Program. Washington, DC: Economic Policy Institute.

Carr, Matthew. 2011. The impact of Ohio's EdChoice on traditional public school performance. Cato Journal 31(2): 257–284.

Chakrabarti, Rajashri. 2008. Can increasing private school participation and monetary loss in a voucher program affect public school performance? Evidence from Milwaukee. Journal of Public Economics 92(5–6): 1371–1393.

Chakrabarti, Rajashri. 2013. Impact of voucher design on public school performance: Evidence from Florida and Milwaukee voucher programs. The B.E. Journal of Economic Analysis & Policy 13(1): 349–394.

Chubb, John E., and Terry M. Moe. 1990. Politics, markets, & America's schools. Washington, DC: Brookings Institution Press.

Editorial Board. 2011. The year of school choice. Wall Street Journal, 5 July.

Egalite, Anna J. 2014. Choice program design and school supply. In New and better schools: The supply side of school choice, edited by Michael Q. McShane, pp. 163–184. Lanham, MD: Rowman & Littlefield Publishing Group, Inc.

Egalite, Anna J., Laura I. Jensen, Thomas Stewart, and Patrick J. Wolf. 2014. Finding the right fit: Recruiting and retaining teachers in Milwaukee choice schools. Journal of School Choice 8(1): 113–140.

Epple, Dennis, and Richard E. Romano. 1998. Competition between private and public schools, vouchers, and peer-group effects. American Economic Review 88(1): 33–62.

Figlio, David N., and Cassandra M. D. Hart. 2014. Competitive effects of means-tested school vouchers. American Economic Journal: Applied Economics 6(1): 133–156.

Figlio, David N., and Cecilia E. Rouse. 2006. Do accountability and voucher threats improve low-performing schools? Journal of Public Economics 92(1–2): 239–255.

Fiske, Edward B., and Helen F. Ladd. 2000. When schools compete: A cautionary tale. Washington, DC: Brookings Institution Press.

Forster, Greg. 2008a. Lost opportunity: An empirical analysis of how vouchers affected Florida public schools. Indianapolis, IN: Friedman Foundation for Educational Choice.

Forster, Greg. 2008b. Promising start: An empirical analysis of how EdChoice vouchers affect Ohio public schools. Indianapolis, IN: The Friedman Foundation for Educational Choice.

Friedman, Milton. 1962. Capitalism and freedom. Chicago: University of Chicago Press.

Glomm, Gerhard, Douglas N. Harris, and TeFen Lo. 2005. Charter school location. Economics of Education Review 24(4): 451–457.

Gray, Nathan L., John D. Merrifield, and Kerry A. Adzima. 2014. A private universal voucher program's effects on traditional public schools. Journal of Economics and Finance 40(2): 319–344.

Greene, Jay P. 2001. An evaluation of the Florida A-Plus accountability and school choice program. Available https://media4.manhattan-institute.org/pdf/cr_aplus.pdf. Accessed 21 April 2020.

Greene, Jay P., and Greg Forster. 2002. Rising to the challenge: The effect of school choice on public schools in Milwaukee and San Antonio. Civic Bulletin No. 27. New York: Manhattan Institute for Policy Research.

Greene, Jay P., and Ryan H. Marsh. 2009. The effect of Milwaukee's Parental Choice Program on student achievement in Milwaukee public schools. Available https://files.eric.ed.gov/fulltext/ED530091.pdf. Accessed 21 April 2020.

Greene, Jay P., and Marcus A. Winters. 2004. Competition passes the test. Education Next 4(3): 66–71.

Greene, Jay P., and Marcus A. Winters. 2007. An evaluation of the effect of DC's voucher program on public school achievement and racial integration after one year. Catholic Education: A Journal of Inquiry and Practice 11(1): 83–101.

Harris, Doug. 2015. Good news for New Orleans. Education Next 15(4): 8–15.

Herbermann, Charles G. 1912. The Catholic encyclopedia: An international work of reference on the constitution, doctrine, discipline, and history of the Catholic church. New York: Robert Appleton Company.

Hess, Frederick M. 2002. Revolution at the margins: The impact of competition on urban school systems. Washington, DC: Brookings Institution Press.

Holley, Marc J., Anna J. Egalite, and Martin F. Lueken. 2013. Competition with charters motivates districts. Education Next 13(4): 28–35.

Hoxby, Caroline M. 2001. Rising tide. Education Next 1(4): 68–74.

Hoxby, Caroline M. 2003. School choice and school productivity. In The economics of school choice, edited by Caroline M. Hoxby, pp. 287–342. Chicago: University of Chicago Press.

Jabbar, Huriya. 2015. Competitive networks and school leaders' perceptions: The formations of an education marketplace in post-Katrina New Orleans. American Educational Research Journal 52(6): 1093–1131.

Kisida, Brian, Patrick J. Wolf, and Evan Rhinesmith. 2015. Views from private schools: Attitudes about school choice programs in three states. Washington, DC: American Enterprise Institute.

Ladd, Helen F. 2002. School vouchers: A critical view. Journal of Economic Perspectives 16(4): 3–24.

Ladd, Helen F., and Edward B. Fiske. 2003. Does competition improve teaching and learning? Evidence from New Zealand. Educational Evaluation and Policy Analysis 25(1): 97–112.

Lankford, Hamilton, and James Wyckoff. 2001. Who would be left behind by enhanced private school choice? Journal of Urban Economics 50(2): 288–312.

Loeb, Susanna, Jon Valant, and Matthew Kasman. 2011. Increasing choice in the market for schools: Recent reforms and their effects on student achievement. National Tax Journal 64(1): 141–164.

Lubienski, Christopher, Charisse Gulosino, and Peter Weitzel. 2009. School choice and competitive incentives: Mapping the distribution of educational opportunities across local education markets. American Journal of Education 115(4): 601–647.

McCall, Joanne. 2014. Lawmakers should keep public funds in public schools. South Florida Sun Sentinel, 25 March.

McMillan, Robert. 2000. Competition, parental involvement and public school performance. National Tax Association Proceedings 93: 150–155.

Rich, Motoko. 2014. Bill to offer an option to give vouchers. New York Times, 27 January.

Rouse, Cecilia, Jane Hannaway, Dan Goldhaber, and David N. Figlio. 2013. Feeling the Florida heat: How low performing schools respond to voucher and accountability pressure. American Economic Journal: Economic Policy 5(2): 251–281.

Schneider, Mark, Paul Teske, and Melissa Marschall. 2000. Choosing schools: Consumer choice and the quality of American schools. Princeton, NJ: Princeton University Press.

Schneider, Mark, Paul Teske, Melissa Marschall, and Christine Roch. 1998. Shopping for schools: In the land of the blind, the one-eyed parent may be enough. American Journal of Political Science 42(3): 769–793.

Schrier, Diane. 2014. State funding of private schools threatens public education. The Ocala Star Banner, 4 May.

Schultz, Tommy, Krista Carney, Whitney Marcavage, Nicole Jackson, Elisa Clements, Paul Dauphin, and Kim Martinez. 2017. School choice yearbook 2016–17. Washington, DC: American Federation for Children Growth Fund.

Sude, Yujie, Corey A. DeAngelis, and Patrick J. Wolf. 2018. Supplying choice: An analysis of school participation decisions in voucher programs in Washington, DC, Indiana, and Louisiana. Journal of School Choice 12(1): 8–33.

Sullivan, Margaret D., Dean B. Campbell, and Brian Kisida. 2008. The muzzled dog that didn't bark: Charters and the behavioral response of D.C. public schools. Fayetteville, AR: University of Arkansas, School Choice Demonstration Project.

Van Dunk, Emily, and Anneliese Dickman. 2002. School choice accountability: An examination of informed consumers in different choice programs. Urban Affairs Review 37(6): 844–856.

West, Martin R., and Paul E. Peterson. 2006. The efficacy of choice threats within school accountability systems. Economic Journal 116(510): 48–62.

Winters, Marcus A., and Jay P. Greene. 2011. Public school response to special education vouchers: The impact of Florida's McKay Scholarship program on disability diagnosis and student achievement in public schools. Educational Evaluation and Policy Analysis 33(2): 138–158.
