Abstract

In the academic and policy debates over the merits of charter schools, two things are clear: First, they are here to stay and, second, their quality varies widely. Policy makers therefore need to understand how to design charter laws that promote the creation of high-performing schools. Crucial to this discussion is the charter authorizing process, which varies across the nation. In some states, authorizing power is held exclusively by local school districts, whereas other states allow a range of authorizers that may include not only local districts, but also nonprofit organizations, counties, higher education institutions, or a state agency. In this paper we use individual student-level data from Ohio, which permits a wide range of organizations to authorize charter schools, to examine the relationship between type of authorizer and charter-school effectiveness as measured by students' achievement trajectories.

1.  Introduction

Nearly two decades after the establishment of the first charter schools, debate continues over whether, on average, they are more or less effective than conventional public schools in raising the achievement of the students they serve (see, e.g., Betts and Tang 2008; Abdulkadiroglu et al. 2009; CREDO 2009; Hoxby, Murarka, and Kang 2009; Zimmer et al. 2009; Gleason et al. 2010; Fryer 2011; Angrist et al. 2011).1 Differences in the results of different studies could be attributable to varying methods for estimating impacts (see the debate between CREDO 2009 and Hoxby 2009), to variation in outcomes measured (Booker et al. 2011; Imberman 2011), to diversity in charter laws and policies across states producing real differences in average effectiveness of charter schools (Buddin and Zimmer 2005), or to a combination of these factors. Despite this ongoing debate, however, one area of clear consensus has emerged: Variation in the performance of individual charter schools is wide, with some performing admirably and others struggling (see CREDO 2009; Zimmer et al. 2009).

The debate over the average effectiveness of charter schools may not be settled anytime soon. In the meantime, researchers can help inform policy makers and educators by examining whether particular factors are associated with the observed variation among charter schools. Our aim is to examine whether a charter school's authorizer helps explain this variation in performance. The answer could have two important implications. First, policy makers may want to consider modifying charter laws and regulations to favor authorizer types associated with higher-performing schools. Second, parents could be armed with better information in making educational choices for their children.

To answer this question we will use both matching and fixed effect approaches. We will discuss the details of these approaches later, but it is important to acknowledge the limitations of our research approaches upfront. We believe that our designs are likely to successfully control for selection bias in the enrollment of students into the various types of charter schools, thereby potentially providing valid causal inferences about the effects of the schools. The designs cannot, however, account for potential selection bias in the sorting of schools into authorizers. We therefore present the results as showing associations between authorizer type and charter-school performance, which may or may not be causal. We nonetheless believe the associational information is important. For parents who have little information when making enrollment decisions for their child, this analysis may provide a useful signal of whether a school is likely to increase their child's academic success. From a policy maker's perspective, the distinction between causation and correlation in authorizer effects is more important, but in the absence of any existing causal information about authorizer type, an examination of a correlation can provide useful information about possible causal factors of school effectiveness.

Previously, Carlson, Lavery, and Witte (2012) examined charter authorizers in Minnesota, finding no variation in performance across authorizers. Their analysis utilized school-level data, however, which makes it more difficult to tease out differential effects across authorizers. Therefore, it is challenging to know whether there is truly no differential effect across authorizers or whether, instead, the use of aggregate data obscured an effect. Our analysis uses longitudinal, student-level data, which provides a greater chance to detect differential effects and allows us to better control for student characteristics and selection in assessing the performance of charter schools. Subsequent sections describe in greater detail the role of authorizers as well as our data, research approach, results, and conclusions.

2.  The Role of Authorizers

Prospective charter-school operators must petition an authorizing organization for the right to open a school, and must re-apply for renewal at the conclusion of the charter term. “Through the granting and monitoring of charter schools, authorizers are administrative and accountability gatekeepers in states that have enacted charter school legislation” (Anderson and Finnigan 2001, p. 6). The types of authorizers vary across states and can include local school boards, postsecondary educational institutions, the state department of education, an independent state charter board, county educational agencies, or nonprofit groups (Palmer 2007).

Theoretically, authorizers fulfill three key roles: (1) they decide which charter schools will be permitted to open, (2) they monitor each school's performance and determine when to offer help, and (3) they decide which schools deserve reauthorization (Vergari 2001). Many authorizers consider the first role the most important. “[There] is a belief, at least among staff at some of the larger authorizers, that if they make the process rigorous enough at the beginning, then they won't need to ‘worry’ as much about the school in practice” (Bulkley 2001, p. 7). The initial authorizing process usually includes performance expectations in the form of a contract. When contract expectations are not met, schools can face sanctions and, in the most serious cases, closure (Bulkley 2001).

There is a debate about which types of authorizing agencies have the proper incentives to fulfill these roles effectively. For example, because charter schools compete with district schools for students and resources, local school districts typically have little incentive to authorize charter schools and may not provide strong support to those they do authorize. Other authorizers, such as nonprofits and universities, may have less of a conflict of interest, but they, too, may have distorted incentives created by the fees they charge for authorizing charter schools. These fees may encourage the authorization of charter schools while creating little incentive to scrutinize charter applications or the performance of schools.2 Finally, authorizers of all types may be reluctant to take on the challenge of shutting down a low-performing charter school that has its own political constituency (Bulkley 2001).

In addition to challenges with incentives, authorizing agencies can have varying capacity to fulfill their roles effectively. For instance, some districts or nonprofit agencies may be too small, with too few staff to sufficiently vet charter proposals or to provide appropriate support or oversight to charter schools. In contrast, Anderson and Finnigan (2001, p. 9) argue that larger authorizers can and typically do require a more rigorous selection process, suggesting "[they] may have learned from experience to be clear about their expectations at the beginning of their relationship with charter schools." These larger authorizers may also have more "developed accountability systems" (Anderson and Finnigan 2001, p. 6). Smaller authorizers, meanwhile, may vary significantly in their selection processes and requirements (Bulkley 2001). Even apart from size, the capacity of authorizers may be limited by the fact that the authorizing function is often quite different from their core activities: universities and nonprofit agencies, for example, may have no prior experience relevant to the tasks of authorizing, overseeing, and supporting charter schools; school districts, by contrast, may be more likely to have relevant expertise. Furthermore, the funding available to support authorizers may be very limited.

Finally, the matching process between potential charter schools and authorizers could affect the quality of the schools and, ultimately, the achievement of their students. For instance, some authorizers may take the process very seriously, authorizing schools only after a thorough review that leaves them confident the schools will succeed. These same authorizers may be more likely to provide the support schools need and to close unsuccessful schools. In contrast, other authorizers may be more lax in authorization decisions and may provide less oversight and support. Prospective charter-school founders may be aware of these differences, in which case they may select into the authorizers they deem most likely to approve them.

In sum, differences in the interest and capacity of authorizers to approve, monitor, support, and regulate charter schools might be expected to lead to considerable variation in the effectiveness of charter schools. The importance of effective authorizing is now commonly recognized—there even exists a national association of authorizers that seeks to improve authorizer practices. This paper takes on the question by examining the relationship between student achievement and attendance at charter schools authorized by the various types of authorizers in Ohio.3 Ohio represents one of the most flexible states in terms of authorizers: the state, local districts, county educational service centers (ESCs), and nonprofit organizations have all authorized charter schools. The Center for Education Reform (CER), an advocate for charter schools, gives the state relatively high marks for the flexibility of its authorizer types.4 In the next section, we describe the types of authorizers and discuss the evolution of charter authorizing in Ohio over time.

3.  Chartering Authorities in Ohio and Across States

Currently, forty-two states plus the District of Columbia allow charter schools. Charter laws vary across a number of dimensions, including the types of agencies that can authorize charter schools (Consoleti 2011). Table 1 highlights the breakdown of chartering authorities across these locations. The most common chartering authority is the local school district, which is granted authorizing authority in thirty-five locations. State education agencies have authorizing authority in twenty-one states. Nine states allow higher education institutions, and eight states allow independent charter boards, to authorize charter schools. Only two states allow nonprofit institutions to authorize charter schools. Additionally, one state allows mayors, and one allows city councils, to authorize charter schools.

Table 1.
Types of Active Chartering Authorities Across States

Authorizer                                            Number of States
Local School Board                                    35
Independent State Charter Board                        8
Postsecondary Education Institution                    9
State Board of Education/Commissioner of Education    21
Mayor                                                  1
City Council                                           1
Nonprofit Organizations                                2
Regional School Districts

Source: NAPCS (2013), Dashboard information through 2008.

The authorizing of charter schools in Ohio has evolved over time. The state passed legislation permitting charter schools in June 1997. Authority to sponsor start-up charters was initially given to the University of Toledo and the Lucas County ESC through a pilot program. In August of that same year, Ohio's eight urban districts (the "Ohio eight") and the State Board of Education were given the authority to sponsor schools as well. In 1999, new legislation expanded charter school authorizing authority to the twenty-one largest districts (Palmer et al. 2006). The following year, legislation extended that authority even further, to any school district determined to be in "academic emergency." With these expansions, traditional public school districts could sponsor charter schools in their own or in any other district in their county (ODE 2007). In 2002, a state auditor's report raised questions about a possible conflict of interest in the state acting as both promoter and regulator of these schools, and the State Board ended its sponsorship (ODE 2007). Instead, the State Board and the Ohio Department of Education (ODE) became an "authorizer of sponsors" (Palmer et al. 2006). As a result, dozens of charter schools previously authorized by the state had to transition to other authorizers by 30 June 2005.5

The legislation ending State Board sponsorship allowed for four types of authorizers: (1) public school districts, (2) county-based educational service centers,6 (3) thirteen state universities, and (4) qualified tax-exempt entities under Section 501(c)(3) of the Internal Revenue Code (ODE 2007). Under the new law, prospective new authorizers were (and still are) required to gain approval from the state.7 Not a single university was authorizing K–8 charter schools in the timeframe of our data, however, so we do not estimate an effect for university-authorized charter schools.

Currently, sponsors are accountable to the Office of Community Schools (OCS), are responsible for adhering to their agreements, and must report changes in their authorized schools to the ODE annually, ten days before the school year begins. Sponsors must also monitor the fiscal and academic performance of their schools bimonthly (OCS 2010) and report to OCS and to the parents of all students at least once per year. Sponsors are also responsible for intervening in a charter school's operations and imposing any sanctions (e.g., declaring a school on probation, suspending operations, or terminating the contract). To start a charter school, developers must finalize a conceptual design for the proposed school and evaluate data to determine the need for it. Each sponsor is permitted to authorize no more than fifty schools. Sponsors may not charge fees of more than 3 percent of the total payments for operating expenses that a sponsored school receives from the state. In the charter agreement, the sponsor may offer additional services for associated fees, but these fees cannot be a precondition for sponsorship. As noted previously, the charter school–authorizer match could be important to the quality of schools, and charter school applicants could conceivably shop around for authorizers based on fees charged, quality of support, or ease of oversight. A representative of Ohio's Department of Education noted in a phone conversation that he has no evidence that schools shop around for authorizers with the most lax oversight.8 Nevertheless, our inability to account for the matching process limits our ability to make causal claims about the effects of authorizers.

In sum, Ohio remains unusual in the wide range of organization types it permits to authorize charter schools. Consequently, Ohio is a uniquely valuable site for examining how the authorizer types used in states across the country relate to student achievement outcomes.

4.  Data

To address our research question, we collected statewide, longitudinally linked, student-level data from Ohio's Department of Education for school years 2004–05 through 2007–08 in the elementary and middle grades.9 The data set provided by the state includes both raw and scaled test scores. For any particular year, at least 98 percent of tested students shared a consistent outcome measure (i.e., at least 98 percent had the raw score, or at least 98 percent had the scaled score, for that year); for fewer than 2 percent of students in a particular year we observed only one of the two scores. To put all scores on a consistent scale, we normalized the raw and scaled scores by year and grade. More specifically, we converted all raw and scaled test-score results into rank-based z-scores, by year, grade, and subject, with a mean of zero and a standard deviation of one.10 For students with both scores, raw and scaled scores were highly correlated, at 0.87 or above across the years in the data set.11 Using these test data, individual student demographic data (including race and gender), and unique student identifiers, we created a statewide longitudinal student-level data set that follows students as they move from traditional public schools (TPSs) to charter schools and vice versa. This is important because it allows us to examine the performance of students before, during, and after attending a charter school.
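To make the normalization concrete, the following is a minimal sketch of the rank-based z-score conversion described above (and detailed in note 10), assuming a long-format pandas data frame; the file and column names are illustrative rather than the actual Ohio data layout, and the choice to prefer the scaled score when both are present is our assumption.

```python
import pandas as pd
from scipy.stats import norm

def rank_z(scores: pd.Series) -> pd.Series:
    """Rank-based z-score (Kirby et al. 2002): rank the scores, divide each
    rank by n + 1 so all values fall strictly inside (0, 1), then apply the
    inverse normal CDF. No normality assumption on the raw distribution."""
    return pd.Series(norm.ppf(scores.rank() / (scores.count() + 1)),
                     index=scores.index)

# Hypothetical long-format file: one row per student-year-subject with columns
# student_id, year, grade, subject, raw_score, scaled_score (names illustrative).
tests = pd.read_csv("ohio_tests.csv")

# Normalize raw and scaled scores separately within year-grade-subject cells,
# so both end up on the same rank-based z-score scale.
for col in ["raw_score", "scaled_score"]:
    tests[f"{col}_z"] = (tests.groupby(["year", "grade", "subject"])[col]
                              .transform(rank_z))

# For the <2 percent of student-years with only one of the two scores, fall
# back to whichever normalized measure exists (scaled preferred here).
tests["z_score"] = tests["scaled_score_z"].fillna(tests["raw_score_z"])
```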

Table 2 highlights the descriptive characteristics of noncharter students as well as charter students by the various types of chartering authorities across the 2004–05 through 2007–08 school years. The table includes only students who have test scores in either math or reading, because these are the students ultimately included in the achievement analysis. The table excludes charter high schools (because we do not have test scores for these students) and virtual charter schools (because, as we describe later, we are not confident we could effectively control for the unobservables of students attending these schools). We categorize authorizer type for schools based on the original authorizer (some charter schools switch authorizers over time, which we will discuss later).

Table 2.
Descriptive Statistics of Ohio's Tested Students, 2004–05 through 2007–08 School Years

                             Traditional   District      ESC           Nonprofit     State
                             Public        Authorized    Authorized    Authorized    Authorized
                             Schools       Charter       Charter       Charter       Charter
                                           Schools       Schools       Schools       Schools
Proportion Black             0.15          0.43          0.65          0.72          0.81
Proportion White             0.78          0.50          0.29          0.19          0.13
Proportion Hispanic          0.02          0.02          0.03          0.04          0.03
Proportion Other             0.03          0.04          0.03          0.04          0.03
Proportion Male              0.51          0.47          0.54          0.49          0.48
Math Z-Score                 0.02          −0.34         −0.75         −0.88         −0.63
Reading Z-Score              0.01          −0.23         −0.67         −0.75         −0.58
Average Years of Operation   NA            3.61          4.23          3.30          6.43

Note: Table only contains students who have math and reading scores.

On average, charter schools affiliated with all authorizer types have a larger share of black students and a smaller share of white students than TPSs in Ohio. These differences are most likely related to the disproportionate representation of charter schools in urban districts, which have a disproportionate share of the state's black students. Across the various authorizer types, charter schools have substantially lower average test score levels than TPSs. Nonprofit-authorized schools have the lowest average test scores of all the authorizer groups. These descriptive averages, however, do not account for student differences and should not be viewed as evidence of effectiveness. Finally, among the authorities, nonprofit-authorized charter schools have been in operation the shortest amount of time and state-authorized charter schools have been in operation the longest.

Table 3 shows the total number of elementary and middle charter schools authorized by authorizer type and year. The information on type of authorizer for each school was collected with the help of the Fordham Foundation using the ODE's Annual Reports and Ohio Education Directory System Redesign.12 The table excludes all schools for which we cannot observe test score data—notably, charter high schools. As the table indicates, the number of schools authorized by nonprofit organizations increased rapidly over time. In 2007–08, there were 130 schools authorized by nonprofit organizations compared with 24 in 2004–05. In contrast, the number of charter schools authorized by districts showed no overall increase and the number of charter schools authorized by ESCs has slightly declined. Finally, while the State Department of Education authorized 46 schools in 2004–05, the change in legislation took the department out of the authorizing business, which largely explains the dramatic increase in the number of charter schools sponsored by nonprofits in the 2005–06 school year.

Table 3.
Number of Charter Schools Authorized by Type of Authorizer by Year (Cumulative Totals)

Type of Authorizer            2004–05    2005–06    2006–07    2007–08
District                      46         56         47         46
Educational Service Centers   85         97         74         70
Nonprofit                     24         83         112        130
State                         46

In the 2007–08 school year, the most recent year of data in our analysis, the average district authorizer oversaw only 1.4 charter schools. In contrast, ESC and nonprofit authorizers oversaw, on average, 14.3 and 21.1 schools, respectively. Overall, individual school districts had the least experience authorizing charter schools. It is unclear, however, whether this lack of experience is offset by the experience districts have in overseeing and supporting schools.

5.  Research Design

Our design aims to address the question of whether students attending schools affiliated with different authorizer types experience different achievement effects. We cannot make causal claims about any observed associations between authorizer types and student impacts: charter schools were not randomly assigned to charter authorizers, and we have no information that would allow us to control for the process of matching schools to authorizers. Our results are therefore necessarily correlational with respect to the question of why performance varies across types of authorizer. Nonetheless, correlational information about the relationship between authorizer type and school-specific impacts on students is potentially useful for parents as a signal of school quality and for policy makers as a first step toward identifying effective authorizer types.

Whether researchers can make causal claims about the performance of charter schools has been at the center of the debate over the more general question of whether charter schools are effective in improving student achievement. When examining charter school effectiveness, researchers often worry about differences between students who self-select into charter schools and students who do not (Hoxby and Murarka 2006; Ballou, Teasley, and Zeidner 2007). Students choosing charters might be more motivated and have more engaged parents, or they might be students looking for alternatives because they have struggled in TPSs. Either way, these students may differ from a random set of TPS students in unobserved ways that could bias the results. The most rigorous research has used lottery data (to simulate an experimental design) to account for any unobserved differences between charter and noncharter students (Hoxby and Rockoff 2004; Hoxby and Murarka 2006; Abdulkadiroglu et al. 2009; Gleason et al. 2010). When lottery data are not available, researchers have often used a student fixed-effect approach, which examines whether each student's achievement is higher while attending a charter school than the same student's achievement while in a TPS (Zimmer et al. 2003; Hanushek et al. 2007; Betts, Tang, and Zau 2006; Zimmer and Buddin 2006; Sass 2006; Bifulco and Ladd 2006; Witte et al. 2007; Booker et al. 2007; Imberman 2011; Zimmer et al. 2012). This approach compares students to themselves over time, which, to the extent that students' unobservable characteristics remain constant over time, should minimize selection bias.
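As a rough illustration of the within-student logic, the sketch below demeans the outcome and the charter indicator within each student before running OLS, so the charter coefficient is identified only from students observed in both sectors. This is a minimal stand-in for a fuller specification; the data frame and column names are our own assumptions.

```python
import pandas as pd
import statsmodels.api as sm

def student_fe_estimate(panel: pd.DataFrame):
    """Within-student (fixed effect) estimator on a student-year panel with
    columns student_id, math_z (outcome), and charter (1 in charter years)."""
    g = panel.groupby("student_id")
    y = panel["math_z"] - g["math_z"].transform("mean")     # demeaned outcome
    x = panel["charter"] - g["charter"].transform("mean")   # demeaned treatment
    # No constant: demeaning removes each student's fixed effect and intercept.
    return sm.OLS(y, x.to_frame("charter")).fit(cov_type="HC1")
```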

We likewise want to guard against selection bias in addressing the question of whether a student would do better in a charter school of one authorizer type than another. As noted previously, a randomized design is the preferred method of controlling for self-selection, but we do not have access to admissions lottery data for Ohio charter schools. Moreover, even if lottery data did exist, not all charter schools are oversubscribed, so the method would not allow a comprehensive evaluation of all schools across all chartering authorities. The often-used student fixed-effect approach, meanwhile, has recently been challenged because it includes only a subset of charter students (which raises external validity concerns) and because prior achievement trajectories may not provide good evidence on future achievement trajectories for the subset of students who can be included (which raises internal validity concerns) (Hoxby and Murarka 2006). Zimmer et al. (2009) developed a modified version of the student fixed-effect approach that addresses these criticisms by focusing on schools whose lowest grade starts above the lowest grade tested. By doing so, the analysis includes nearly all students attending these schools, because nearly all students "switch" into them from elementary schools (for charter middle schools) or from middle schools (for charter high schools). In Ohio, however, this approach would require a focus exclusively on middle schools, because students are not tested in consecutive grades in high school. That would restrict our attention to only fifteen district-authorized, twenty-eight ESC-authorized, fourteen nonprofit-authorized, and six state-authorized charter schools—collectively a small minority of all Ohio charter schools, not sufficient to allow us to detect moderately sized effects of authorizer type. Therefore, we use the fixed effect strategy only as a secondary analysis, and our primary analysis includes all schools—elementary and middle—with the full understanding that the approach has some inherent weaknesses.

For our primary analysis, we use two approaches in combination to minimize selection issues while including charter schools serving elementary as well as middle-school grades. First, we bypass the problem of comparing choosers to non-choosers by constraining the comparison groups to consist entirely of other students in charter schools. We conduct separate analyses for each authorizer type, in each of which the comparison students are drawn from charter schools authorized by any of the other authorizer types. Consequently, we need not make the strong assumption that choosers are similar to non-choosers; our treatment groups and comparison groups consist entirely of choosers. Indeed, we directly address the internal validity concern of Hoxby and Murarka (2006) by ensuring that both the charter students in the treatment group and the charter students in the comparison group were in TPSs prior to entering charters. The fact that the treatment students have changed schools will therefore not bias expectations about their future performance because the comparison students have likewise transferred.13

Second, among the population of charter students potentially available as comparison students, we use a matching approach popularized by Rubin (1977) and Rosenbaum and Rubin (1983) to develop a counterfactual. Although matching procedures can take many forms, we use a propensity-score matching algorithm to create a control group having the same distribution of covariates as the treatment group. We then estimate the difference in outcomes between the treatment students and this counterfactual control group (Smith and Todd 2001). Formally, letting z indicate treatment status, y1 the outcome under treatment, and y0 the counterfactual outcome, the estimand can be specified as:

\[
\Delta = E(y_1 \mid z = 1) - E(y_0 \mid z = 1).
\]

This approach can give causal results in estimating a "treatment" when observable characteristics (x) are sufficient to make the counterfactual outcome y0 independent of z:

\[
E(y_0 \mid z = 1, x) = E(y_0 \mid z = 0, x).
\]

In our case, by restricting the population to students who all chose charter schools, we have a stronger argument that, conditional on the vector of covariates x, z is independent of y0. For the propensity-score matching approach, a vector of observable characteristics (x) is used to predict the probability of treatment p = Pr(z = 1|x) (Heckman, Ichimura, and Todd 1998; Doyle 2009). Matching treatment and control students with similar propensities of attending the given type of authorized charter school balances the observable characteristics between the two pools. Finally, once we create the pools of treatment and control students, we conduct an ordinary least squares (OLS) regression to control for any remaining differences between the treatment group and the matched comparison group and to improve precision.
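The propensity score itself can be estimated with a simple logit of treatment status on pre-entry covariates. The sketch below is a minimal illustration, assuming a data frame for a single authorizer-type analysis in which z flags treatment students; the covariate names (prior-year scores and demographics) are our own illustrative labels.

```python
import pandas as pd
import statsmodels.api as sm

# Covariates measured in the year before charter entry (names illustrative).
COVARS = ["prior_math_z", "prior_read_z", "male", "black", "hispanic", "other_race"]

def fit_pscore(df: pd.DataFrame) -> pd.Series:
    """Fit the logit p = Pr(z = 1 | x) and return each student's predicted
    propensity of entering the treatment authorizer type's charter schools."""
    X = sm.add_constant(df[COVARS])
    return sm.Logit(df["z"], X).fit(disp=0).predict(X)
```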

A similar approach (without a prior restriction to a charter population) was used in a recent report on charter middle schools affiliated with the Knowledge Is Power Program (Tuttle et al. 2010). Creation of a carefully matched comparison group has been shown in some circumstances to produce impact estimates that replicate the findings of randomized experiments (Cook, Shadish, and Wong 2008). More specifically, recent research has suggested that a matching strategy can replicate randomized design results when examining school choice programs (Bifulco 2010; Furgeson et al. 2012).

Some charter schools changed authorizers during the period included in our data. Our analysis permanently assigns each school to the authorizer type that was first observed for that school. We keep schools attached to their first observed authorizer on the assumption that schools that change authorizers are not likely to immediately change their performance to reflect the influence of the new authorizer. This approach is also consistent with the view that an authorizer's most important function is the initial screening of charter applicants. Nevertheless, we conducted a sensitivity analysis of dropping all charter schools that switched authorizers over time. These results are similar to the primary results (later presented in table 6) with no substantive differences.

In addition, some students switched among charter schools during the time frame of our data set. Therefore, it is possible that some students could end up in the control group for their previous treatment. To prevent this, we permanently assign students to their initial treatment. The analysis is therefore an “intent-to-treat” analysis at both the school (related to authorizer treatment) and student (related to school treatment) levels.14 In a sensitivity analysis, we relaxed the permanent assignment of students to treatment, and the estimates are almost identical for district-, ESC-, and nonprofit-authorized schools, with slightly more positive estimates for state-authorized schools.
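Both intent-to-treat rules are easy to express in code. The sketch below, with illustrative column names, pins each school to its first observed authorizer type and each student to their initial charter treatment, assuming a student-year panel like the one described in the data section.

```python
import pandas as pd

# panel: student-year rows with school_id, student_id, authorizer_type,
# charter (0/1), and year; column names are illustrative assumptions.
panel = panel.sort_values("year")

# Schools keep the authorizer type first observed for them...
first_authorizer = panel.groupby("school_id")["authorizer_type"].first()
panel["authorizer_itt"] = panel["school_id"].map(first_authorizer)

# ...and students keep their initial charter treatment, even if they later
# switch into a charter school of a different authorizer type.
first_treatment = (panel[panel["charter"] == 1]
                   .groupby("student_id")["authorizer_itt"].first())
panel["treatment_itt"] = panel["student_id"].map(first_treatment)
```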

Finally, the analysis excludes "virtual" charter schools—that is, schools that deliver their educational services primarily via communication technology to students in their own homes rather than in a conventional school building. Virtual schools tend to serve quite different populations than other charter schools do, and it is not clear that their performance can be evaluated with the same methods used for other charter schools (Zimmer et al. 2009). Virtual schools are also unevenly distributed across the authorizers: of the forty virtual schools observed in the data, thirty-three were authorized by districts, while ESCs and nonprofits authorized four and three, respectively.

Analytic Details for Propensity-Score Matching

To create the comparison groups for our primary analysis, we first restricted the potential matches to students within the same grade and year. For example, in examining the effect of ESC-authorized charter schools, we identified for each treatment student a matched control student in the same grade and year attending a school authorized by one of the three other types of authorizers. From this sample, we created matches based upon students' observable characteristics in the year prior to entering a charter school. The observable characteristics include math and reading test scores, gender, and race.15 Using these observable characteristics, logistic regressions were conducted separately for each treatment (i.e., district-authorized, ESC-authorized, nonprofit-authorized, state-authorized), each year (2005 through 2007), and each grade (grades 3 through 7), with the treatment variable serving as the dependent variable in each logistic regression.16 Using the model-generated propensity scores for the likelihood of participating in each treatment, we created a one-to-one match, without replacement, of a control student for each treatment student.17 After creating the matches for a particular treatment, grade, and year, we pooled for each authorizer the treatment and matched control students across grades and years, yielding four pooled data sets—one for each type of charter authorizer. We also conducted a sensitivity analysis using a one-to-one nearest-neighbor match with a caliper of 0.01 standard deviations. This restriction dropped less than 1 percent of the observations and produced nearly identical results.18
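The grade-by-year matching step could be sketched as follows: a greedy one-to-one nearest-neighbor match on the propensity score, without replacement, run within each grade-year cell and then pooled. The 'pscore' column is assumed to come from a logit like the one sketched earlier, the optional caliper mirrors our 0.01 sensitivity check, and all names are illustrative.

```python
import pandas as pd

def match_one_cell(cell: pd.DataFrame, caliper=None) -> pd.DataFrame:
    """One-to-one nearest-neighbor match, without replacement, within a
    single grade-by-year cell; expects 0/1 'z' and 'pscore' columns."""
    treated = cell[cell["z"] == 1]
    controls = cell[cell["z"] == 0].copy()
    keep = []
    for idx, row in treated.iterrows():
        if controls.empty:
            break
        dist = (controls["pscore"] - row["pscore"]).abs()
        best = dist.idxmin()
        if caliper is None or dist[best] <= caliper:  # caliper sensitivity check
            keep += [idx, best]
            controls = controls.drop(index=best)      # no replacement
    return cell.loc[keep]

# Match within each grade-year cell, then pool the cells for one authorizer.
matched = pd.concat(match_one_cell(g) for _, g in students.groupby(["grade", "year"]))
```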

Table 4 displays the observable characteristics of students for each authorizer type relative to their matched control students. The descriptive statistics suggest that the matching procedure created close matches: none of the observed student characteristics differ statistically between treatment and matched control students.19 Moreover, because our pool of matches includes only charter students, our matched comparison groups are similar to the treatment groups not only in demographic factors and baseline achievement levels, but also in the fact that they chose to enroll in charter schools.

Table 4.
Quality of Matches, by Authorizer Type

                      District   District   ESC        ESC        Nonprofit  Nonprofit  State      State
                                 Matches               Matches               Matches               Matches
Math Z-Scores         −0.41      −0.43      −0.75      −0.74      −0.78      −0.78      −0.81      −0.80
Reading Z-Scores      −0.35      −0.36      −0.66      −0.66      −0.71      −0.71      −0.76      −0.76
Proportion Black      0.44       0.45       0.73       0.72       0.72       0.73       0.76       0.77
Proportion White      0.49       0.48       0.21       0.22       0.21       0.21       0.19       0.18
Proportion Hispanic   0.03       0.02       0.04       0.04       0.05       0.04       0.03       0.03
Proportion Male       0.49       0.49       0.52       0.51       0.49       0.49       0.50       0.48

After identifying the matched comparison group, to increase precision and to control for any remaining observable differences between treatment and comparison students (Ho et al. 2007; Abdulkadiroglu et al. 2009),20 we used a “value-added” OLS model examining student math and reading test scores (separately) as the outcome measures and controlling for students’ observable characteristics, including prior year test scores, whether the student transferred from one year to the next, and how long the school has been in operation.

The formal model is represented by:
\[
Y_{i,j,t} = \beta_0 + Y_{i,j,t-1}\beta_1 + X_{i,t}\beta_2 + \beta_3\, Mob_{i,t} + OpYear_{i,t}\beta_4 + \delta\, T_{i,t} + e_{i,j,t} \tag{1}
\]

where Y_{i,j,t} is the math or reading test score (run in separate models) for student i in subject j and year t; Y_{i,j,t−1} is a vector of the prior-year math and reading test scores for student i; X_{i,t} is a vector of controls for individual student characteristics (i.e., black, white, Hispanic, "other" race, and gender); Mob_{i,t} is an indicator of whether student i transferred to a new school in year t; OpYear_{i,t} is a vector of dummy variables indicating whether the charter school has been operating for one year, two years, or three or more years in year t; T_{i,t} is a binary variable indicating what type of charter school student i attends in year t (i.e., a binary variable for district-authorized, ESC-authorized, nonprofit-authorized, or state-authorized charter school, depending on the analysis);21 and e_{i,j,t} is the error term. The model also includes grade-by-year fixed effects. In the model, we account for the lack of independence of student observations within schools by computing Huber–White "sandwich" standard errors (also known as robust standard errors).
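A minimal sketch of equation 1 in code, under the assumption of a pooled matched sample like the one constructed above: variable names are illustrative, 'treat' is the authorizer-type indicator, and the grade-by-year fixed effects enter through the formula. The Huber–White adjustment is implemented here as sandwich errors clustered on school.

```python
import statsmodels.formula.api as smf

formula = (
    "math_z ~ treat + prior_math_z + prior_read_z + black + hispanic"
    " + other_race + male + mover + op_year2 + op_year3plus + C(grade):C(year)"
)
# Cluster-robust "sandwich" standard errors account for the lack of
# independence of student observations within schools.
fit = smf.ols(formula, data=matched).fit(
    cov_type="cluster", cov_kwds={"groups": matched["school_id"]}
)
print(fit.params["treat"], fit.bse["treat"])  # the authorizer-type estimate
```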

If selection on unobservable characteristics is in fact comparable in charter schools across authorizer types (conditional on our matches and OLS adjustments), then an analysis of student outcomes using a matched comparison group and additional controls for remaining observable characteristics should produce an unbiased estimate of the effect of the charter schools associated with each authorizer type. Because we cannot be assured that there are not unobserved differences between the population of students attending schools authorized by the various chartering authorities, we conducted a falsification test.

The falsification test examines whether charter authorizer type is associated with differential gains for students in the years before they enter the charter schools. If we detected spurious effects of authorizer type before students entered the charter schools, the success of our strategy in dealing with selection bias at the student level would be in doubt. In fact, as table 5 indicates, across reading and math outcomes for all four authorizer types, the falsification test detects no significant spurious impacts for students in the year prior to entering the charter school. This provides some reason for confidence that our method of estimating the effects of charter schools affiliated with different authorizer types is not substantially biased by unobserved student selection.

Table 5.
Falsification Test

                          District Analysis    ESC Analysis         Nonprofit Analysis   State Analysis
Variable                  Math      Reading    Math      Reading    Math      Reading    Math      Reading
District                  −0.07     −0.03
                          (0.07)    (0.08)
ESC                                            0.04      0.03
                                               (0.05)    (0.05)
Nonprofit                                                           −0.02     −0.06
                                                                    (0.05)    (0.05)
State                                                                                    0.02      0.05
                                                                                         (0.07)    (0.08)
Black                     −0.24*    −0.31*     −0.26*    −0.22*     −0.39*    −0.35*     −0.15     −0.12
                          (0.08)    (0.08)     (0.06)    (0.06)     (0.06)    (0.06)     (0.12)    (0.10)
Hispanic                  0.08      −0.23      −0.05     −0.16      −0.18     −0.13      −0.01     0.22
                          (0.43)    (0.58)     (0.09)    (0.11)     (0.10)    (0.11)     (0.28)    (0.30)
Other                     −0.24     −0.14      −0.21     −0.23      −0.15     −0.18      −1.03     −0.57
                          (0.39)    (0.30)     (0.18)    (0.18)     (0.21)    (0.19)     (0.53)    (0.48)
Male                      0.12      −0.07      −0.06     −0.09      0.02      −0.10*     0.05      0.03
                          (0.07)    (0.08)     (0.05)    (0.05)     (0.05)    (0.05)     (0.07)    (0.07)
Mover                     0.00      −0.04      −0.10     −0.04      −0.01     0.03       −0.05     −0.10
                          (0.08)    (0.09)     (0.05)    (0.06)     (0.05)    (0.06)     (0.07)    (0.08)
Prior Math Z-Score        0.51*     0.24*      0.49*     0.27*      0.47*     0.28*      0.43*     0.35*
                          (0.05)    (0.05)     (0.04)    (0.04)     (0.04)    (0.04)     (0.05)    (0.07)
Prior Reading Z-Score     0.21*     0.42*      0.19*     0.40*      0.21*     0.41*      0.26*     0.34*
                          (0.05)    (0.05)     (0.03)    (0.04)     (0.03)    (0.04)     (0.06)    (0.06)
Constant                  0.19      0.15*      0.00      0.20       0.26*     0.55*      −0.14     −0.15
                          (0.12)    (0.12)     (0.16)    (0.26)     (0.13)    (0.15)     (0.31)    (0.43)
Grade-Year Fixed Effect   YES       YES        YES       YES        YES       YES        YES       YES
OBS                       357       357        728       728        718       718        289       289
R-Squared                 0.50      0.44       0.50      0.44       0.55      0.52       0.53      0.49

Notes: Robust standard errors are in parentheses.

*Statistically significant at the 5% level.

As a secondary analysis, we also used a student fixed effect approach with the full sample of K–8 charter schools and TPSs across Ohio. Although we noted the limitations of this approach earlier, we include it because it has become common practice in nonexperimental charter studies. We discuss the details and present the results of the student fixed effect analysis in the appendix; the next section presents the results of our main analysis. The fixed effect results are largely consistent with the matching results, which provides some sense of robustness given that the two approaches use different comparison groups.

Results

Table 6 presents the results of our primary analysis using the matched comparison groups with the value-added regression. As a reminder, we ran the analyses separately for each authorizer type and each subject. Estimated achievement effects of district-, ESC-, and state-authorized charter schools are statistically indistinguishable from those of other charter schools in both reading and math. In contrast, our analysis suggests that achievement gains for students in charter schools authorized by nonprofits are lower, falling short by statistically significant margins in both math and reading.

Table 6.
Primary Results

                             District Analysis    ESC Analysis         Nonprofit Analysis   State Analysis
Variable                     Math      Reading    Math      Reading    Math      Reading    Math      Reading
District                     0.07      0.08
                             (0.06)    (0.06)
ESC                                               0.00      0.00
                                                  (0.03)    (0.02)
Nonprofit                                                              −0.09*    −0.09*
                                                                       (0.03)    (0.03)
State                                                                                       0.04      0.04
                                                                                            (0.03)    (0.03)
Black                        −0.07     −0.05      −0.11*    −0.09*     −0.12*    −0.10*     −0.09*    −0.09*
                             (0.04)    (0.04)     (0.03)    (0.02)     (0.03)    (0.02)     (0.03)    (0.03)
Hispanic                     −0.25*    −0.17*     −0.16*    −0.13*     −0.20*    −0.14*     −0.10     −0.09*
                             (0.08)    (0.07)     (0.05)    (0.03)     (0.04)    (0.04)     (0.06)    (0.04)
Other                        −0.13*    −0.03      −0.13*    −0.04*     −0.10*    −0.05      −0.05     0.03
                             (0.05)    (0.06)     (0.04)    (0.04)     (0.05)    (0.06)     (0.05)    (0.06)
Male                         0.03      −0.17*     0.01      −0.11*     0.03*     −0.08*     0.02      −0.10*
                             (0.02)    (0.03)     (0.01)    (0.01)     (0.01)    (0.02)     (0.02)    (0.02)
Mover                        −0.11*    −0.11*     −0.07*    −0.07*     −0.07*    −0.06*     0.00      −0.02
                             (0.03)    (0.04)     (0.02)    (0.02)     (0.02)    (0.02)     (0.02)    (0.02)
Prior Math Z-Score           0.52*     0.26*      0.46*     0.22*      0.46*     0.21*      0.42*     0.20*
                             (0.03)    (0.02)     (0.01)    (0.01)     (0.01)    (0.01)     (0.02)    (0.01)
Prior Reading Z-Score        0.23*     0.52*      0.22*     0.51*      0.20*     0.51*      0.23*     0.51*
                             (0.02)    (0.02)     (0.01)    (0.01)     (0.01)    (0.01)     (0.01)    (0.02)
Attend school in second      −0.13*    −0.07      −0.02     −0.01      −0.03     −0.05      0.04      0.06
year of operation            (0.06)    (0.07)     (0.04)    (0.03)     (0.04)    (0.05)     (0.04)    (0.05)
Attend school in third or    −0.10     −0.13*     −0.04     −0.05      −0.06     −0.07*     0.02      0.01
more years of operation      (0.05)    (0.05)     (0.03)    (0.03)     (0.03)    (0.03)     (0.04)    (0.03)
Constant                     0.28      −0.01      0.36*     0.53*      −0.25*    0.50*      0.19      0.47*
                             (0.21)    (0.34)     (0.10)    (0.11)     (0.11)    (0.13)     (0.13)    (0.16)
Grade-Year Fixed Effect      YES       YES        YES       YES        YES       YES        YES       YES
OBS                          2,906     2,921      13,734    13,739     8,522     8,513      7,126     7,120
R-Squared                    0.56      0.57       0.45      0.49       0.44      0.48       0.41      0.46

Notes: Robust standard errors are in parentheses.

*Statistically significant at the 5% level.

One threat to the validity of these findings as reflecting the effectiveness of authorizers is the possibility that authorizer type is correlated with the length of time the charter schools have been operating. Many prior studies have found evidence that the performance of the typical charter school is weakest in its first year of operation (Bifulco and Ladd 2006; Sass 2006; Hanushek et al. 2007). If nonprofit authorizers had a disproportionate share of newly opened charter schools in the years included in our data, the lower apparent effectiveness of their schools would not be attributable to the authorizers. We therefore conducted a sensitivity analysis identical to our primary analysis except that it excludes all charter schools in their first or second year of operation. The results for this subsample of schools with at least three years of experience are presented in table 7. The point estimates are generally comparable to those in our primary analysis. None of the estimates for the experienced subsample achieves statistical significance, but in the case of the nonprofit authorizers this is likely the result of limited statistical power: the point estimates for nonprofit-authorized schools are only slightly smaller in magnitude in the experienced subsample than in the full sample. Although we cannot completely rule out the possibility that the negative nonprofit effect is smaller for more experienced schools, the signs and magnitudes of the coefficients are similar.

Table 7.
Sensitivity Analysis of Students in Charter Schools That Are in Their Third or More Years of Operation

                          District Analysis    ESC Analysis         Nonprofit Analysis   State Analysis
Variable                  Math      Reading    Math      Reading    Math      Reading    Math      Reading
District                  0.06      0.04
                          (0.07)    (0.08)
ESC                                            −0.04     −0.03
                                               (0.04)    (0.03)
Nonprofit                                                           −0.06     −0.05
                                                                    (0.04)    (0.04)
State                                                                                    0.05      0.05
                                                                                         (0.04)    (0.03)
Black                     −0.01     −0.03      −0.09*    −0.08*     −0.08*    −0.08*     −0.08*    −0.08*
                          (0.07)    (0.06)     (0.03)    (0.03)     (0.03)    (0.03)     (0.04)    (0.03)
Hispanic                  −0.10     −0.08      −0.16*    −0.13*     −0.10*    −0.11      −0.09*    −0.15*
                          (0.10)    (0.08)     (0.04)    (0.04)     (0.05)    (0.06)     (0.07)    (0.07)
Other                     −0.06     −0.06      −0.08     −0.01      0.00      0.05       −0.03     −0.01
                          (0.12)    (0.12)     (0.05)    (0.04)     (0.07)    (0.06)     (0.05)    (0.05)
Male                      0.04      −0.15*     0.00      −0.11*     0.00      −0.12*     0.01      −0.09*
                          (0.04)    (0.05)     (0.01)    (0.02)     (0.02)    (0.02)     (0.02)    (0.02)
Mover                     −0.12*    −0.15*     −0.06*    −0.06*     −0.07*    −0.08      −0.03     −0.04
                          (0.04)    (0.05)     (0.02)    (0.02)     (0.03)    (0.03)     (0.02)    (0.02)
Prior Math Z-Score        0.48*     0.25*      0.43*     0.21*      0.46*     0.21*      0.40*     0.20*
                          (0.03)    (0.03)     (0.01)    (0.01)     (0.02)    (0.02)     (0.02)    (0.01)
Prior Reading Z-Score     0.28*     0.53*      0.23*     0.50*      0.20*     0.50*      0.25*     0.50*
                          (0.03)    (0.02)     (0.01)    (0.01)     (0.02)    (0.02)     (0.01)    (0.02)
Constant                  0.09      −0.06      0.32*     0.47*      0.28      0.59*      0.18      0.49*
                          (0.42)    (0.33)     (0.12)    (0.12)     (0.14)    (0.20)     (0.12)    (0.13)
Grade-Year Fixed Effect   YES       YES        YES       YES        YES       YES        YES       YES
OBS                       1,318     1,335      8,735     8,744      4,165     4,166      6,708     6,707
R-Squared                 0.54      0.57       0.42      0.46       0.42      0.46       0.39      0.43

Notes: Robust standard errors are in parentheses.

*Statistically significant at the 5% level.

We also examine whether there is any evidence that schools affiliated with particular authorizer types produce differential effects for students of different racial/ethnic groups. We use the model expressed in equation 1 on the matched sample of comparison students, adding an interaction term between racial/ethnic status and charter authorizer. The results are shown in table 8. Not surprisingly, given that the majority of charter students are African American, the results for African American students are broadly consistent with the overall results across the authorizers. We observe negative achievement effects in reading and math for African American students attending nonprofit-authorized charter schools, and no significant effects for African American students in charter schools affiliated with the other authorizers. For white and Hispanic students, the estimates are generally imprecise (most likely because of smaller samples) and statistically insignificant. The only statistically significant effect is for white students attending ESC-authorized schools, where the estimated effect in reading is negative.

Table 8.
Matching Results by Race

                          District Analysis    ESC Analysis         Nonprofit Analysis   State Analysis
Variable                  Math      Reading    Math      Reading    Math      Reading    Math      Reading
District*Black            0.01      0.05
                          (0.08)    (0.08)
District*Hispanic         0.08      0.29
                          (0.16)    (0.16)
District*White            0.11      0.08
                          (0.07)    (0.06)
ESC*Black                                      0.02      0.02
                                               (0.04)    (0.03)
ESC*Hispanic                                   0.09      0.02
                                               (0.08)    (0.06)
ESC*White                                      −0.06     −0.09*
                                               (0.04)    (0.03)
Nonprofit*Black                                                     −0.11*    −0.11*
                                                                    (0.03)    (0.03)
Nonprofit*Hispanic                                                  −0.11     −0.07
                                                                    (0.08)    (0.07)
Nonprofit*White                                                     −0.03     −0.04
                                                                    (0.05)    (0.04)
State*Black                                                                              0.06      0.04
                                                                                         (0.04)    (0.04)
State*Hispanic                                                                           −0.03     0.01
                                                                                         (0.09)    (0.06)
State*White                                                                              −0.02     0.04
                                                                                         (0.06)    (0.05)
Observable Student        YES       YES        YES       YES        YES       YES        YES       YES
Characteristics
Grade-Year Fixed Effects  YES       YES        YES       YES        YES       YES        YES       YES
OBS                       2,906     2,921      13,734    19,322     8,522     8,513      7,126     7,120
R-Squared                 0.56      0.57       0.45      0.50       0.44      0.48       0.41      0.46

Notes: Robust standard errors are in parentheses.

*Statistically significant at the 5% level.

We also examine whether there is differential variance in school performance across the authorizers. We use the same matched samples of students used for the primary analysis reported in table 6 for each authorizer type, but now include a school fixed effect in the model. After estimating the model, we recover the individual school effects, permitting us to compare the standard deviation of the school effects within each authorizer type. A concern with this approach is that we may observe greater variance for some authorizers simply because they have smaller schools that are more susceptible to noisy estimates. Therefore, when examining the standard deviations for each authorizer type, we weight by school size, with small schools receiving small weights and large schools receiving large weights. We apply the same procedure to all noncharter schools in the state to create a point of reference. The results are presented in table 9.
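The weighting step amounts to an enrollment-weighted standard deviation of the recovered school effects. A minimal sketch, assuming a frame of recovered school effects with illustrative column names:

```python
import numpy as np
import pandas as pd

def weighted_sd(effects: pd.Series, weights: pd.Series) -> float:
    """Size-weighted standard deviation, so small schools with noisier
    estimated effects contribute little to the dispersion measure."""
    mean = np.average(effects, weights=weights)
    return float(np.sqrt(np.average((effects - mean) ** 2, weights=weights)))

# school_effects: one row per school with its recovered fixed effect and the
# number of tested students (column names are illustrative assumptions).
by_type = school_effects.groupby("authorizer_type").apply(
    lambda g: weighted_sd(g["effect"], g["n_students"])
)
```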

Table 9.
Comparisons of the Standard Deviations of School Effects by Authorizer

          District          ESC               Nonprofit         State             Statewide
          Authorized        Authorized        Authorized        Authorized        Non-Charter Schools
          Math    Reading   Math    Reading   Math    Reading   Math    Reading   Math    Reading
          0.29    0.29      0.29    0.21      0.29    0.28      0.25    0.23      0.14    0.11

The results suggest that each of the authorizer types has greater variance in school performance than is evident among TPSs. Among the authorizers, district-, ESC-, and nonprofit-authorized schools generally show greater variance than state-authorized schools, but even state-authorized schools vary far more than TPSs. Authorizer type therefore cannot completely explain why charter schools vary more than traditional public schools. As for the schools with the weakest performance in our primary analysis—nonprofit-authorized charter schools—they exhibit as much variance as the schools of any other authorizer type.

6.  Discussion and Conclusions

As other research (e.g., Hanushek et al. 2007; CREDO 2009; Zimmer et al. 2009) has shown, charter schools vary widely in academic performance. Authorizer type is only one factor among many that may contribute to this variation, and it is surely not the most important one; high and low performers are likely to exist among the schools authorized by each type of authorizer. Nonetheless, we find that students attending Ohio charters originally authorized by nonprofit organizations experience, on average, lower achievement gains in both math and reading than students in other charter schools. From a parent's perspective, a charter school's authorizer type thus provides some signal of the school's likely effectiveness. From a policy perspective, these results warrant further investigation into whether nonprofit authorizers simply attract less-effective schools or instead provide weaker oversight and less appropriate support, as some commentators have suggested (Palmer et al. 2006; Ryan and Partin 2008). Even in the absence of strong causal evidence, the existence of the correlation suggests that policy makers may want to take a closer look at the oversight and support that authorizers provide to charter schools.

Notes

1. For a recent literature review of charter schools, see Betts and Tang (2011).

2. To be sure, some authorizers may be reluctant to authorize schools solely for the fee, as this could be detrimental to the organization's reputation.

3. In Ohio, charter schools are called community schools. To be consistent with the general literature, we refer to these schools as charter schools.

4. Despite this flexibility, CER gave the state only a C rating overall because of other constraints the state puts on charter schools, including the number of schools allowed, autonomy of operation, and funding. Full report at: www.edreform.com/2012/01/26/2011-chartter-school-laws-from-across-the-states-rankings/.

5. Recently, Ohio has reversed its policy on State Board sponsorship and expects the ODE to sponsor up to twenty charter schools per year beginning with the 2012–13 academic year. The ODE plans to create a separate entity, The Office of School Sponsorship, to handle all authorizer responsibilities. More information can be found at www.ode.state.oh.us/GD/Templates/Pages/ODE/ODEDetail.aspx?page=3&TopicRelationID=662&ContentID=112236&Content=141961.

6. Educational service centers are county-based organizations that act much like intermediate districts in other states, providing professional development programs and other services, including special education services. Although much of their financial support comes from state and federal sources, a substantial portion of their budget is derived from fees for the services they provide. More information on ESCs is available at: www.oesca.org/vnews/display.v/ART/47bb7a71896f5.

7. However, entities authorizing charter schools prior to 2003 were excluded from this new legislation and grandfathered as sponsors of charter schools.

8. Personal communication, Steve Tate, Ohio Department of Education, 1 April 2011.

9. Most students were tested in the spring, but a small portion were tested in the fall. Because including both would have produced inconsistent testing intervals across students, we eliminated the fall tests from our data set.

10. In cases where a common vertically equated test is not available across grades, or an administered test within a district or state changes over time, researchers often standardize test scores into a common metric (Kirby et al. 2002; Gill et al. 2005). For examples of papers that use these conversions, see Peterson and Chingos (2009) and de la Torre et al. (2012). As noted in Kirby et al. (2002), however, meaningfully interpreting the z scores requires an assumption that the underlying distribution of the test scores is normal, which may not always be the case. As an alternative, Kirby et al. suggest that researchers use a "rank-based" test score. This approach ranks all scores from smallest to largest and then divides each rank by n + 1 to obtain values between 0 and 1. The inverse of the Gaussian cumulative distribution function can then be applied to these ranks to obtain values analogous to z scores while avoiding the assumption that the underlying distribution is normal.
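For concreteness, a minimal Python sketch of this rank-based transformation; the helper name and example scores are ours, not from Kirby et al. (2002).

```python
import numpy as np
from scipy import stats

def rank_based_z(scores: np.ndarray) -> np.ndarray:
    """Map raw scores to normal quantiles via their ranks."""
    n = len(scores)
    ranks = stats.rankdata(scores) / (n + 1)  # ranks scaled into (0, 1)
    return stats.norm.ppf(ranks)              # inverse Gaussian CDF of the ranks

scores = np.array([410.0, 455.0, 455.0, 502.0, 530.0, 611.0])
print(rank_based_z(scores))                   # z-score-like values by construction
```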

11. In some years, the correlation was as high as 0.92 for reading. To put this in perspective, the correlation across years for scaled scores ranges from 0.70 to 0.75.

13. This does not directly address the concern about external validity (i.e., students transferring into charter schools may differ from those who enroll in charters beginning in kindergarten). But we see no reason to believe that differences between transfer students and kindergarten entrants vary systematically across charter schools authorized by different authorizers. The fact that we are matching charter treatment students to charter comparison students is therefore likely to minimize the external validity problem.

14. One benefit of this approach is that, unlike a treatment-on-the-treated analysis, we do not assume that the benefits of attending a charter school end immediately when the student exits the school.

15. Unfortunately, the state did not provide data on the free and reduced lunch (FRL) status of students because of privacy concerns. Therefore, we could not match students on their poverty status using FRL. The improvement in matches from FRL data may be minimal, however, given that we already match on race/ethnicity and prior test scores, two variables highly correlated with FRL status. We also lack data on students' home addresses, so we cannot assess the extent to which students faced differences in local choice sets.

16. To produce the propensity scores, we use Stata's psmatch2 procedure with the logit option, which uses a logistic regression.

17. We chose a no-replacement approach so that we did not rely too heavily on individual comparison students serving as matches for multiple treatment students.

18. As an additional sensitivity analysis, we applied a common support restriction for each treatment, dropping any observation whose propensity score was smaller than the minimum or larger than the maximum in the opposite group. This dropped less than 1 percent of observations, with very similar results.
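Taken together, notes 16–18 describe the matching pipeline. The sketch below is an illustrative Python analogue of that logic, under stated assumptions about column names; the authors' actual implementation used Stata's psmatch2.

```python
import pandas as pd
import statsmodels.api as sm

def match_without_replacement(df: pd.DataFrame, covariates: list) -> pd.DataFrame:
    # Note 16: propensity scores from a logistic regression of treatment status
    # (hypothetical "treated" column) on the matching covariates.
    X = sm.add_constant(df[covariates])
    df = df.assign(pscore=sm.Logit(df["treated"], X).fit(disp=0).predict(X))
    treated = df[df["treated"] == 1]
    controls = df[df["treated"] == 0]

    # Note 18: common support -- drop treated units whose score falls outside
    # the control group's range (the symmetric trim is omitted for brevity).
    treated = treated[treated["pscore"].between(controls["pscore"].min(),
                                                controls["pscore"].max())]

    # Note 17: nearest-neighbor matching without replacement, so no comparison
    # student serves as the match for multiple treatment students.
    matches, used = [], []
    for i, p in treated["pscore"].items():
        gaps = (controls["pscore"] - p).abs().drop(index=used)
        if gaps.empty:
            break
        j = gaps.idxmin()
        used.append(j)
        matches.append((i, j))
    return pd.DataFrame(matches, columns=["treated_id", "control_id"])
```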

19. We also compare the observable student characteristics at the 25th and 75th percentiles of the distribution. Again, the student characteristics are similar across treatment and control groups.

20. As a sensitivity analysis, we examined the achievement differences without covariate adjustments, which resulted in the same substantive conclusions but with larger standard errors.

21. As noted previously, the analysis for each type of authorizer is run in a separate model; therefore, each model includes a dummy variable for a particular type of authorizer.

Acknowledgments

The work reported here was partially supported by the Bill and Melinda Gates Foundation and by a Pre-Doctoral Training Grant from the Institute of Education Sciences, U.S. Department of Education (award no. R305B090011) to Michigan State University. The opinions expressed here are those of the authors and do not represent the views of the Bill and Melinda Gates Foundation or the U.S. Department of Education.

REFERENCES

Abdulkadiroglu, Atila, Joshua D. Angrist, Sarah Cohodes, Sue Dynarski, Jon Fullerton, Thomas Kane, and Parag Pathak. 2009. Informing the debate: Comparing Boston's charter, pilot and traditional schools. Boston, MA: The Boston Foundation.
Anderson, Lee, and Kara Finnigan. 2001. Charter school authorizers and charter school accountability. Paper presented at the Annual Meeting of the American Education Research Association, Seattle, WA, April.
Angrist, Joshua D., Sue Dynarski, Thomas J. Kane, Parag Pathak, and Christopher R. Walters. 2011. Who benefits from KIPP? NBER Working Paper No. 15740.
Ballou, Dale, Bettie Teasley, and Tim Zeidner. 2007. Charter schools in Idaho. In Charter school outcomes, edited by Mark Berends, Matthew G. Springer, and Herbert J. Walberg, pp. 221–241. New York: L. Erlbaum Associates.
Betts, Julian R., Y. Emily Tang, and Andrew C. Zau. 2006. Madness in the method? A critical analysis of popular methods of estimating the effect of charter schools on student achievement. Paper presented at the Annual Meetings of the American Educational Research Association, Chicago, April.
Betts, Julian R., and Y. Emily Tang. 2008. Value-added and experimental studies of the effect of charter schools on student achievement: A literature review. Bothell, WA: National Charter School Research Project, Center on Reinventing Public Education.
Betts, Julian R., and Y. Emily Tang. 2011. The effect of charter schools on student achievement: A meta-analysis of the literature. Bothell, WA: National Charter School Research Project, Center on Reinventing Public Education.
Bifulco, Robert. 2010. Can propensity score analysis replicate estimation based on random assignment in evaluation of school choice? A within-study comparison. Working paper, Syracuse University.
Bifulco, Robert, and Helen F. Ladd. 2006. The impacts of charter schools on student achievement: Evidence from North Carolina. Education Finance and Policy 1(1): 50–90. doi:10.1162/edfp.2006.1.1.50
Booker, Kevin, Scott M. Gilpatric, Timothy Gronberg, and Dennis Jansen. 2007. The impact of charter school student attendance on student performance. Journal of Public Economics 91(5–6): 849–876. doi:10.1016/j.jpubeco.2006.09.011
Booker, Kevin, Tim R. Sass, Brian Gill, and Ron Zimmer. 2011. The effects of charter high schools on educational attainment. Journal of Labor Economics 29(2): 377–415. doi:10.1086/658089
Buddin, Richard, and Ron Zimmer. 2005. Student achievement in charter schools: A complex picture. Journal of Policy Analysis and Management 24(2): 351–371. doi:10.1002/pam.20093
Bulkley, Katrina. 2001. Educational performance and charter school authorizers: The accountability bind. Education Policy Analysis Archives 9(37): 1–22.
Carlson, Deven, Lesley Lavery, and John Witte. 2012. Charter school authorizers and student achievement. Economics of Education Review 31(2): 254–267. doi:10.1016/j.econedurev.2011.03.008
Cook, Thomas D., William R. Shadish, and Vivian C. Wong. 2008. Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons. Journal of Policy Analysis and Management 27(4): 724–750. doi:10.1002/pam.20375
Consoleti, Alison. 2011. The state of charter schools: What we know—and what we don't—about performance and accountability. Washington, DC: Center for Education Reform.
Center for Research on Education Outcomes (CREDO). 2009. Multiple choice: Charter school performance in 16 states. Stanford, CA: CREDO.
de la Torre, Marissa, Elaine Allensworth, Sanja Jagesic, James Sebastian, Malcolm Salmonowicz, Coby Meyers, and R. Dean Gerdeman. 2012. Turning around low-performing schools in Chicago: Summary report. Chicago: University of Chicago Consortium on Chicago School Research.
Doyle, William R. 2009. The effect of community college on bachelor's degree completion. Economics of Education Review 28(2): 199–206. doi:10.1016/j.econedurev.2008.01.006
Fryer, Roland. 2011. Injecting successful charter school strategies into traditional public schools: Early results from an experiment in Houston. NBER Working Paper No. 17494.
Furgeson, Joshua, Brian Gill, Joshua Haimson, Alexandra Killewald, Moira McCullough, Ira Nichols-Barrer, Bing-ru Teh, et al. 2012. Charter school management organizations: Diverse strategies and diverse student impacts. Washington, DC: Mathematica Policy Research.
Gill, Brian, Laura Hamilton, J. R. Lockwood, Julie Marsh, Ron W. Zimmer, Deanna Hill, and Shana Pribesh. 2005. Inspiration, perspiration, and time: Operations and achievement in Edison schools. Santa Monica, CA: RAND Corporation.
Gleason, Philip, Melissa Clark, Christina C. Tuttle, and Emily Dwoyer. 2010. The evaluation of charter school impacts: Final report (NCEE 2010-4029). Washington, DC: U.S. Department of Education, NCEE. doi:10.1037/e598992011-001
Hanushek, Eric, John F. Kain, and Steven G. Rivkin. 2004. Disruption versus Tiebout improvement: The costs and benefits of switching schools. Journal of Public Economics 88(9–10): 1721–1746. doi:10.1016/S0047-2727(03)00063-X
Hanushek, Eric, John F. Kain, Steven G. Rivkin, and Gregory F. Branch. 2007. Charter school quality and parental decision making with school choice. Journal of Public Economics 91(5–6): 823–848. doi:10.1016/j.jpubeco.2006.09.014
Heckman, James J., Hidehiko Ichimura, and Petra Todd. 1998. Matching as an econometric evaluation estimator. Review of Economic Studies 65(2): 261–294. doi:10.1111/1467-937X.00044
Ho, Daniel E., Kosuke Imai, Gary King, and Elizabeth A. Stuart. 2007. Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Political Analysis 15: 199–236. doi:10.1093/pan/mpl013
Hoxby, Caroline M. 2009. A statistical mistake in the CREDO study of charter schools. Working paper, Stanford University.
Hoxby, Caroline M., and Jonah E. Rockoff. 2004. The impact of charter schools on student achievement. Unpublished paper, Harvard University.
Hoxby, Caroline M., and Sonali Murarka. 2006. Methods of assessing the achievement of students in charter schools. Paper presented at the National Conference on Charter School Research, Vanderbilt University, September.
Hoxby, Caroline M., Sonali Murarka, and Jenny Kang. 2009. Charter schools in New York City: Who enrolls and how they affect their students' achievement. NBER Working Paper No. 14852.
Imberman, Scott. 2011. Achievement and behavior in charter schools: Drawing a more complete picture. Review of Economics and Statistics 93(2): 416–435. doi:10.1162/REST_a_00077
Kirby, Sheila N., Daniel F. McCaffrey, J. R. Lockwood, Jennifer S. McCombs, Scott Naftel, and Heather Barney. 2002. Using state school accountability data to evaluate federal programs: A long uphill road. Peabody Journal of Education 77(4): 122–145. doi:10.1207/S15327930PJE7704_6
National Alliance for Public Charter Schools (NAPCS). 2013. Public charter schools dashboard. Washington, DC: NAPCS. Available http://dashboard.publiccharters.org/dashboard/policy/page/auth/year/2009. Accessed 4 May 2013.
Office of Community Schools (OCS). 2010. Community schools operations annual timeline, FY 2011. Available www.ercoinc.org/newsletter/documents/fy11communityschoolannualtimeline.pdf. Accessed 13 May 2013.
Ohio State Department of Education (ODE). 2007. Community school legislative history. Columbus, OH: ODE.
Palmer, Louann B. 2007. The potential of "alternative" charter school authorizers. Phi Delta Kappan 89(4): 304–309.
Palmer, Louann B., Michelle G. Terrell, Bryan C. Hassel, and C. Peter Svahn. 2006. Turning the corner to quality: Policy guidelines for strengthening Ohio's charter schools. Washington, DC: Thomas B. Fordham Institute.
Peterson, Paul E., and Matthew M. Chingos. 2009. For-profit and nonprofit management in Philadelphia schools. Education Next 9(2): 64–70.
Rosenbaum, Paul R., and Donald B. Rubin. 1983. The central role of the propensity score in observational studies for causal effects. Biometrika 70(1): 41–50. doi:10.1093/biomet/70.1.41
Rubin, Donald B. 1977. Assignment to treatment group on the basis of a covariate. Journal of Educational Statistics 2(1): 1–26. doi:10.2307/1164933
Ryan, Terry, and Emmy L. Partin. 2008. Accelerating student learning in Ohio: Five policy recommendations for strengthening public education in the Buckeye state. Dayton, OH: Thomas B. Fordham Institute.
Sass, Tim. 2006. Charter schools and student achievement in Florida. Education Finance and Policy 1(1): 91–122. doi:10.1162/edfp.2006.1.1.91
Smith, Jeffrey A., and Petra E. Todd. 2001. Reconciling conflicting evidence on the performance of propensity-score matching methods. American Economic Review 91(2): 112–118. doi:10.1257/aer.91.2.112
Tuttle, Christina, Bing-ru Teh, Ira Nichols-Barrer, Brian P. Gill, and Philip Gleason. 2010. Student characteristics and achievement in 22 KIPP middle schools. Washington, DC: Mathematica Policy Research.
Vergari, Sandra. 2001. Charter school authorizers, public agents for holding charter schools accountable. Education and Urban Society 33(2): 129–140. doi:10.1177/0013124501332003
Witte, John F., David L. Weimer, Arnold Shober, and Paul Schlomer. 2007. The performance of charter schools in Wisconsin. Journal of Policy Analysis and Management 26(3): 557–573. doi:10.1002/pam.20265
Zimmer, Ron, Richard Buddin, Derrick Chau, Glenn Daley, Brian Gill, Cassandra Guarino, Laura Hamilton, et al. 2003. Charter school operations and performance: Evidence from California. Santa Monica, CA: RAND Corporation. doi:10.1037/e527072012-001
Zimmer, Ron, and Richard Buddin. 2006. Charter school performance in two large urban districts. Journal of Urban Economics 60(2): 307–326. doi:10.1016/j.jue.2006.03.003
Zimmer, Ron, Brian Gill, Kevin Booker, Stefan Lavertu, Tim Sass, and John Witte. 2009. Charter schools in eight states: Effects on achievement, attainment, integration, and competition. Santa Monica, CA: RAND Corporation. doi:10.1037/e530832010-001
Zimmer, Ron, Brian Gill, Kevin Booker, Stefan Lavertu, and John Witte. 2012. Examining charter student achievement effects across seven states. Economics of Education Review 31(2): 213–224. doi:10.1016/j.econedurev.2011.05.005

Appendix

Although a fixed effect approach for estimating the achievement effects of various charter authorizers raises both internal and external validity concerns, it has been widely used by researchers to estimate charter effects (Zimmer et al. 2003; Hanushek, Kain, and Rivkin 2004; Zimmer and Buddin 2006; Sass 2006; Bifulco and Ladd 2006; Booker et al. 2007; Zimmer et al. 2009; Imberman 2011; Zimmer et al. 2012). It is therefore prudent to include the fixed effect approach used by these previous researchers as a sensitivity analysis. We should note that we excluded virtual school students from the analysis but otherwise included the full sample of K–8 charter schools and TPSs across Ohio.

In the model, we used annual gains in math and reading test scores as the dependent variable. The formal model is specified as:

$$A_{jt} - A_{j,t-1} = \delta' C_{jt} + \lambda\, Mob_{jt} + \mu_j + \theta_{gt} + \nu_{jt} \tag{A.1}$$

where $A_{jt} - A_{j,t-1}$ is a measure of the achievement gain of the $j$th student in the $t$th year; $C_{jt}$ is a vector of dummy variables indicating whether student $j$ attended each type of authorized charter school in the $t$th year, with coefficient vector $\delta$; $Mob_{jt}$ is an indicator of whether student $j$ transferred to a new school in the $t$th year; $\mu_j$ captures the individual student fixed effect; $\theta_{gt}$ captures the grade-by-year fixed effect; and $\nu_{jt}$ is the random disturbance term. As in the main analysis, we compute Huber–White "sandwich" standard errors by clustering student observations on school identifiers.
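A schematic estimation of equation A.1 is sketched below. The file and column names and the use of the linearmodels package are our assumptions for illustration; this is not the authors' actual workflow.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("ohio_panel.csv")                      # hypothetical student-year panel
df = df.sort_values(["student_id", "year"])
df["gain"] = df.groupby("student_id")["math_z"].diff()  # A_jt - A_j,t-1
df["grade_year"] = df["grade"].astype(str) + "-" + df["year"].astype(str)
df = df.dropna(subset=["gain"]).set_index(["student_id", "year"])

# EntityEffects absorbs the student fixed effect mu_j; C(grade_year) supplies
# the grade-by-year effects theta_gt; the authorizer dummies play the role of
# C_jt and "mover" the role of Mob_jt.
mod = PanelOLS.from_formula(
    "gain ~ district + esc + nonprofit + state + mover"
    " + C(grade_year) + EntityEffects",
    data=df,
)
res = mod.fit(cov_type="clustered", clusters=df["school_id"])  # school-clustered SEs
print(res.summary)
```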

As in our main analysis, all test scores have been normalized by grade and year to a mean of zero and a standard deviation of one. Also as in the main analysis, we assigned each school its original authorizer when creating the authorizer dummy variables; as a result, this can be viewed as an intent-to-treat analysis. Table A.1 shows the results. We display the coefficient estimates for the four types of authorized schools, which are identified by the changing performance of students who move between charter schools and traditional public schools.
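For illustration, the grade-by-year standardization can be computed as follows (hypothetical column names; a sketch, not the authors' code):

```python
import pandas as pd

df = pd.read_csv("ohio_panel.csv")  # hypothetical frame of raw scale scores
# Z-score within each grade-by-year cell: mean zero, standard deviation one.
df["math_z"] = (df.groupby(["grade", "year"])["math_scale"]
                  .transform(lambda s: (s - s.mean()) / s.std()))
```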

The substantive results for the nonprofit-authorized schools hold: the math and reading achievement estimates are negative and statistically significant. In addition, the reading analysis yields a significant positive coefficient for state-authorized schools. Although this coefficient was not statistically significant in our main analysis, the estimate has the same sign. All other estimates are statistically indistinguishable from zero. Overall, this suggests that the nonprofit-authorizer result from our matching strategy is robust to the fixed effect specification.

Table A.1.
Fixed Effect Sensitivity Analysis

Variables                    Math          Reading
District                     0.02          0.10
                             (0.09)        (0.06)
ESC                          −0.05         −0.06
                             (0.05)        (0.04)
Nonprofit                    −0.18*        −0.10*
                             (0.04)        (0.03)
State                        0.09          0.11*
                             (0.06)        (0.04)
Mover                        −0.05*        −0.05*
                             (0.01)        (0.01)
Grade-Year Fixed Effect      YES           YES
Observations                 1,510,242     1,601,126
R-squared                    0.46          0.34

Notes: Robust standard errors are in parentheses.

*Statistically significant at the 5% level.