A defining characteristic of charter schools is that they introduce a strong market element into public education. In this paper, we examine through the lens of a market model the evolution of the charter school sector in North Carolina between 1999 and 2012. We examine trends in the mix of students enrolled in charter schools, the racial imbalance of charter schools, patterns in student match quality by schools’ racial composition, and the distributions of test score performance gains compared to those in traditional public schools. In addition, we use student fixed effects models to examine plausibly causal measures of charter school effectiveness. Our findings indicate that charter schools in North Carolina are increasingly serving the interests of relatively able white students in racially imbalanced schools and that despite improvements in the charter school sector over time, charter schools are still no more effective on average than traditional public schools.

Since their birth in Minnesota in the early 1990s, charter schools have represented a growing component of the K–12 education policy landscape in many U.S. states. Such schools share features of both public and private schools. In return for public funding they are not allowed to charge tuition, must admit students through a lottery if they are oversubscribed, and are subject to the same accountability standards as regular public schools. They are similar to private schools in that they are operated by nongovernmental entities, are relieved of many restrictions on their use of inputs, and are schools of choice in that no students are assigned to attend them. In contrast to private schools, they require a charter to operate that is subject to periodic state-mandated review. With its 107 charter schools in 2013, North Carolina—the subject of this study—ranked fourteenth among the states in terms of the size of the charter sector.

A defining characteristic of charter schools is that they introduce a strong market element into public education. In a standard private sector market with differentiated products, new firms enter a market in response to profit-making opportunities. Over time, the firms that are well-run and that satisfy consumer preferences will tend to expand or be replicated while those that are less successful—perhaps because they are not well-managed or because they misjudge the nature of the demand—will not attract and keep customers, and consequently will go out of business. As a result of this Darwinian process of entry and exit, one would predict the firms that survive in a newly established market space would, as a group, become more successful over time in delivering what customers want. If we apply this reasoning to the market for schooling, more choice should produce a better educational fit between what schools offer and what parents want for their children, and should ultimately lead to better outcomes, at least in terms of the outcome most widely valued and readily measured—standardized test scores.

In this paper, we examine the extent to which a similar market process has emerged in the charter school sector in North Carolina, an examination that builds on similar research in Texas by Baude et al. (2014). In doing so, we take careful note of some significant features of the charter school sector. For example, although entry into the sector is far freer for charter schools than for traditional public schools, entry is still constrained by the requirement that each charter school must first obtain a charter from the state. Thus, state policies toward charters, including the procedures of the chartering agency or agencies, are likely to influence how the market evolves. Another relevant feature—one that emerges clearly from extensive prior research on parental choice of school in a variety of contexts and one that applies to traditional public schools as well—is that in choosing schools, parents care not only about the quality of the education being offered but also about the mix of students in a school (e.g., Bifulco and Ladd 2007). This feature means the market will be subject to externalities in the sense that the decisions of some parents affect the value of schooling available to others. In addition, this feature and other considerations are likely to affect who benefits and who loses from the introduction of market forces in education. Because schooling is so important to the life chances of children, this attention to winners and losers has considerable policy significance. Thus an evaluation of the charter school sector requires close attention to whether the charter school sector serves the needs of those who are not well served by the traditional public school sector.

In section 2 we describe the changing charter school context in North Carolina. In section 3 we explore the supply and demand for charter schools serving different groups of students. In section 4 we use value-added models to compare the distribution of test score gains in charter schools to the distribution of gains for comparable students in the larger public school sector, and in sections 5 and 6 we explore possible explanations for those trends. Although most of the analysis in the paper is descriptive, we provide some plausibly causal estimates of charter school effects in section 7.1

When Republicans in North Carolina first raised the possibility of charter school legislation in the mid-1990s, advocates for poor and minority students were strongly opposed. This opposition may have reflected the state's historical experience with school choice during the 1960s, when “freedom academies” were established to provide a way for white students to avoid integrated schools (Myers 2004). Eventually, Democrats agreed to support charter school enabling legislation out of concern that the alternative form of choice being pushed by Republicans—vouchers for private schools—would be even more detrimental to the interests of disadvantaged students.2 Thus the state's 1996 legislation enabling charter schools was a compromise solution to the politically contentious issue of parental choice and its racial implications. This brief historical context helps to explain the significant differences between the charter school movement in southern states such as North Carolina and northern states such as Massachusetts and Michigan, where debates about charters focused less on race and more on unions, accountability, and the use of charters as a mechanism for helping the poor (Fuller, Elmore, and Orfield 1996; Bettinger 2005).

The original North Carolina legislation specified goals related to student learning, teachers, and parents. The first two goals were to improve student learning and to increase learning opportunities, especially for students at risk of academic failure or the academically gifted. The next two were to encourage new teaching methods and to create new professional opportunities for teachers. Another was to give parents more choice of schooling options. Finally, the legislation made it clear that charters were to be held accountable for measurable student achievement results. Of primary interest for this study are the role of parental choice and the focus on measurable student learning.

The original legislation explicitly stated that charter schools could not discriminate on the basis of race or ethnicity. In addition, the legislation stated that within a year of opening, the population of the school “shall reasonably reflect” the racial and ethnic mix of the community in which it is located.3 Further, the State Board of Education was encouraged to give preference to applications that demonstrated the capability of serving students at risk of academic failure. It also required charter school operators to develop a transportation plan so that transportation would not be a barrier to any student who lived within the district of the charter school. In return for public funding, the charter schools were not allowed to charge tuition or fees, but were permitted to raise funds from other sources.

The 1996 law gave state officials somewhat more control over the establishment of charter schools than was the case in other states (Bifulco and Ladd 2006, 2007). The ultimate authorizing power for all charters was given to the State Board of Education, with the number of charter schools statewide capped at one hundred. In addition, no more than five charters could be authorized in a single district in any given year. In other ways the legislation was relatively permissive in that it allowed any group or individual to apply for a charter and to operate as an independent nonprofit corporation with far more autonomy over the use of for-profit management companies and personnel policies than was the case for traditional public schools.

Despite significant growth in the state's population and periodic efforts by charter proponents to increase the number of charter schools, the cap remained at one hundred until 2011. In 2010, the state secured $400 million in federal Race to the Top funds by promising, among other things, to raise the charter school cap. In 2011, a Republican-controlled legislature completely eliminated the cap and made it possible for existing charters to expand their enrollments by 20 percent per year without approval from the State Board. Two years later, the legislature loosened the regulations further and created a more charter-friendly environment by creating a new North Carolina Charter Schools Advisory Board, made up of charter supporters. Specifically, the legislation stated that, “All appointed members of the Advisory Board shall have demonstrated an understanding of and a commitment to charter schools as a strategy for strengthening public education.”4

Figure 1 shows the growth in the state's charter schools over time. Thirty-three charter schools were operating in the 1997–98 school year (all academic years in the figure and throughout the paper are designated by the final year). The number had risen to ninety-seven by 2005, through a combination of new entrants and a few exits, and to the maximum allowed, one hundred, by 2012. After the cap was removed the number of charters in the state grew rapidly—nine new charter schools opened for the 2012–13 school year and twenty-three more for the 2013–14 school year, although five schools also closed during that period. Another twenty-six schools were granted charters for the 2014–15 school year. The arrival of seventy-one new applications for the 2015–16 school year—shown by the dashed line in the figure—created great concern among charter school skeptics. Counter to the hopes of charter school proponents, however, the charter school advisory board in 2014 recommended that only eleven of them be granted a charter.5
Figure 1. Growth in North Carolina Charter Schools (1998–2016).

As of 2014, charter school students accounted for 3.6 percent of all public school students in the state, with the percentage of K–8 students (4.2 percent) being twice that of ninth to twelfth grade students (2.1 percent). Although the overall percentages are low, they are far higher in some of the urban districts—currently, charter school students account for 15.1 percent of all students in Durham, 4.7 percent in Winston-Salem, 6.1 percent in Charlotte-Mecklenburg, and 4.9 percent in Wake County.

Table 1 compares, over time, the characteristics of students in charter schools with those in traditional public schools.6 The trends in the racial mix of students tell a clear story. In the early years, black students were substantially overrepresented and white students were underrepresented in the charter schools relative to traditional public school enrollments. Over time, however, that pattern changed. The white share of charter school students increased from 58.6 percent to 62.2 percent over the full period, while their share of traditional public school students declined, from 64.1 percent to 53.0 percent. Thus, by 2012, white students were significantly overrepresented in the charter school sector. During this same period, Hispanic students increased from 0.8 percent of charter students to 5.5 percent in 2012, still well below their 13.5 percent share of traditional public school students. Combining these two minority groups, we find that, as of 2012, charter schools served a disproportionately small number of minority students; black and Hispanic students accounted for 31.8 percent of the charter school students in this grade range, which was well below their 39.2 percent share of traditional public school students.7

Table 1.
Descriptive Statistics: Charter versus Traditional Public Schools (TPS) (Grades 4–8)

                                     1998                  2005                  2012
                               Charter      TPS      Charter      TPS      Charter      TPS
Number of students               2,232  474,552       11,378  529,166       20,020  533,415
Ethnicity (% students)
  Black                           35.7     29.6         32.6     29.8         26.3     25.7
  Hispanic                         0.8      2.7          2.9      7.5          5.5     13.5
  White                           58.6     64.1         59.2     56.8         62.2     53.0
  Other                            4.8      3.7          5.3      5.9          6.0      7.8
Parent education (% students)
  High school or less             32.2     55.4         32.1     51.6
  Some post high school           19.8     18.5         23.4     21.3
  College graduate +              48.1     55.4         32.1     51.6
Average reading (standardized)
  4th grade                     −0.075    0.000       −0.084    0.002        0.201   −0.007
  5th grade                     −0.029    0.000       −0.034    0.000        0.228   −0.009
  6th grade                      0.153   −0.000        0.072   −0.002        0.242   −0.010
  7th grade                     −0.070    0.000        0.077   −0.002        0.257   −0.010
  8th grade                     −0.495    0.001        0.110   −0.002        0.289   −0.010
Average math (standardized)
  4th grade                     −0.257    0.001       −0.213    0.005        0.031   −0.001
  5th grade                     −0.266    0.001       −0.138    0.003        0.052   −0.002
  6th grade                      0.054   −0.000       −0.038    0.001        0.169   −0.007
  7th grade                     −0.148    0.001       −0.023    0.001        0.195   −0.007
  8th grade                     −0.534    0.001        0.058   −0.001        0.146   −0.005

Notes: Racial composition based on students for whom race is observed. Similarly, percentages for parental education are based on students for whom parental attainment (for the highest-attaining parent) is reported. In the early years, parent education does not have a category for high school +, jumping from high school graduate to trade school/community college. 2008 was the last year information on parent education was collected. All reading and math scores have been standardized by grade and year to have mean zero and standard deviation of 1.

This marked change in the racial composition of charter schools largely reflects two complementary trends: the closure of charter schools with relatively small proportions of white students and the opening of charter schools with high proportions of such students. To illustrate this uneven turnover of charter schools, we compared the racial composition of charter schools with the racial composition of their surrounding school district. Of the twelve charter schools that closed between 2005 and 2012, for example, all but one had a lower percentage of white students than the white percentage in the corresponding school district. In contrast, of the nineteen charter schools that opened between 2005 and 2012, thirteen had white percentages higher than their corresponding district. Where there was once a sector that included many heavily minority schools, the charter school sector in North Carolina has, over time, become one that includes many more schools with relatively high percentages of white students.

The second panel in table 1 documents that in 1998 and 2005, students whose parents have at least a college degree were overrepresented in charter schools relative to the traditional public sector. In 1998, close to 43 percent of parents with children in a charter school had college degrees, in contrast to only 25.8 percent of those with children in traditional public schools.8 This overrepresentation of students with college-educated parents should not be surprising. Despite the fact that charter schools are often billed as a way to expand options for disadvantaged students, parents must gather information and take the initiative to seek out a charter school, actions that are easier for college-educated parents than for those with limited education.

In the bottom two panels, we characterize the students in the two sectors by their average reading and math scores. All the test scores are normalized across the state by subject and grade by year to have a mean of 0 and a standard deviation of 1. For that reason, the averages for the bulk of students—namely, those in the traditional public schools—are close to zero. With one exception (grade 6 students) the children in charter schools in 1998 were performing at lower levels than their counterparts in traditional public schools. The most obvious interpretation of that pattern is that the initial charter schools attracted low-performing students. Another possibility is that the charter schools were of low quality. By 2012, however, charter school students were outperforming their counterparts in traditional public schools by substantial amounts at all grade levels. That improvement could indicate that higher-achieving students moved into the sector over time. Alternatively, it could indicate a charter school sector that improved over time in its ability to raise students’ achievement. We return to the topic of improvement versus selection in section 6.
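For readers who want to replicate this normalization, a minimal sketch is given below; it standardizes each score within grade-by-year cells. The column names (`year`, `grade`, `read_score`, `math_score`) are hypothetical and the fabricated data are purely illustrative.

```python
import numpy as np
import pandas as pd

def standardize_scores(df: pd.DataFrame, score_cols=("read_score", "math_score")) -> pd.DataFrame:
    """Standardize each score column to mean 0, SD 1 within grade-by-year cells.

    Assumes one row per student with hypothetical columns `year`, `grade`,
    and the raw score columns listed in `score_cols`.
    """
    out = df.copy()
    for col in score_cols:
        grouped = out.groupby(["year", "grade"])[col]
        # z-score within each grade-year cell, so the statewide mean is 0 and the SD is 1
        out[col + "_std"] = (out[col] - grouped.transform("mean")) / grouped.transform("std")
    return out

# Illustrative usage with fabricated data
rng = np.random.default_rng(0)
demo = pd.DataFrame({
    "year": rng.choice([2010, 2011, 2012], size=1000),
    "grade": rng.choice(np.arange(4, 9), size=1000),
    "read_score": rng.normal(350, 10, size=1000),
    "math_score": rng.normal(350, 12, size=1000),
})
print(standardize_scores(demo).filter(like="_std").describe().round(3))
```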

Given the salience of race to the policy discussion surrounding charter schools in North Carolina, we end this descriptive section with information on the racial mix of students within individual charter schools and how that has changed over time. The fact that there are a large number of black and Hispanic students—albeit a declining share of all charter students—in the charter school sector as a whole need not mean that the charter schools themselves are racially balanced. In fact, as shown in figure 2, that is far from the case. The shaded bars represent the patterns in 2014, and the outlined boxes the pattern in 1998. In both periods, most charter schools were racially imbalanced, in that they were either predominantly white (less than 20 percent nonwhite students) or predominantly minority (more than 80 percent nonwhite). In other words, few charter schools had racially balanced student bodies. Over time this racial imbalance has intensified, with the share of students in predominantly white charters nearly doubling, from 24.2 to 47.1 percent. With a declining overall share of minority students in charter schools, the share of students in predominantly minority schools has declined somewhat but has become more concentrated in schools that are more than 90 percent minority. These patterns are strikingly different from the racial mix of students in traditional public schools, shown in figure 2b.
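A minimal sketch of the school-level classification behind these shares appears below, assuming a hypothetical school-level file with columns `enrollment` and `pct_nonwhite`; the 20 percent and 80 percent cutoffs mirror those used in the text.

```python
import pandas as pd

def imbalance_shares(schools: pd.DataFrame) -> pd.Series:
    """Share of charter students enrolled in predominantly white, mixed, and
    predominantly minority schools, using the 20/80 percent nonwhite cutoffs."""
    bins = pd.cut(
        schools["pct_nonwhite"], [0, 20, 80, 100],
        labels=["predominantly white", "mixed", "predominantly minority"],
        include_lowest=True,
    )
    return schools.groupby(bins)["enrollment"].sum() / schools["enrollment"].sum()

# Illustrative usage with fabricated school-level data
toy = pd.DataFrame({"enrollment": [200, 150, 300], "pct_nonwhite": [10, 55, 92]})
print(imbalance_shares(toy).round(3))
```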
Figure 2a. Racial Mix of Charter School Students (1998 and 2014).
Figure 2b. Racial Mix of Traditional School Students (1998 and 2014).

For the early years of North Carolina's charter school program, the racial imbalance in charter schools reflects choices made by both black and white families. Studying fourth through eighth graders who switched into the charter school sector, Bifulco and Ladd (2007, see table 2) documented that students from each racial group gravitated to charter schools containing more of their own group than the school they were leaving. Black students moved out of traditional public schools that were on average 53 percent black to charter schools that averaged 72 percent black; white choosers left schools that were on average 28 percent black in favor of schools that averaged less than 18 percent black. Conditional logit models designed to infer the preferences of charter school choosers confirmed that black and white parents had very different preferences with respect to a school's racial composition. In particular, the preferred mix for black parents was a school that was between 40 and 60 percent black, while the preferred mix for white parents was 20 percent black. Not surprisingly, these preferences are often incompatible with racial balance. Even though black parents might prefer racially balanced schools, the fact that white parents prefer schools with far lower proportions of black students sets up a tipping point. Once a school becomes “too black,” it becomes almost all black as white parents avoid it.

Table 2.
Descriptive Statistics for Estimates of School Gains

                          Math                                           Reading
        Charter           TPS              Sign of       Charter           TPS              Sign of
Year    Mean (SD)         Mean (SD)        Charter-TPS   Mean (SD)         Mean (SD)        Charter-TPS
1999    −0.078 (0.155)    0.001 (0.084)        −         −0.056 (0.112)    0.003 (0.068)        −
2000    −0.080 (0.167)    0.001 (0.088)        −         −0.050 (0.122)    0.001 (0.077)        −
2001    −0.007 (0.179)    0.000 (0.109)        −          0.008 (0.100)    0.000 (0.078)        +
2002    −0.037 (0.144)    0.001 (0.104)        −         −0.018 (0.130)    0.000 (0.075)        −
2003    −0.029 (0.164)    0.000 (0.099)        −          0.023 (0.107)    0.000 (0.071)        +
2004    −0.035 (0.132)    0.001 (0.099)        −          0.016 (0.090)    0.000 (0.072)        +
2005    −0.012 (0.125)    0.000 (0.097)        −          0.007 (0.087)    0.000 (0.068)        +
2006    −0.016 (0.144)    0.000 (0.106)        −          0.042 (0.083)   −0.001 (0.074)        +
2007    −0.025 (0.140)    0.001 (0.110)        −          0.029 (0.089)   −0.001 (0.074)        +
2008    (excluded; see notes)
2009     0.015 (0.109)    0.000 (0.108)        +          0.058 (0.078)   −0.002 (0.063)        +
2010     0.032 (0.115)   −0.001 (0.104)        +          0.066 (0.062)   −0.002 (0.061)        +
2011     0.017 (0.109)    0.001 (0.110)        +          0.058 (0.071)   −0.002 (0.064)        +
2012     0.003 (0.109)    0.000 (0.117)        +          0.049 (0.067)   −0.002 (0.069)        +

Notes: Table 2 provides descriptive statistics for our school value-added estimates. In it we report the mean and the standard deviation of performance (SD in parentheses). These statistics are based on student-level estimates, to correspond with the student-level unit of analysis in our value-added models. Thus, these are analogous to frequency weighted school-level estimates. Nonweighted estimates produce similar results. A negative difference between the charter and the traditional public school (TPS) mean signifies that the charters exhibit lower average gains in test scores than the traditional public schools. 2008 is excluded so as to make the models similar across years; the 2008 data do not have the free or reduced-price lunch variable, a key predictor in our models.

These patterns and trends hint at the major factors influencing the operation of the charter school sector over time: the rising proportion of white children attending charter schools and the fact that many of them are in schools that have low shares of minority students. We return to the implications of these trends in a later section of the paper.

Parents differ in their reasons for enrolling their child in a charter school. For many, the quality or nature of the school's offering, or both, may be the key determinant. For example, some parents may be interested in STEM programs and others in highly structured programs that focus on the basics. Alternatively, or in addition, families may value the mix of students in the school, with white middle class parents likely to value schools with high proportions of other children like themselves and black families looking for racially balanced schools (Bifulco and Ladd 2007).9 And some families, particularly disadvantaged families, may place a high value on the availability of transportation or lunch services, without which the school may be neither accessible nor feasible. In practice, parents are likely to have to make tradeoffs among the things they value.

In light of these differing preferences, charter school suppliers have incentives to target their product toward particular segments of the market, with the segments often defined by the socioeconomic and racial backgrounds of the students. Some charter schools, for example, explicitly target disadvantaged students by offering transportation and lunch services along with a “no excuses” approach to schooling that is far more appealing to economically disadvantaged students than to those from higher-income backgrounds. This approach is characterized by high expectations, a strong disciplinary code of conduct, extended instructional time, and a variety of student supports. The Knowledge Is Power Program (KIPP) charter schools epitomize this approach. At this point, North Carolina has only a few KIPP schools and a small, but unknown, number of other “no excuses” schools. In contrast to many other states, charter management organizations, which typically are the ones that offer such models, are not significant actors in North Carolina. Nonetheless, a review of the mission statements in charter school applications shows that about 25 percent of the charter schools indicated a goal of serving significant proportions of disadvantaged students (Eisen 2014).

In contrast, some charters differentiate their offerings in ways that make them unappealing to disadvantaged students but appealing to advantaged students. Although some of that targeting may be achieved through the nature of their programs, or, importantly, by where they locate, it also occurs in the form of a school's decision not to provide transportation and lunch services, neither of which is required under the state's charter legislation. Although 64 percent of the charter school applicants pledged to provide transportation in their charter applications, only half that number was doing so as of 2011. Likewise, whereas 62 percent of the original charters promised to provide lunch to their students, only two-thirds of those were doing so in 2011 (Eisen and Ladd 2015). Charter schools that do not offer such services are not likely to attract disadvantaged students, many of whom are racial minorities; instead they are likely to attract a disproportionately white middle class group of students.

One outcome of these supplier practices is the racial segregation of the state's charter schools that we documented in the previous section. In light of this racial segregation, the question for us is whether the charters provide a better fit for the children who end up in mainly white schools or for those who end up in mainly black schools. In other words, which segment of the racially segmented market are charters best serving?

We infer the quality of the fit of a school by using information from the North Carolina Education Research Data Center on whether students return to a school the following year (provided the school offers the subsequent grade). This proxy for match quality reflects a combination of parental satisfaction with the school and movement out of the school for other reasons. Some parents, for example, may withdraw their child from a school for reasons unrelated to the perceived quality of the school, such as a job-related move. In addition, schools themselves may either dismiss a child or send a signal that the school is not the right one for the child. Regardless of whether the departure from a school is initiated by the parent or the school, high departure rates most likely signal a poor match between the parents’ educational expectations and the offerings of the school.

We estimate models at the individual level in which the dependent variable, $R_{ist}$, takes on the value 1 if student $i$ in year $t$ returns to school $s$ in year $t+1$ (regardless of whether the student advances to the next grade or is retained in grade) and the value 0 if the student leaves the school. The sample is all students in both charter and traditional public schools in grades 4 through 8, provided the school is open the following year and offers the next grade for that student. The model takes the form shown in equation 1:

$$R_{ist} = \beta_0 + \beta_1 X_{i} + \delta_{gt} + \phi_s + \varepsilon_{ist} \qquad (1)$$
where $X_i$ is a vector of student characteristics, such as the race and gender of the student, free and reduced-price lunch status, math and reading test scores, special needs or gifted status, and the number of years the student has been in the school. Controlling for these student characteristics is important because some types of students are far more likely to transfer among schools than are others. It is well documented, for example, that children from low-income families in urban areas often move from school to school as their families adjust to changing circumstances and housing arrangements. The grade-by-year fixed effects ($\delta_{gt}$) control for differences in average movement rates across grades by year.

The charter school fixed effects ($\phi_s$) are central to this analysis. Their coefficients represent the average rate at which the students in each charter school remain in the school relative to similar students in traditional public schools (the left-out category, captured by the constant in the model). A positive coefficient for a specific charter school signifies that the school is a better match for the child than are the traditional public schools, either as perceived by the parents or as viewed by the school, or some combination of both. A negative coefficient signifies a weaker fit.
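A stripped-down sketch of a linear probability model in this spirit is shown below. The column names and the abbreviated covariate list are hypothetical, the data are fabricated, and the estimation details are simplified relative to whatever we use in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated illustrative data: one row per student-year.
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    # "TPS" marks traditional public school students (the omitted comparison group);
    # charter students carry their school's ID so each charter gets its own fixed effect.
    "charter_id": rng.choice(["TPS"] * 18 + ["CH_A", "CH_B"], size=n),
    "black": rng.integers(0, 2, size=n),
    "frl": rng.integers(0, 2, size=n),
    "lag_read_std": rng.normal(0, 1, size=n),
    "lag_math_std": rng.normal(0, 1, size=n),
    "grade": rng.choice(np.arange(4, 9), size=n),
    "year": rng.choice([2009, 2010, 2011], size=n),
})
df["returned"] = (rng.random(n) < 0.85).astype(int)
df["grade_year"] = df["grade"].astype(str) + "_" + df["year"].astype(str)

# Equation 1 as a linear probability model: charter school fixed effects relative to
# the TPS baseline, plus student covariates and grade-by-year fixed effects.
fit = smf.ols(
    "returned ~ C(charter_id, Treatment(reference='TPS'))"
    " + black + frl + lag_read_std + lag_math_std + C(grade_year)",
    data=df,
).fit()

# The coefficients on the charter indicators are the school-specific relative return rates.
print(fit.params.filter(like="charter_id").round(3))
```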

In light of the significant degree of racial imbalance across charter schools that we documented in figure 2, we are interested in whether the quality of the match differs by the racial mix of students in the charter school. Figures 3a and 3b summarize our findings for an early and a recent period. They depict the coefficients on the charter school fixed effects (i.e., the average return rate for each school, after statistically adjusting for the characteristics of the students) relative to the return rate for comparable students in traditional public schools. Although we did the analysis for four separate three-year time periods starting in 2003, we present graphs only for an early period (2003–2005) and the most recent period (2009–2011). Consistent with the fact that parents had to choose to enroll their child in a charter school, our results indicate first that charter schools appear to be a better match than public schools for demographically similar students. That is evident from the large number of points above the reference line for traditional public schools (at 0) in figures 3a and 3b. Importantly, however, we find that the match is better for students in mainly white schools than for those in mainly minority schools.
Figure 3a. Relative Return Rates for All Charters (2003–2005).
Figure 3b. Relative Return Rates for All Charters (2009–2011).

The concentrations of schools at the far left and the far right of the graphs reflect the U-shaped distribution of the charter schools by the racial mix of their students shown earlier in figure 2a. Of interest here is that for both sets of years, the charter schools disproportionately serving white students (those on the left) appear to represent a better match than do the schools serving concentrations of minority students (those on the right). Indeed, a simple correlation confirms this relationship, with the estimated return rate (relative to the rate for comparable traditional public schools) and the percentage minority in the school having a correlation coefficient of about −0.3 for both time periods. At the same time there is no clear trend across the two time periods, other than the decline in the number of schools with racial compositions between 40 and 80 percent minority students.

We are not able to separate the observed pattern into the part contributed by parental preferences and the part contributed by school actions designed to discourage or prohibit students from returning. Nonetheless, we suspect that parental preferences dominate for the schools serving mainly white students and that school actions play a more significant, and perhaps a large, role in the schools serving mainly minority students. This speculation is based on the tough “no excuses” policies in some of the charters serving large proportions of disadvantaged students.

Further analysis of specific districts that have multiple charter schools (graphs not shown) indicates that the level of racial integration across the traditional public schools within a district can affect the pattern of the match.10 In particular, in Wake County (which has made a strong and nationally recognized effort over time to keep its traditional public schools racially balanced), children in the mainly white charter schools are much more likely to return to the charter school the following year (relative to their public school counterparts) than are the children in the few, mainly black charter schools. This pattern supports the conclusion that the whiteness of many of the charters is likely to be one of the reasons for their appeal. In contrast, in the state's other large district, Charlotte-Mecklenburg, which now features extremely segregated traditional public schools, children in mainly white charters are no more likely to return, relative to their counterparts in traditional public schools, than are the children in mainly black charters. Our interpretation is that parents in Charlotte-Mecklenburg wanting mainly white school environments for their children are less dependent on the charter school sector to achieve that goal than they are in Wake County. That some families appear to be using charter schools as a way to enroll their children in schools that are whiter than the district's traditional public schools is cause for concern. That is especially true in light of the state's history of segregation academies in the post-Brown years, in which parental choice was used to keep white students apart from minority students (Myers 2004).

In tracing the evolution of charter schools since their introduction, we seek to compare trends in student test score gains in charters and traditional public schools. Although many factors other than schools influence student test scores, we focus on this measure because of its policy salience and because North Carolina's initial charter school legislation specifically mentioned the intent to hold charter schools accountable for measurable student learning.11

To make this comparison, we estimate value-added models that generate average gains in test scores in each charter school and in each traditional public school serving students in any combination of grades between 4 and 8. We estimate the models separately for each year from 1999 to 2012 and control statistically for the demographic backgrounds of the students, their grade level, and whether they are new to the school. Once we have our school-specific average gains for each year, we can compare the distributions for each type of school over time. Our approach is similar to that used by Baude et al. (2014) in their study of the evolution of charter schools in Texas and in other previous school value-added applications (Ladd and Walsh 2002; Tekwe et al. 2004).

Although the average test score gains for each school are derived from value-added models that are similar to those often used to measure the effectiveness of individual teachers (e.g., Hanushek and Rivkin 2010), we caution against interpreting our measures as indicators of school quality or school effectiveness for reasons cited in Ladd and Walsh (2002). Instead, they simply indicate how well the students in each school perform on state-level tests, given their prior achievement levels and demographic characteristics. Strong performance of a school's students might indicate the school has effective programs. It might well reflect, however, the school's success in attracting able and motivated students.

The basic model (see equation 2) takes the following form for student $i$ in grade $g$ and school $s$ (with no year indicator because we run separate models for each year).12

$$A_{igs} = \beta_0 + \beta_1 X_{i} + \Gamma_g + \Theta_s + \varepsilon_{igs} \qquad (2)$$

The dependent variable, $A_{igs}$, is an individual student's performance in a given subject (math or reading). $X_i$ is a vector of individual-level background covariates, $\Gamma_g$ is a vector of grade fixed effects, and $\Theta_s$ is a vector of school fixed effects (our parameters of interest). In $X_i$ we include: free/reduced-price lunch status, gifted status in reading and math, race/ethnicity (indicators for black, Hispanic, and other), whether the student is new to the school, whether a student is exceptional, and individual students’ lagged math and reading scores. The lagged test scores are included because our goal is to estimate average gains, rather than levels, of test scores for each school.

We have intentionally not included peer composition variables in equation 2, such as the percent of students who are eligible for free or reduced-price lunch or who are minorities in a student's school-grade. A long literature has documented that peers may have either positive or negative spillover effects on student achievement (e.g., Hanushek et al. 2003; McEwan 2003). Moreover, student composition may play a role in attracting (or deterring) high-quality teachers who are able to produce achievement gains (Lankford, Loeb, and Wyckoff 2002; Clotfelter, Ladd, and Vigdor 2006). Given our interest in comparing average achievement gains across schools, however, it would not be appropriate to control for any school level variables such as peer characteristics.13

To estimate the school value-added coefficients, we use a procedure suggested by McCaffrey et al. (2012).14 This increasingly common approach to estimating a large number of fixed effects adds an additional assumption—that the fixed effects sum to 0—to standard fixed effects models. This assumption allows us to produce estimates for all school fixed effects, both charter and traditional public schools, without leaving out any of the fixed effects. As such, the constant in these models ($\beta_0$) shows the “grand mean.”15 Thus, the value-added coefficients can be interpreted as deviations from the overall mean of school fixed effects, in standardized units. For example, a fixed effect value of 1 means that the school is 1 standard deviation above the overall mean of school value-added estimates.16
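One simple way to impose this sum-to-zero normalization is to estimate the model with an arbitrary reference school and then re-center the full set of estimated school effects around zero. The sketch below illustrates that logic on fabricated data; it is not the exact procedure of McCaffrey et al. (2012), and, unlike our student-weighted estimates, it re-centers with an unweighted mean.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated illustrative data: one row per student, a handful of schools.
rng = np.random.default_rng(2)
n, n_schools = 4000, 20
df = pd.DataFrame({
    "school_id": rng.integers(0, n_schools, size=n),
    "lag_read": rng.normal(0, 1, size=n),
    "lag_math": rng.normal(0, 1, size=n),
    "frl": rng.integers(0, 2, size=n),
    "grade": rng.choice(np.arange(4, 9), size=n),
})
df["read"] = 0.6 * df["lag_read"] + 0.1 * df["lag_math"] - 0.1 * df["frl"] + rng.normal(0, 0.7, size=n)

# Step 1: estimate equation 2 with one school as the (arbitrary) omitted reference category.
fit = smf.ols("read ~ C(school_id) + C(grade) + lag_read + lag_math + frl", data=df).fit()

# Step 2: collect the estimated school effects, with the omitted reference school at 0.
effects = pd.Series(0.0, index=range(n_schools))
for name, value in fit.params.items():
    if name.startswith("C(school_id)"):
        school = int(name.split("T.")[1].rstrip("]"))
        effects[school] = value

# Step 3: re-center so the school effects sum to zero; each school's value-added
# estimate is then a deviation from the grand mean, as described in the text.
value_added = effects - effects.mean()
print(value_added.round(3))
```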

The objective of our analysis is to compare the distribution of the school fixed effects (the $\Theta_s$'s) across charter and traditional public schools. We are particularly interested in the extent to which the two distributions converge over time, as would be predicted by a market model.

Figures 4a and 4b portray the distributions of our estimates for reading and math for charter and traditional public schools for selected years (with all school-level estimates weighted by the number of students in the school). Table 2 reports the comparable means of the distributions for each sector for each year. The figures show that in the early years (1999–2003), the distribution of gains across charter schools was below that for traditional public schools, and that was true for both math and reading. These shortfalls in the charter sector in the early years are fully consistent with the findings reported by Bifulco and Ladd (2006) using a student fixed effects approach that relies on switchers between the traditional public school and charter sectors (an approach we also leverage below), the North Carolina-specific results from the Center for Research on Education Outcomes (CREDO 2013) study of charter effects across the United States, and preliminary results by Baude et al. (2014) from data in Texas.
Figure 4. Relative Reading and Math Gains, Charter versus Traditional Public Schools (1999–2009).

Despite this initial gap in charter school performance, however, student achievement in the charter school sector has improved, by this measure, relative to that in traditional public schools. During the thirteen years that we include in our analysis, the mean charter school gain in reading rose by 0.081 σ (from 0.078 standard deviations below to 0.003 standard deviations above the overall mean) and in math by 0.101 σ (from −0.056 to 0.045), whereas the traditional public school sector distribution remained relatively unchanged over that same time period. In recent years, the average achievement gains in charter schools have surpassed those in traditional public schools serving demographically similar students. This shift occurred as early as 2003 for reading but not until 2009 for math. By 2012, average achievement gains in the charter schools exceeded those of students in the traditional public schools in both subjects.17

We are also interested in the variation in the gains estimates across schools within each sector. Table 2 shows that the variation across schools (as measured by the standard deviation) in the charter sector greatly exceeded that in the public sector until the most recent few years, when the variation across sectors became approximately the same in both subjects. The declining variation across schools in the charter sector is generally consistent with what one would expect to observe in a maturing market responding to market pressures. It may also reflect, however, improvements in the way the state has regulated and supported charter schools over time.

Regardless of the source, the trend is clear; in the early years of charter schools, the students in those schools were performing at lower levels (given their demographic characteristics and prior achievement levels) than their counterparts in the traditional public schools. Over time, however, that relationship changed, with the change coming sooner for performance on reading tests than on math tests. By the final year, 2012, charter school students were outperforming students in the traditional public schools in both subjects. Moreover, over time, the variation in average student performance across charter schools has declined and is now no greater than that across traditional public schools.

In the following sections, we explore potential contributors to these changes in the distribution of test score gains in charter schools relative to those in traditional public schools. First, we document the contributions of charter school entry and exit. Second, we shift the focus to the students and examine changes over time both in the patterns of student selection into charters and in plausibly causal measures of charter school effectiveness.

In a typical private sector market with differentiated goods, one would predict that market forces would induce weak firms to exit the market and strong firms to enter it, resulting in improved average performance over time. The analog in this case is that charter schools whose students are performing poorly would shut down, those whose students are doing well would remain (or “persist”) in the market, and new schools would seek to enter the market and do well enough to survive. The one difference in the charter school case is that the decision to enter or exit the sector is not solely up to the schools themselves but is also a function of the state granting a charter to operate.18

Figure 5 portrays the student-weighted distributions of school value-added estimates by the three categories—“entrants,” “persisters,” and “exiters”—over the entire period. A clear pattern emerges: The test score gains of the students in the schools that exit are typically lower than those in charters that remain open, and the test score gains of the students in charters during their first year of operation are lower than those of students in the schools that persist, albeit only slightly higher than those in schools that eventually exit. Thus, even though poor academic performance is typically not the reason for revoking a charter in North Carolina, the process of entry and exit nevertheless has much the same effect—the closing of schools whose students are not performing well.
Figure 5. Distribution of Value Added by School Status (1999–2012).
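One plausible way to code charter school-years into the three categories used in figure 5 and table 3 is sketched below, assuming a school-by-year panel with hypothetical columns `school_id` and `year`; our own coding may differ in details such as how the final panel year is treated.

```python
import pandas as pd

def classify_school_years(panel: pd.DataFrame, last_panel_year: int) -> pd.Series:
    """Label each charter school-year as 'entrant', 'exiter', or 'persister'.

    One plausible coding: a school's first observed year is an entrant observation,
    its last observed year is an exiter observation if the school disappears before
    the end of the panel, and all remaining years are persister observations.
    """
    first = panel.groupby("school_id")["year"].transform("min")
    last = panel.groupby("school_id")["year"].transform("max")
    labels = pd.Series("persister", index=panel.index)
    labels[panel["year"] == first] = "entrant"
    labels[(panel["year"] == last) & (last < last_panel_year)] = "exiter"
    return labels

# Illustrative usage: school B closes after 2006, school C opens in 2007.
toy = pd.DataFrame({
    "school_id": ["A", "A", "A", "B", "B", "C"],
    "year":      [2005, 2006, 2007, 2005, 2006, 2007],
})
print(toy.assign(status=classify_school_years(toy, last_panel_year=2007)))
```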

In table 3, we shed light on the changes over time by examining the average test score gains of the three types of schools in two periods: the early period of charter schools (1999–2005) and the more recent period (2006–2012). The larger number of new charters entering the market in the early period relative to the later period reflects the rush to open charter schools in the early years before the cap was binding. In later years, as the state's charter school cap was approached, entry of new charter schools into the charter school market became less common.19

Table 3.
Student Gains by Charter School Exit, Persistence, and Entry

                             Exiting Schools    Persisting Schools    Entering Schools
Reading, 1999–2005
  Mean gains                      −0.246              −0.028                −0.169
  Number of school years                                 408                    40
  % of charter enrollment           0.67               94.55                  4.77
Reading, 2006–2012
  Mean gains                      −0.136               0.005                 0.077
  Number of school years              12                 601                    13
  % of charter enrollment           0.54               97.83                  1.63
Math, 1999–2005
  Mean gains                      −0.078               0.006                −0.100
  Number of school years                                 408                    40
  % of charter enrollment           0.67               94.55                  4.77
Math, 2006–2012
  Mean gains                      −0.043               0.051                 0.090
  Number of school years              12                 601                    13
  % of charter enrollment           0.54               97.83                  1.63

Note: Entering schools become persisting schools after their first year of operation, unless they exit.

In both periods, we see that the schools that exit are those with the lower test score gains, and that is true for both subjects. The new charter entrants, however, appear to have improved over time. In the early period (1999–2005), new charter schools featured lower average test score gains than the persisting schools, although they still exhibited higher gains than exiting schools. During the 2006–2012 period, in contrast, the test score gains of students in the newly entering charters were higher than those in the persisting schools. This change is likely the result of a significant policy change that took effect in November 2006. Starting in that year, all new charters were required to delay a year after approval before they could admit students. That permitted the state's Office of Charter Schools to make sure they were ready to open, and to provide technical support as necessary.

Thus, the overall gains in achievement of charter school students relative to traditional public school students described in the previous section reflect a combination of the departure in both periods of charter schools with relatively low achievement gains and, in the recent period, the entrance of schools demonstrating relatively larger achievement gains.

We emphasized earlier that our school-level measures of test score gains should not be interpreted as measures of the effectiveness with which charter schools raise the test scores of their students. Among the various reasons for this assertion is that the measures control neither for the unobservable characteristics of students that may affect their performance nor for the effects of peers on student achievement levels. Other reasons include the spillover effects of student mobility (Baude et al. 2014), potential resource differences, and measurement error (Ladd and Walsh 2002). Prior work on similar gains-based measures of school effectiveness shows that they consistently favor schools serving more advantaged students (Clotfelter and Ladd 1996; Ladd and Walsh 2002; Bifulco and Ladd 2007).

We begin by documenting the dramatic change over time in the academic characteristics of the students who have been entering charter schools. In table 4, we do so for sixth, seventh, and eighth graders by comparing the average prior-year test scores (and also the number of absences as a measure of student motivation) for students entering the charter school sector to those of students who stayed behind in traditional public schools for selected years.20 Table 4 documents that the charter school sector has been attracting a relatively more able and motivated group of students over time. This trend is evident for both reading and math test scores in grades 6, 7, and 8. In each of the early years, the entering students were typically less able than their counterparts who remained in traditional public schools. In the more recent years, however, the situation changed quite dramatically. In these years, the average test scores of the new charter school students exceeded those of their former public school peers by more than 0.2 standard deviations in reading and about 0.15 standard deviations in math. Although we do not have data on student absences for the early years, the absence data for the later years indicate, consistent with their higher test scores, that the new entrants to charter schools had about 20 percent fewer absences, on average, than those who stayed behind, suggesting higher student motivation and other noncognitive skills that may lead to better student outcomes.

Table 4.
Student Selection into Charter Schools

Early years                     2000                    2001                    2002
                        New to      Remain      New to      Remain      New to      Remain
                        Charter     in TPS      Charter     in TPS      Charter     in TPS
6th grade
  Avg. read (lag)       −0.070       0.001      −0.002       0.000      −0.048       0.001
  Avg. math (lag)       −0.206       0.003      −0.083       0.001      −0.084       0.001
  N                      1,200      95,281       1,463      98,110       1,748      99,848
7th grade
  Avg. read (lag)       −0.062       0.001       0.034      −0.000      −0.052       0.001
  Avg. math (lag)       −0.155       0.002      −0.051       0.001      −0.149       0.002
  N                      1,162      92,849       1,149      95,325       1,546      98,837
8th grade
  Avg. read (lag)       −0.094       0.001      −0.010       0.000       0.011      −0.000
  Avg. math (lag)       −0.167       0.002      −0.099       0.001      −0.051       0.001
  N                        851      90,103       1,014      91,899       1,205      94,880

Recent years                    2010                    2011                    2012
                        New to      Remain      New to      Remain      New to      Remain
                        Charter     in TPS      Charter     in TPS      Charter     in TPS
6th grade
  Avg. read (lag)        0.232      −0.008       0.232      −0.009       0.242      −0.010
  Avg. math (lag)        0.149      −0.005       0.162      −0.006       0.169      −0.007
  Days absent (lag)        5.1         5.8         4.2         5.0         3.8         4.7
  N                      3,717     103,328       4,019     104,925       4,289     106,623
7th grade
  Avg. read (lag)        0.274      −0.009       0.259      −0.009       0.257      −0.010
  Avg. math (lag)        0.213      −0.007       0.179      −0.006       0.195      −0.007
  Days absent (lag)        5.2         6.1         4.2         5.3         4.0         5.1
  N                      3,537     102,070       3,682     103,791       4,004     105,056
8th grade
  Avg. read (lag)        0.287      −0.009       0.299      −0.010       0.289      −0.010
  Avg. math (lag)        0.154      −0.005       0.183      −0.006       0.146      −0.005
  Days absent (lag)        5.2         6.5         4.5         5.7         4.2         5.4
  N                      3,142     101,415       3,445     102,175       3,711     103,917

Notes: Table 4 compares the mean values of the three outcomes for those who were observed in the traditional public school (TPS) in the prior year and in a charter school in the specified year with those who remained in the same traditional public school from the prior year to the specified year. The entries are the values of test scores and absences in the prior year, that is, before the students enrolled in a charter school. All reading and math scores have been standardized to mean zero and standard deviation of 1. N is the number of reading observations we have; the other two variables are similarly populated, with some missing observations. Absences were capped at 50 to minimize the effect of outliers.
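As a rough sketch of how the comparison groups in table 4 might be assembled from a student-by-year panel, the code below tags "new to charter" and "remain in TPS" observations; the column names are hypothetical and the details of our own data construction may differ.

```python
import pandas as pd

def tag_switchers(panel: pd.DataFrame) -> pd.DataFrame:
    """Tag student-years as 'new to charter' or 'remain in TPS', as in table 4.

    Assumes one row per student-year with hypothetical columns `student_id`,
    `year`, `school_id`, `is_charter`, and lagged outcomes such as `lag_read_std`.
    """
    panel = panel.sort_values(["student_id", "year"]).copy()
    prev_charter = panel.groupby("student_id")["is_charter"].shift(1)
    prev_school = panel.groupby("student_id")["school_id"].shift(1)
    panel["group"] = pd.NA
    # Observed in a traditional public school last year and in a charter this year.
    panel.loc[(prev_charter == 0) & (panel["is_charter"] == 1), "group"] = "new to charter"
    # Stayed in the same traditional public school across the two years.
    panel.loc[(prev_charter == 0) & (panel["is_charter"] == 0)
              & (prev_school == panel["school_id"]), "group"] = "remain in TPS"
    return panel

# Illustrative usage; the table 4 entries are then group means of the lagged scores,
# e.g., panel.groupby(["year", "group"])["lag_read_std"].mean().
toy = pd.DataFrame({
    "student_id":   [1, 1, 2, 2],
    "year":         [2011, 2012, 2011, 2012],
    "school_id":    ["T1", "C9", "T1", "T1"],
    "is_charter":   [0, 1, 0, 0],
    "lag_read_std": [0.3, 0.4, -0.1, 0.0],
})
print(tag_switchers(toy)[["student_id", "year", "group"]])
```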

The extent to which this increasingly positive selection by ability of students into charter schools contributes to the observed rightward shift in the test score gain distribution of charter schools over time is an open question. One possibility is that it contributes little, given that we have controlled statistically for each student's prior-year test scores in our estimation of the test-score gain measures. If that were the case, we would conclude that most of the rightward shift would be attributable to an increase in the effectiveness of the charter schools. At the same time, positive selection based on unmeasured student characteristics such as the noncognitive skills and motivation that lead to lower absenteeism or to better classroom performance could translate into higher gains even after we control for initial test scores. Moreover, a rising share of advantaged students in a charter school could well increase the test scores of other students through some form of spillover or peer effect. To the extent that either of these latter two mechanisms is operative, some of the increases in the observed gains in student test scores would reflect selection effects rather than an increase in the effectiveness of the charter schools themselves.

Hence it behooves us to estimate measures of charter school effectiveness that exclude at least some of these selection effects. The three standard ways of doing so are to compare student outcomes of those who won and lost random lotteries for entrance into charter schools, to compare student outcomes for charter school students to those for a control group constructed using the technique of propensity score matching, or to estimate achievement models with indicators for whether a student attended a charter school controlling for student fixed effects. Although all three approaches have well-known strengths, they each also have weaknesses (Zimmer et al. 2009; Betts and Tang 2011). The lottery approach is generalizable only to oversubscribed schools, requires detailed data on applications and outcomes of the lottery, and often suffers from attrition bias. The propensity matching approach is based only on observed characteristics and hence does not control for the unmeasured and unobservable differences that may characterize students in traditional public and charter schools. Finally, the student fixed effects approach identifies charter school effectiveness based only on the students with test scores who switch from traditional public schools into charter schools, rather than on the full set of charter school students. Moreover, one must be cautious in applying the approach to elementary schools, where most of the switchers in the tested grades are those who switch out of rather than into charters. We rely here on the student fixed effects approach and apply it to students in grades 4–8.

By using this approach, we are measuring how students perform in a charter school relative to how those very same students performed in a traditional public school. Comparing students to themselves means we are accounting for all of their time-invariant observable and unobservable characteristics, and also, implicitly, for any peer effects on their performance.21 Specifically, we follow the lead of Bifulco and Ladd (2006) and estimate student fixed effects models as outlined in equation 3:
A_igst = α C_ist + X_it β + γ_i + η_gt + ε_igst        (3)

The outcome of interest remains achievement for a given student (A_igst). The variable of interest, C_ist, indicates whether student i attended charter school s in year t, and α is the measure of effectiveness. Also included in the model are student fixed effects (γ_i) and grade-by-year fixed effects (η_gt), as well as a set of time-varying student characteristics, X_it, including lagged test scores.22 Once we have this arguably causal measure of school effectiveness, we can compare trends in effectiveness with trends in the value-added estimates of the type we reported earlier. In doing so, however, we must keep in mind that our effectiveness measures are identified only by the students who shift into the charter school sector during our time periods, not by all charter school students. Hence we look at trends in value added for all students and also for the subset of students who switch from one sector to the other.
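As a concrete illustration, equation 3 can be estimated as a within-student regression along the following lines. This is a minimal sketch with assumed variable names (test_score, charter, lag_score, student_id, school_id); it is not the exact specification or code underlying table 5.

    * Sketch of the student fixed effects model in equation 3: each student
    * serves as his or her own control, and grade-by-year effects enter as
    * interacted indicator variables.
    xtset student_id year
    xtreg test_score i.charter lag_score i.grade#i.year, fe vce(cluster school_id)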

Table 5 reports our findings for the period 2003–2011 and two sub-periods, 2003–2007 and 2008–2011. We selected these sub-periods in part because of the significant regulatory changes in 2006 and in part to ensure that we would have sufficient data for individual students within each sub-period to estimate models with student fixed effects. In fact, the basic findings are not sensitive to the specific sub-periods.

Table 5. 
Student Fixed Effects (FE) and Value-Added (VA) Models
                                Reading                                        Math
                   FE Model     VA (All)     VA (Switchers)    FE Model      VA (All)     VA (Switchers)
Full period (2003–2011)
Charter school    −0.0373**     0.0687**      0.0086**         −0.1225**     0.0166**     −0.0350**
                  (0.005)       (0.002)       (0.004)           (0.005)      (0.002)       (0.004)
N                 3,046,109     2,981,397     81,459            3,015,380    2,952,272     81,087
Early period (2003–2007)
Charter school    −0.0580**     0.0288**     −0.0256**         −0.1374**    −0.0277**     −0.0758**
                  (0.006)       (0.003)       (0.006)           (0.006)      (0.003)       (0.005)
N                 1,775,665     1,754,495     47,021            1,776,867    1,755,701     47,050
Late period (2008–2011)
Charter school     0.0053       0.1049**      0.0515**         −0.0774**     0.0656**      0.0172**
                  (0.008)       (0.003)       (0.007)           (0.009)      (0.003)       (0.007)
N                 1,270,444     1,226,902     34,438            1,238,513    1,196,571     34,037

Notes: Entries in each cell come from separate regressions. The FE regressions include the charter school indicator, student fixed effects, grade-by-year indicator variables, and seven time-varying student characteristics, including lagged test scores. The value-added (VA) models include the charter school indicator, time-invariant and time-varying student characteristics, and grade-by-year indicator variables. “All” refers to the full sample; “switchers” refers to the set of students who switched between the charter and traditional public school sectors during the relevant time period.

**p < 0.05.

We start with the estimated charter school effects from the models with student fixed effects (see columns 2 and 5). They indicate, first, that over the full period we examine (2003–2011), charter schools were somewhat less effective than traditional public schools in reading and far less effective in math. A comparison of the effectiveness estimates across the two sub-periods, however, suggests that the charters have become more effective over time. In particular, the negative effect of −0.058 for reading during the 2003–2007 period had disappeared by the 2008–2011 period, and the very large negative effect of −0.137 for math was reduced by almost half. We note that some unidentified, but probably small, portion of that improvement could reflect the higher quality peer groups in the charter schools.23

At the same time, even in the recent period, the fixed effects estimates provide no support for the conclusion that the charter schools are more effective than the traditional public schools in raising the test scores of the students who switch into them. Understanding the reasons for that weak performance is beyond the scope of this paper.24 Hence, the positive value-added estimates, even for the sample restricted to students who switched sectors, provide a misleadingly positive picture of the effectiveness of the charter schools. Much of what appears to be a positive effect of charter schools in the recent period in fact reflects student selection.

Nonetheless, the evidence is consistent with the conclusion that charter schools have indeed become more effective over time, albeit not by quite as much as would be implied by changes in the value-added measures. Looking just at the changes in value-added estimates for the sample of switchers, we find that between the two periods the gain in reading was 0.077 standard deviations and the gain in math was 0.093 standard deviations. Based on the changes in our fixed effects estimates, we can attribute about 0.06 standard deviations of the gain in reading to improved charter school effectiveness (the difference between −0.0580 and 0.0053) and about 0.06 standard deviations in math (the difference between −0.1374 and −0.0774). Thus we conclude that somewhere between two thirds and three quarters of the estimated gains in value added reflect improvements in charter school effectiveness, with the remainder attributable to a combination of other factors, including student selection.
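The back-of-the-envelope decomposition underlying these figures can be reproduced directly from the table 5 estimates; the short calculation below simply differences the coefficients for switchers across the two sub-periods.

    * Gains between the early (2003-2007) and late (2008-2011) periods.
    display "VA gain, reading: " 0.0515 - (-0.0256)    // 0.0771
    display "VA gain, math:    " 0.0172 - (-0.0758)    // 0.0930
    display "FE gain, reading: " 0.0053 - (-0.0580)    // 0.0633
    display "FE gain, math:    " -0.0774 - (-0.1374)   // 0.0600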

A defining characteristic of charter schools is that they introduce a strong market element into public education. In this paper, we have examined the evolution of the charter school sector in North Carolina between 1999 and 2012 with attention to three market-related considerations. First, we find that the state's charter schools, which started out disproportionately serving minority students, have been serving an increasingly white student population over time. In addition, during the period, individual charter schools have become increasingly racially imbalanced, in the sense that some are serving primarily minority students and others are serving primarily white students. The resulting market segmentation in the charter school sector reflects a major difference between charter schools and the typical textbook version of a private sector market. In the case of schools, consumers—in this case, parents—care not only about the quality of a school's program but also the mix of students in the school. As a result, market forces will tend to lead not only to more satisfied consumers but also to market segmentation, which in the case of schools is typically by the race of the student.

Second, we find that, as would be predicted, the quality of the match between parental preferences and the offerings of the schools is, in general, higher for charter schools than for traditional public schools, where our proxy for match quality is the demographically adjusted proportion of parents who keep their children in the charter school the next year relative to similar parents whose children are in traditional public schools. Importantly, however, we find that the fit between the child and the school is stronger for children in mainly white schools than in mainly minority schools. Although we have no way to test explicitly for motivation, this difference in fit is consistent with the view that many white parents are using the charter schools, at least in part, to avoid more racially diverse traditional public schools, and also with the possibility that charters serving largely minority children may be taking actions that discourage some of their students from returning.

Third, we document that the charter schools as a group started out behind the traditional public schools in terms of the test score gains of their students. Over time, however, the distributions across schools in the two sectors converged, and by the end of our period charter school students tended to have higher test score gains than those attending the traditional public schools. This finding reflects in part the winnowing out of charter schools whose students performed poorly, and, in recent years, the entry of schools whose students performed better. This process is consistent with predictions that market forces would drive under-performing schools out of business. The apparent success of the charter schools entering after 2006 has likely been enhanced by a policy change in that year that required charter schools to delay opening for a year after their charter was approved, and by the associated support provided by the state's Office of Charter Schools during and after that year.

At the same time, the relatively high test score gains of students in the charter schools at the end of our study period provide a misleading picture of the effectiveness of charter schools relative to traditional public schools. In fact, our analysis of charter school effectiveness using test score models that control statistically for student selection through student fixed effects shows no relative advantage for charter schools in reading and a disadvantage in math, at least for the students who switch. That is, the apparent success of the students switching from traditional public schools to charter schools in recent years largely reflects the greater ability and motivation of the students who are switching (which we document in table 4) rather than the greater effectiveness of the charter schools relative to the traditional public schools.

Nonetheless, consistent with the prediction of a market model, the North Carolina charter schools have indeed become somewhat more effective over time. It is simply that they started out so weak that the improvements have still not made them more effective on average than the traditional public schools.

Moving forward, the charter school sector in North Carolina is likely to grow significantly, although probably not as fast as its proponents would like. On the one hand, the 2011 removal of the one hundred–school cap, the seventy-one applications for new charters in 2014, and legislative changes that make it easier for existing charter schools to expand without specific approval all point to large increases in the size of the sector over the next few years. On the other hand, by recommending that only eleven of the seventy-one new charter applications be forwarded to the State Board for approval that year, the Charter School Advisory Board put a bit of a damper on the rate of growth, at least in the short term. Unhappy with this outcome, however, the legislature has now approved a “fast-track” option for charter school operators who have experience operating successful schools and want to replicate them. Such schools would not have to go through the typical planning year and could open at the start of the academic year following their approval. As the state's charter sector grows, we would expect to see a continuation of the trends that we have documented here, with the possibility that new entrants may once again struggle as the proliferation of new schools exceeds the limited capacity of the Office of Charter Schools to oversee and support them during their start-up periods.

In this paper, we have said nothing about how the growth of charters in particular districts is likely to affect the ability of those districts to provide quality schooling to the children in their traditional public schools. That issue is currently an urgent concern in Durham County, for example, where the rapid growth of charters has not only increased racial segregation but has also imposed significant financial burdens on the school district. One recent study found that the net cost to the Durham Public Schools could be $1,000 or more per student enrolled in a charter school, although the precise amount depends on the assumptions used (Troutman 2014). Major contributors to this burden are that charter schools serve far lower proportions of expensive-to-educate children than the traditional public schools do, and that the district, because of its fixed costs, cannot reduce its spending in line with the loss of students. In ongoing research we plan to investigate further the evolving financial and other implications of charter schools for districts’ traditional public schools.

In another line of inquiry we are exploring the extent to which the goals of charter schools have been changing over time. One approach is to compare the goals stated in the applications of charters approved at different points in time to examine changes in the types of students they intend to serve (e.g., low-income and minority students, students with disabilities, or all students), their subject focus (e.g., STEM, general purpose, or other), and their pedagogical approach (e.g., standard grade format, mixed grades, or Montessori). Another approach is to examine the backgrounds of charter school board members over time to tease out the extent to which charters are becoming more of a business proposition and less of an innovative way of providing education.

As would be predicted by the standard market model of competition, the charter school sector in North Carolina will undoubtedly continue to evolve. In this case, however, state policy makers have both the power and the responsibility to influence that evolution. In particular, they have the authority to limit the number of entrants or to alter the authorization and review processes. The question is whether they will use that authority to ensure that the sector serves the public interest and not just the private interests of those who send their children to charter schools.

The authors thank the Center for the Analysis of Longitudinal Data in Education Research funded by the U.S. Department of Education (grant R305C120008) for financial support, Adrienne Jones and Winnie Biwott for expert research assistance, and Izzy Hernandez-Cruz for additional assistance. We are also indebted to discussants and participants at APPAM and AEFP conferences for their feedback on earlier versions of this paper.

Baude, Patrick L., Marcus Casey, Eric A. Hanushek, and Steven G. Rivkin. 2014. The evolution of charter school quality. NBER Working Paper No. w20645.
Bettinger, Eric P. 2005. The effect of charter schools on charter students and public schools. Economics of Education Review 24(2): 133–147. doi:10.1016/j.econedurev.2004.04.009.
Betts, Julian R., and Y. Emily Tang. 2011. The effect of charter schools on student achievement: A meta-analysis of the literature. Seattle, WA: Center on Reinventing Public Education.
Bifulco, Robert, and Helen F. Ladd. 2006. The impacts of charter schools on student achievement: Evidence from North Carolina. Education Finance and Policy 1(1): 50–90. doi:10.1162/edfp.2006.1.1.50.
Bifulco, Robert, and Helen F. Ladd. 2007. School choice, racial segregation, and test-score gaps: Evidence from North Carolina’s charter school program. Journal of Policy Analysis and Management 26(1): 31–56. doi:10.1002/pam.20226.
Carruthers, Celeste K. 2012. The qualifications and classroom performance of teachers moving to charter schools. Education Finance and Policy 7(3): 233–268. doi:10.1162/EDFP_a_00067.
Center for Research on Education Outcomes (CREDO). 2013. National charter school study 2013. Available credo.stanford.edu/documents/NCSS%202013%20Final%20Draft.pdf. Accessed 30 September 2016.
Chetty, Raj, John N. Friedman, and Jonah E. Rockoff. 2014. Measuring the impact of teachers I: Evaluating bias in teacher value-added estimates. American Economic Review 104(9): 2593–2632. doi:10.1257/aer.104.9.2593.
Clotfelter, Charles T., and Helen F. Ladd. 1996. Recognizing and rewarding success in public schools. In Holding schools accountable: Performance-based reform in education, edited by Helen F. Ladd, pp. 23–63. Washington, DC: The Brookings Institution.
Clotfelter, Charles T., Helen F. Ladd, and Jacob L. Vigdor. 2006. Teacher-student matching and the assessment of teacher effectiveness. Journal of Human Resources 41(4): 778–820. doi:10.3368/jhr.XLI.4.778.
Eisen, Allison. 2014. The purpose of the charter: Exploring the accountability of North Carolina charter schools. MA thesis, Duke University.
Eisen, Allison, and Helen F. Ladd. 2015. A spotty record for North Carolina charters. News and Observer, 10 March.
Fuller, Bruce F., Richard Elmore, and Gary Orfield. 1996. Who chooses? Who loses? Culture, institutions, and the unequal effects of school choice. New York: Teachers College Press.
Hanushek, Eric A., John F. Kain, Jacob M. Markman, and Steven G. Rivkin. 2003. Does peer ability affect student achievement? Journal of Applied Econometrics 18(5): 527–544. doi:10.1002/jae.741.
Hanushek, Eric A., and Steven G. Rivkin. 2010. Generalizations about using value-added measures of teacher quality. American Economic Review 100(2): 267–271. doi:10.1257/aer.100.2.267.
Kane, Thomas J., and Douglas O. Staiger. 2008. Estimating teacher impacts on student achievement: An experimental evaluation. NBER Working Paper No. w14607.
Ladd, Helen F., and Randall P. Walsh. 2002. Implementing value-added measures of school effectiveness: Getting the incentives right. Economics of Education Review 21(1): 1–17. doi:10.1016/S0272-7757(00)00039-X.
Lankford, Hamilton, Susanna Loeb, and James Wyckoff. 2002. Teacher sorting and the plight of urban schools: A descriptive analysis. Educational Evaluation and Policy Analysis 24(1): 37–62. doi:10.3102/01623737024001037.
McCaffrey, Daniel F., J. R. Lockwood, Kata Mihaly, and Tim R. Sass. 2012. A review of Stata routines for fixed effects estimation in normal linear models. Stata Journal 12(3): 406–432.
McEwan, Patrick J. 2003. Peer effects on student achievement: Evidence from Chile. Economics of Education Review 22(2): 131–141. doi:10.1016/S0272-7757(02)00005-5.
Myers, Christopher. 2004. White Freedom Schools: The white academy movement in eastern North Carolina, 1954–1973. North Carolina Historical Review 81(4): 393–425.
Tekwe, Carmen D., Randy L. Carter, Chang-Xing Ma, James Algina, Maurice E. Lucas, Jeffrey Roth, Mario Ariet, Thomas Fisher, and Michael B. Resnick. 2004. An empirical comparison of statistical models for value-added assessment of school performance. Journal of Educational and Behavioral Statistics 29(1): 11–36. doi:10.3102/10769986029001011.
Troutman, Elizabeth. 2014. Refocusing charter school policy on disadvantaged students. MA thesis, Duke University.
Zimmer, Ron, Brian Gill, Kevin Booker, Stephane Lavertu, Tim R. Sass, and John Witte. 2009. Charter schools in eight states: Effects on achievement, attainment, integration, and competition. Available www.rand.org/pubs/monographs/MG869.html. Accessed 30 September 2016.

Appendix A

Table A.1. 
Number of Charters, Charter Exits, and New Charters Statewide by Year
Year        No. Charters    No. Charter Exits    No. New Charters
1991–92
1992–93
1993–94
1994–95
1995–96
1996–97
1997–98          33                                      33
1998–99          71                                      45
1999–00          75
2000–01          85                                      15
2001–02          88
2002–03          93
2003–04          93
2004–05          97
2005–06          94
2006–07          90
2007–08          97
2008–09          97
2009–10          96
2010–11          99
2011–12         100
2012–13         108
2013–14         127                                      23
2014–15         153                                      26a
2015–16         224                                      71b

Notes: Data are from the Educational Directory and Demographic Information Exchange (http://apps.schools.nc.gov/pls/apex/f?p=125:1). Number of Charter Exits is the number of charters that closed at the end of the given school year. Number of New Charters is the number of charters that opened at the beginning of the given school year.

b. Applications received for 2015–2016; found at: www.ncpublicschools.org/charterschools/applications/submitted/2015-16/.

1. 

Except where noted, all the data in this paper come from the North Carolina Education Research Data Center. The center provides researchers, on a confidential basis, with data from the North Carolina Department of Public Instruction on all students in the state. All identifying information has been removed.

2. 

A charter school bill was initially introduced in the House by a Republican and in the Senate by a Democrat in 1995, but it failed to pass. The following year the North Carolina Family Policy Council worked closely with the original sponsors to draft a compromise proposal that passed in 1996. For a brief legislative history of charter schools, see the article by Will Schultz of the North Carolina History Project (northcarolinahistory.org/encyclopedia/charter-schools/).

3. 

See the 1996 legislation (www.ncga.state.nc.us/enactedlegislation/statutes/html/bysection/chapter_115c/gs_115c-218.45.html). That language was changed in 2013 to read, the charter school “shall make efforts for the population of the school to reasonably reflect the racial and ethnic composition of the general population residing within the local school administrative unit in which the school is located…”

4. 

See www.ncleg.net/Sessions/2015/Bills/House/PDF/H334v4.pdf, page 2. This 2013 bill enabled additional expansion by permitting a charter school to add one grade higher or lower than it currently offered without the approval of the State Board. The bill also reduced the proportion of charter school teachers in grades K–5 who had to be certified from 75 percent to 50 percent, removed the right of districts to submit impact statements with charter school applications, and set up a fast-track approval process.

5. 

This low approval rate led to criticism by the state legislature, which has indicated interest in using the newly established fast-track procedure to set up more charters.

6. 

This table—and many of our other analyses—uses data only for students in grades 4–8, given the comparative abundance of data, including student test scores, available in these grades.

7. 

A similar analysis for students in the broader 3–12 range of grades provides a comparable picture, albeit one that shows slightly stronger trends. For the broader grade range, black students were overrepresented in charter schools relative to traditional public schools in 1998 (41.3 percent and 29.1 percent, respectively) but by 2012 were underrepresented (24.9 percent and 27.3 percent, respectively). Correspondingly, the white underrepresentation in charter schools relative to traditional public schools in 1998 (54.7 percent and 64.9 percent, respectively) turned to an overrepresentation (64.3 percent and 53.7 percent, respectively) by 2012.

8. 

Because the state stopped collecting information on parental education in 2008, we are not able to extend the data to 2012.

9. 

Bifulco and Ladd (2007) infer from the choices that black families make among charter schools that they prefer racially balanced schools, but they are usually not able to attain that goal because white families typically prefer charters that are less than 20 percent black.

10. 

In the district-specific models, we limit the sample to a single district and remove district fixed effects.

11. 

The legislation did not specifically state that the accountability criterion would be gains in student test scores on the state's standardized tests. In fact, it left open the possibility that some charters might want to develop other measurable accountability criteria. In practice, however, the only charter schools that were approved were those that agreed to be held accountable based on the state's tests.

12. 

To address possible intra-school correlations in student performance, we cluster our standard errors at the school level. Combining this with a school fixed effect likely means that our standard errors are conservatively estimated.

13. 

We note, however, that including a set of peer characteristics in our value-added models does little to change the conclusions we outline in this paper.

14. 

McCaffrey et al. (2012) outline several ways of estimating many fixed effects. Our approach uses the xtreg command, followed by the predict, u post-estimation command in Stata.
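As a sketch only, with school_id and the list of controls assumed for illustration, the commands described in this note can be combined as follows to recover the school-level fixed effects:

    * Recover the estimated school fixed effects after -xtreg, fe-; the
    * predicted u component is each school's estimated effect.
    xtset school_id
    xtreg test_score lag_score male black hispanic econ_disadv i.grade#i.year, fe
    predict school_effect, u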

15. 

For more on this standard approach in Stata, see www.stata.com/support/faqs/statistics/intercept-in-fixed-effects-model/.

16. 

We note the results we report below are not adjusted for Bayesian shrinkage, an adjustment that is often used in the estimation of value-added measures for teachers (Kane and Staiger 2008; Chetty, Friedman, and Rockoff 2014). That type of adjustment is less relevant in the context of schools because of the larger samples of students in schools than in a teacher's classroom. Implementing this adjustment in our early analyses confirmed that it had little effect.
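For readers unfamiliar with the adjustment, the lines below sketch what an empirical Bayes (shrinkage) correction typically looks like; sigma2_school, se_school, and va_raw are assumed variable names, and, as noted above, no such adjustment was applied to the reported results.

    * Illustrative empirical Bayes shrinkage: noisier school estimates are
    * pulled toward zero (the mean of the centered estimates) in proportion
    * to their sampling error.
    generate lambda = sigma2_school / (sigma2_school + se_school^2)
    generate va_shrunk = lambda * va_raw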

17. 

The improvement in the last few years is also consistent with the recent changes reported in the CREDO study (2013) for North Carolina.

18. 

The stated reason for the closure of about a third of the charter schools during the period was inadequate enrollment. The reason given for most of the other closures was financial irregularity or mismanagement. Only one school was closed explicitly for poor academic performance.

19. 

These numbers reflect the number of observations in our analyses. They are lower than the numbers reported in figure 1 for two reasons: first, because we include only charter schools with grades 4–8, and second, because of listwise deletion that occurs due to sporadic missingness in some of our model controls.

20. 

We focus on students entering charters in grades 6, 7, and 8 because far more students transfer to charters in those grades than in the earlier grades. Sample sizes would only be about 600 in the recent years for grades 4 and 5.

21. 

One potential concern with such models is that they do not account for relevant individual-level characteristics that vary over time, such as previous test score performance. One might be concerned, for example, that charter school students might have a pre-entry dip in test scores and then a reversion to mean performance levels. Our examination of patterns in student test scores prior to entry provides no evidence to support a pre-entry dip and hence no evidence of regression to the mean. Further, prior studies have generally concluded that prior trends do not explain the charter school results (Bifulco and Ladd 2006; Betts and Tang 2011).

22. 

The student fixed effects models reported in table 5 include all the time-varying student-level variables that are included in our value-added models. The results change in no meaningful way if we estimate far more parsimonious models that include only whether a student is new to a school, or if we restrict the sample to switchers alone. For example, the estimated coefficient for reading in the 2003–2007 period in the parsimonious model for switchers only is −0.0582, which is almost identical to the reported estimate of −0.0580. For the 2008–2011 period, the coefficient for reading is −0.0096, which, like the reported coefficient, is indistinguishable from zero.

23. 

Our assertion that the contribution of peer effects is likely to be small is consistent with estimates (not reported) from value-added models that include school-level peer variables (see footnote 13) and with the evidence and argument made by Baude et al. (2014) in their study of Texas charter schools.

24. 

Among the many possible explanations is low teacher quality. Carruthers (2012) provides some support for that explanation in her study of the quality of charter school teachers whom she was able to observe in the traditional public schools before they switched sectors. She found that between 1997 and 2009, the teachers who moved to charter schools (who accounted for about 36 percent of all teachers in the charter sector) were typically less qualified and less effective than comparable teachers who moved within the traditional public school sector.