Abstract
Half of all college students will enroll in remedial coursework, but evidence of its effectiveness is mixed. Using a regression-discontinuity design with data from a large urban community college system, we make three contributions. First, we articulate three alternative hypotheses regarding the potential impacts of remediation. Second, in addition to credits and degree completion, we examine several underexplored outcomes, including initial enrollment, grades in subsequent courses, and post-treatment proficiency test scores. Finally, we exploit rich high school background data to examine impact heterogeneity by predicted dropout risk. We find that remedial assignment does little to develop students’ skills. But we also find little evidence that it discourages initial enrollment or persistence, except for a subgroup we identify as potentially misassigned to remediation. Instead, the primary effect of remediation appears to be diversionary: students simply take remedial courses instead of college-level courses. These diversionary effects are largest for the lowest-risk students.
1. Introduction
Remedial education, or “developmental” education as it is called in the field, may be the most widespread and costly intervention aimed at addressing a perceived lack of preparation among incoming college students.1 Remedial courses, which do not count towards degree completion, are intended to help students master the skills needed for successful progression toward their degree goals. Half of all undergraduates will take one or more remedial courses while enrolled.2 At community colleges, remedial credits represent approximately 10 percent of all credits earned, suggesting that the cost of remediation may be nearly $4 billion per year in this sector alone.3
Previous research, primarily relying on regression-discontinuity (RD) analyses comparing students just above and below remedial test score cutoffs, has found mixed evidence at best regarding whether assignment to remediation actually improves student outcomes (Bettinger and Long 2005, 2009; Calcagno and Long 2008; Boatman and Long 2010; Martorell and McFarlin 2011; Dadgar 2012; Hodara 2012). But remediation is not going away, and if anything, remediation policies have tended to become stricter over time (Hughes and Scott-Clayton 2011).
Policy makers may be cautious in their interpretations of the existing research on remediation for two reasons: first, lingering uncertainty about the generalizability of prior findings to other contexts, particularly to lower-ability students who may not be represented in local RD estimates; second, the reality that remediation may serve other purposes beyond simply developing students’ college readiness. For example, an unadvertised but implicit function of remedial assignment may be to signal students about their likelihood of college completion; it may be efficient for both the student and the institution to realize this and adjust their investments sooner rather than later. Moreover, regardless of its effectiveness in remediating skill deficiencies, remediation may still serve as an expedient form of student tracking. Even if remediated students never make it to college-level coursework, students in both remedial and college-level courses may learn more during their three semesters of attendance (the average in our sample) than if they were all grouped in already-crowded college courses.
Our study makes three primary contributions. First, we articulate three alternative hypotheses regarding the functions of remediation: as development for future coursework, discouragement from further study, or simply a diversion onto a separate track. Second, using rich administrative data on 100,000 students at six institutions within a large, urban community college system (LUCCS), we utilize a regression-discontinuity approach (comparing students just above and below remedial test score cutoffs) to examine several outcomes underexplored in the prior literature, including the initial decision to enroll, grades in subsequent college courses in the same subject, and post-treatment scores on a proficiency exam required in order to earn any degree.4 Finally, we explore impact heterogeneity using a novel approach that gets us beyond the usual local nature of RD estimates: Because the placement test scores used for the RD are quite noisy, we use rich high school background data to identify students with varied levels of prior predicted dropout risk who all scored around the placement test cutoff.
Our findings affirm prior research indicating that assignment to remediation does not develop students’ skills sufficiently to increase their rates of college success (Calcagno and Long 2008; Martorell and McFarlin 2011). On the other hand, neither does remedial assignment appear to be a significant discouragement to student progress, except for one group we identify as potentially misassigned to remediation—students who passed a more difficult writing test, but just barely failed a significantly easier reading test. Among other negative effects, this group experienced an 8 percentage point increase in the likelihood of dropping out.
Overall, the primary effect of remediation appears to be diversionary: students generally enroll and persist at the same rates but simply take remedial courses instead of college-level courses. Although our conceptual framework suggests diversion is not necessarily a bad thing, our findings provide some reason for concern. First, we find that potentially one quarter of students diverted from college-level courses in math, and up to 70 percent of those diverted in reading, would have earned a B or better in the relevant college course. Further, our analysis of impacts by prior predicted dropout risk suggests that diversionary effects are largest for the lowest-risk students, and we fail to find positive effects for any risk subgroup.
The remainder of the text proceeds as follows: in section 2, we describe our conceptual framework and review the prior literature. In section 3, we describe our empirical strategy. In section 4 we present our main results and specification checks. Section 5 explores heterogeneous effects by test type and prior predicted dropout risk. Section 6 concludes.
2. Conceptual Framework and Prior Literature
The increasing availability of large-scale administrative data sets makes it feasible for researchers to examine dozens of outcomes for any given intervention. Carefully delineating a program's potential mechanisms is thus essential to identifying the key outcomes of interest and interpreting the resulting pattern of estimates. Prior research has described several purposes that remedial coursework might serve within an institution. We categorize these potential functions into three broad hypotheses, namely, that remediation serves as (1) skill development that prepares students for future college-level courses, (2) a discouragement that stigmatizes students and sends a signal about their probability of college success, or (3) a diversion that steers students out of college-level courses and reduces heterogeneity within classrooms. These functions may or may not be intentional and are not mutually exclusive. In the following, we describe each hypothesis and summarize the relevant causal research in context.
The Developmental Hypothesis
Prior research has documented low levels of preparation among recent cohorts of high school graduates (Greene and Forster 2003). At open-access institutions, remedial coursework is intended to develop underprepared students’ skills so that they have the opportunity to pursue college success regardless of prior preparation (RP Group 2007). Indeed, this central function is expressed in terminology—many institutions and researchers now eschew the traditional term “remedial education” in favor of the more optimistic “developmental education.” In this view, developmental education is an investment—compared with how they might have fared without remediation, these students may experience an initial negative setback as they delay some college coursework but should reap benefits over the longer term. These longer term benefits should most directly include improved performance in college-level courses, which may in turn lead to greater persistence and higher rates of degree completion and/or transfer.
Three prior evaluations of remedial education provide evidence on the developmental mechanism by examining whether remediated students eventually complete more college-level credits, persist for longer, and/or complete degrees or transfer at higher rates than similar students who took the most direct path. The first quasi-experimental study of remediation, by Bettinger and Long (2009), provides the most encouraging evidence in support of the developmental hypothesis. They take advantage of seemingly arbitrary variation in placement test cutoff policies across two- and four-year campuses in Ohio, using distance to college as an instrument for students’ probability of remediation. They find some important positive impacts: Students who were more likely to be remediated (by virtue of the cutoff policy at the nearest school) were more likely to complete a bachelor's degree in four years. They also find those remediated in English were less likely to drop out in their first year. On the other hand, mixed with these positive effects, they also find some significant negative impacts. For example, remediated students in both English and math completed significantly fewer total credits, and those remediated in math were more likely to drop out in their first year.
Two subsequent studies using an RD approach, comparing students just above and below test score cutoffs for remediation within institutions, find little evidence to support the development hypothesis. A study using data from over 100,000 two-year entrants in the state of Florida found no impact on retention, degree completion, transfer, or completion of college credits for students near the cutoff (Calcagno and Long 2008). Martorell and McFarlin (2011), who studied over 250,000 students in Texas public two- and four-year colleges, find that assignment to remediation decreased the probability of completing additional years of college and reduced credit accumulation, with no impact on degree attainment.
It is worth noting that the prior literature has not fully explored one set of outcomes particularly relevant to the development hypothesis: grades in subsequent college-level coursework in the remediated subject. If remediation improves students’ performance in the college-level courses that directly follow remediation, this alone might justify the intervention, even without broader impacts on credits or graduation, which may be asking too much of a fairly narrow treatment. Of course, grades can be a tricky outcome to examine causally because many students simply never reach college-level coursework. Two prior studies examine whether remediated students have a higher likelihood of ever completing a college-level course in the relevant subject (Calcagno and Long 2008; Dadgar 2012). These studies find no effect, but potential impacts higher in the grade distribution are unexplored. Boatman and Long (2010) take a different approach, examining grades only for students who attempted a college-level course—although these comparisons do not have a causal interpretation because they are conditional on a post-treatment outcome (ever taking a college-level course).5 In order to examine grades while preserving our causal identification strategy, we examine binary outcomes, such as whether a student ever earned a B or better (or a C or better) in the first college-level course in the relevant subject (where those never taking the course are entered as zeros).
The Discouragement Hypothesis
Martorell and McFarlin's (2011) finding that assignment to remediation negatively impacts college persistence suggests the presence of discouragement or stigma effects. This is consistent with evidence on the impact of test score performance labels at the high school level, indicating that being labeled as a poor performer discourages students from enrolling in college (Papay, Murnane, and Willett 2011). Via this mechanism, an assignment to remediation may send a message to students that they are not “college material.” This is in line with Burton Clark's (1960) description of a “cooling out” process in higher education, in which obstacles encountered by the student in college serve to gradually diminish their degree aspirations. Students assigned to remediation may, as a result, be less likely to enroll, or may end up dropping out sooner even if they do enroll. Note that although discouragement is typically framed as an undesirable potential side effect, it is possible to take a more agnostic view: A remedial assignment may simply give students a signal about their preparation that causes them to rationally reevaluate the benefits of enrollment or persistence.6
Allowing for potential discouragement effects has several implications for our study. First, it highlights the importance of tracking students from the point that they receive their first test scores, not only after they enroll. Whereas all of the prior remediation studies look for negative impacts on persistence or completion conditional on enrollment, ours joins just one other recent study in examining whether there are any effects on college enrollment between the time of the first test and initial course registration. Martorell, McFarlin, and Xue (2015), using the same Texas data as in the Martorell and McFarlin (2011) study, find no significant effect on initial enrollment in either direction. Given the variation in remedial testing and assignment procedures across systems, our study will help establish whether this finding generalizes to a different context.
Second, the discouragement hypothesis implies that some students assigned to remediation may be negatively affected even if they never actually enroll in or complete remediation. With the exception of Martorell, McFarlin, and Xue (2015), prior research typically uses remedial assignment policy as an instrumental variable for actual remedial course-taking (Calcagno and Long 2008; Bettinger and Long 2009; Martorell and McFarlin 2011). But unless one is willing to assume away any direct effects of the remedial label, only the reduced-form effect of remedial assignment can be credibly established. We thus focus primarily on reduced-form effects of remedial assignment.
Finally, the discouragement hypothesis highlights the importance of considering heterogeneous effects when evaluating remedial policies: Some students may be discouraged, whereas other students may do better than they would otherwise. One potential limitation of any RD study is that the estimated effects are local to students scoring near remedial cutoffs (that is, the highest-ability remediated students). Defenders of the developmental model may legitimately argue that higher-ability students might be the most sensitive to discouragement effects and least likely to benefit from developmental instruction, implying that lower-ability students might experience more positive effects. Martorell and McFarlin (2011) are able to examine RD effects separately for cohorts with higher and lower cutoffs and find less negative (but not positive) effects when the marginal student is of lower measured ability.
Recent RD studies have also explored the effects of assignment to lower levels of remediation (Boatman and Long 2010; Dadgar 2012; Hodara 2012). These studies compare students just above and below test score cutoffs for longer versus shorter (or more versus less intensive) remedial sequences, rather than comparing those above and below the threshold for college-level coursework. These studies have also found less negative effects, with a smattering of positive effects of assignment to lower remedial levels. This pattern is consistent either with less negative effects for lower-ability students, or simply reflects a different bundle of treatment (for example, it may be that discouragement effects apply equally to remediated students regardless of level but those at lower levels get a larger “dose” of the developmental mechanism).
Our study provides a new means of exploring heterogeneous effects in RD designs: Although the placement test scores used for assignment are often assumed to be a measure of ability, they are in fact quite noisy and error-prone (ACT, Inc. 2006; Scott-Clayton 2012). This implies that even around the cutoff there is variation in student ability. We use rich demographic and background data on high school achievement to predict students’ pretreatment risk of dropping out of college. We then run our RD analysis separately for subgroups based on this index of dropout risk.
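The logic of this approach can be sketched as follows (a minimal sketch, not the authors' code; the simulated data and all column names, such as score_dist and hs_gpa, are hypothetical stand-ins):

```python
# Sketch of the risk-subgroup RD idea: (1) predict dropout from pre-treatment
# background variables only, (2) cut predicted risk into groups, (3) run the
# RD separately within each group. Data and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "score_dist": rng.integers(-6, 7, n).astype(float),  # score minus cutoff
    "hs_gpa": rng.normal(73, 8, n),                      # pre-treatment predictors
    "age": rng.normal(21, 4, n),
    "dropout": rng.binomial(1, 0.6, n),                  # placeholder outcome
})
df["assigned_remedial"] = (df["score_dist"] < 0).astype(int)

# Step 1: predicted dropout risk from pre-treatment covariates only.
risk_model = smf.logit("dropout ~ hs_gpa + age", data=df).fit(disp=0)
df["risk"] = risk_model.predict(df)

# Step 2: risk subgroups (here, terciles of predicted risk).
df["risk_group"] = pd.qcut(df["risk"], 3, labels=["low", "mid", "high"])

# Step 3: local linear RD within each risk subgroup.
for group, sub in df.groupby("risk_group", observed=True):
    rd = smf.ols("dropout ~ assigned_remedial * score_dist",
                 data=sub).fit(cov_type="HC1")
    print(group, round(rd.params["assigned_remedial"], 3))
```

Because the placement score is noisy, students near the cutoff span a wide range of predicted risk, so these subgroup estimates recover heterogeneity that a single local estimate would mask.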
The Diversion Hypothesis
A third possible view of remediation is neither as optimistic as the development hypothesis nor as pessimistic as the discouragement hypothesis. Under the diversion hypothesis, the primary role of remediation is simply for institutions to sort students of different ability onto different course tracks. The goal in this case need not be to prepare remediated students for future coursework but simply to maximize learning gains for both remediated and nonremediated students for as long as they remain enrolled (which, in our sample, is an average of three semesters). While research on K–12 education largely finds that low-achieving students learn more when they are placed in heterogeneous classrooms (Peterson 1989; White et al. 1996; Burris, Heubert, and Levin 2006), some studies have found that tracking may be beneficial, or at least not harmful, to students of lower ability (Figlio and Page 2002; Zimmer 2003).
Moreover, there is strong evidence of peer effects in higher education (Sacerdote 2001; Zimmerman 2003; Winston and Zimmerman 2004; Carrell, Fullerton, and West 2009), raising the concern that allowing too many underprepared students into college-level courses might depress the achievement of the better-prepared. While Carrell, Fullerton, and West (2009) find that positive peer effects for low-achieving students outweigh negative effects for high-achieving students, it is not obvious their results (in the context of the U.S. Air Force Academy) would extrapolate to community college students. Finally, if college-level courses are already at capacity, then allowing too many students into college-level coursework might depress achievement because of overcrowding, regardless of whether or not the ability mix shifts.
The prior causal research on remediation has not been designed to examine impacts on nonremediated students, and our study is no different on this dimension. Like prior research, we are able to examine diversion by looking at the relative impact on total credits (including both remedial and college-level courses) versus college-level credits—a diversion story suggests there may be no effect on the former but negative effects on the latter. Both Calcagno and Long (2008) and Martorell and McFarlin (2011) find evidence of diversion effects. Nevertheless, a potential outcome under the diversion hypothesis is that even if remediated students never make it to college-level coursework, they may learn more in their remedial courses than they would have otherwise. This suggests that one should examine some direct measures of learning beyond simply credits and credentials. Martorell and McFarlin (2011) find no impact on labor market outcomes, though their estimates are too noisy to rule out modest effects in either direction. We extend the literature on this front by examining post-assignment scores on a proficiency exam that is required of all students in order to graduate.
3. Empirical Strategy
Institutional Context
Our analysis focuses on first-time degree-seeking students who were admitted to any of the six community colleges in the LUCCS between Fall 2001 and Fall 2007.7 Over the study period, the LUCCS utilized two different exams for placement in math. From 2001 to 2004 the LUCCS utilized a single-score math exam that was developed in-house. Since 2004, the LUCCS has utilized scores from the COMPASS® numerical skills/pre-algebra module as well as the algebra module for remedial placement.8 For reading/writing placement, over the entire period the LUCCS used the COMPASS® reading exam as well as a writing exam that the LUCCS adapted slightly from the standard COMPASS® writing module (and which the LUCCS grades in-house).
As in many systems, students are exempted from the placement exams if they score above a certain level on the SAT, the ACT, or a standardized state high school exam. Approximately 20 percent of entering students were exempt from placement testing in math, and approximately 25 percent were exempt in English for the cohorts under study. These exempt students are excluded from the analysis. All students who are not exempted from placement testing must take the relevant placement exam(s) prior to initial enrollment.9 The retesting policy is strict—students may not retake a placement exam until they have completed either a remedial course or at least twenty hours of documented participation in an alternative intervention, which might include a workshop or regular tutoring.
Each year, the LUCCS central office establishes minimum cut scores for access to college-level courses that apply to all of the LUCCS institutions, although schools are free to establish higher cutoffs, and some schools in some years were allowed to have lower cutoffs on the writing exam on a pilot basis. We determined the cutoff policies in place at each college in each year by examining information from college course catalogs and following up with institutional administrators if necessary. We also checked these stated cutoffs against the actual course-taking patterns by test score that we can observe in our data.
Students are encouraged but not required to begin their remedial coursework immediately upon enrollment. Although they may be able to access some college-level courses before completing remediation, many college-level courses require freshman composition in particular as a prerequisite. Moreover, students must pass college-level freshman composition and at least one credit-bearing math course in order to earn any degree, so a student cannot graduate without successfully exiting remediation. Although relatively few students in the LUCCS who are assigned to a remedial course circumvent that placement to enroll in a college-level course, this does not imply that all students follow their remedial assignment. As we will show subsequently, many students who are assigned to remediation never actually take a remedial course, whereas others who test out of remediation may nonetheless take a remedial course (in some cases, math/science majors have higher remedial cutoffs than the institution-level cutoff we use in our analyses).
Data and Sample
The data for this analysis were provided under a restricted-use agreement with the LUCCS. All students are followed for three years after they were first tested (we can also look at longer-term outcomes for some cohorts). We can track students’ credits, grades, and degree outcomes even if they transfer to another public two- or four-year institution within the same urban public college system.10 Our data include information on all placement exam administrations, so we are able to identify and utilize the scores from the students’ first-test attempt. Finally, our sample includes tested students even if they ultimately did not enroll at any of the LUCCS institutions, enabling us to examine whether remedial placement may impact the enrollment decision itself.
Table 1 provides descriptive information on the full sample of test-takers and main subsamples for the analysis. Column 1 shows that among all test-takers during this time period, 72 percent were assigned to remedial math, 72 percent were assigned to remedial writing, and 38 percent were assigned to remedial reading. Overall, approximately 90 percent were assigned to remediation in one or more subjects.11 This proportion has generally been flat or declining over the sample timeframe except for discrete and substantial jumps when new tests or new cutoffs were implemented.
Variable | All Test-Takers | No Remediation | Remediation in Any Subject | Math Analysis Sample (±4 pts) | Reading Analysis, Failed Writing (±4 pts) | Reading Analysis, Passed Writing (±4 pts)
---|---|---|---|---|---|---
Assigned to Dev Math, College Std. | 0.720 | 0.000 | 0.793 | 0.573 | 0.776 | 0.752 |
Assigned to Dev Writing College Std. | 0.719 | 0.000 | 0.792 | 0.660 | 1.000 | 0.000 |
Assigned to Dev Reading College Std. | 0.378 | 0.000 | 0.418 | 0.303 | 0.481 | 0.442 |
Assigned to Any Dev Ed. College Std. | 0.901 | 0.000 | 1.000 | 0.851 | 1.000 | 0.840 |
Female | 0.578 | 0.562 | 0.580 | 0.561 | 0.569 | 0.683 |
Age | 21.585 | 20.102 | 21.712 | 20.941 | 21.104 | 20.408 |
White, Non-Hispanic | 0.142 | 0.233 | 0.134 | 0.176 | 0.116 | 0.156 |
Black, Non-Hispanic | 0.282 | 0.280 | 0.282 | 0.290 | 0.303 | 0.306 |
Latino | 0.343 | 0.231 | 0.354 | 0.303 | 0.365 | 0.331 |
Asian Pacific Islander | 0.106 | 0.116 | 0.106 | 0.106 | 0.108 | 0.082 |
Other Race | 0.067 | 0.067 | 0.066 | 0.061 | 0.062 | 0.068 |
Language Minority | 0.527 | 0.426 | 0.537 | 0.514 | 0.540 | 0.429 |
Graduated from Local High School | 0.551 | 0.653 | 0.543 | 0.579 | 0.586 | 0.643 |
Years delayed enrollment | 2.664 | 1.838 | 2.728 | 2.269 | 2.182 | 1.728 |
HS GPA (0–100 scale, non-missing) | 73.558 | 75.705 | 73.343 | 73.526 | 72.704 | 73.343 |
HS College Prep Units (non-missing) | 11.737 | 14.452 | 11.509 | 12.158 | 11.642 | 12.531 |
Has HS GPA | 0.904 | 0.942 | 0.901 | 0.913 | 0.907 | 0.934 |
Four-year college is 1st Choice | 0.294 | 0.394 | 0.285 | 0.296 | 0.313 | 0.390 |
Received Pell at Entry (non-missing) | 0.569 | 0.455 | 0.582 | 0.534 | 0.602 | 0.572 |
Missing Pell Data | 0.170 | 0.113 | 0.174 | 0.131 | 0.161 | 0.160 |
Dependent at Entry (non-missing) | 0.737 | 0.837 | 0.729 | 0.774 | 0.776 | 0.787 |
Missing Dependency Data | 0.366 | 0.392 | 0.361 | 0.359 | 0.331 | 0.353 |
Enrolled immediately | 0.750 | 0.837 | 0.743 | 0.789 | 0.773 | 0.787 |
Enrolled w/in 3 | 0.829 | 0.886 | 0.825 | 0.866 | 0.838 | 0.840 |
Took Dev. Math w/in 3 | 0.547 | 0.195 | 0.581 | 0.533 | 0.578 | 0.587 |
Took College Math w/in 3 | 0.367 | 0.642 | 0.342 | 0.481 | 0.352 | 0.385 |
Passed College Math | 0.299 | 0.536 | 0.277 | 0.382 | 0.290 | 0.319 |
C or Higher in College Math | 0.249 | 0.469 | 0.229 | 0.317 | 0.240 | 0.268 |
B or Higher in College Math | 0.176 | 0.363 | 0.158 | 0.217 | 0.163 | 0.171 |
Took Dev. Reading w/in 3 | 0.219 | 0.002 | 0.241 | 0.179 | 0.251 | 0.258 |
Took College English w/in 3 | 0.534 | 0.856 | 0.504 | 0.611 | 0.477 | 0.745 |
Passed College English | 0.451 | 0.742 | 0.424 | 0.519 | 0.408 | 0.628 |
C or Higher in College English | 0.425 | 0.720 | 0.398 | 0.492 | 0.380 | 0.591 |
B or Higher in College English | 0.310 | 0.596 | 0.284 | 0.364 | 0.256 | 0.418 |
Earned AA or BA w/in 3 | 0.092 | 0.220 | 0.080 | 0.115 | 0.084 | 0.136 |
Transferred or Earned Degree w/in 3 | 0.124 | 0.294 | 0.109 | 0.155 | 0.112 | 0.180 |
Still Enrolled at End of Year 3 | 0.240 | 0.191 | 0.245 | 0.244 | 0.253 | 0.205 |
Dropped Out by Year 3 | 0.635 | 0.516 | 0.645 | 0.601 | 0.634 | 0.615 |
Number of terms enrolled w/in 3 | 3.297 | 3.882 | 3.249 | 3.503 | 3.351 | 3.463 |
Tot. Equated Credits Passed in 3 | 29.761 | 39.009 | 28.967 | 33.036 | 29.762 | 31.347 |
Tot. College Credits Passed in 3 | 23.347 | 37.504 | 22.072 | 26.811 | 22.507 | 27.607 |
Took CPE w/in 3 | 0.142 | 0.315 | 0.126 | 0.172 | 0.130 | 0.209 |
Pass CPE w/in 3 | 0.129 | 0.302 | 0.114 | 0.157 | 0.116 | 0.197 |
CPE Highest Score w/in 3 | 42.427 | 44.789 | 41.899 | 42.045 | 41.611 | 42.899 |
Sample Size | 100,250 | 7,592 | 90,342 | 18,724 | 7,049 | 1,374 |
It is important to note that the LUCCS entrants are not typical of community college entrants nationally, with the exception of their gender composition (57 percent female). Nationally, over 60 percent of entrants identify as non-Hispanic white students, the average age is 23.6 years, and during the relevant time period just under 30 percent of community college entrants received Pell Grants.12 In contrast to these national figures, the LUCCS student body reflects the diversity of its urban environment: 34 percent identify their race/ethnicity as Hispanic, 28 percent identify as non-Hispanic black, 14 percent identify as non-Hispanic white, 11 percent identify as Asian/Pacific Islander, and 7 percent identify as another race/ethnicity. Nearly half received Pell Grants at entry, and more than half identified as speaking a primary language other than English.13 The LUCCS entrants, at 21.6 years old, are also younger than the national average, likely reflecting the fact that the LUCCS offers very few vocational/technical certificate programs.14
Table 1 indicates that a substantial proportion of students—17 percent overall—who take a placement test at one of the LUCCS colleges never enroll (or at least still had not enrolled three years after their first test).15 This highlights the importance of looking at initial enrollment as a margin that could be affected by remedial assignment. The average student enrolled for 3.3 semesters over three years, and nearly two thirds (64 percent) had dropped out (not enrolled, no degree) at the end of the three-year follow-up period.16 An additional 24 percent were still enrolled at one of the LUCCS colleges, and the remainder (12 percent) had either completed a degree or transferred to a local public four-year institution. Finally, table 1 indicates that approximately 13 percent of tested students had taken and passed a college proficiency exam (CPE) required for graduation, and those who took the exam scored an average of 42 points (out of 72 possible, with 34 required to pass). It should be noted that with such a low proportion of the sample taking this test, our ability to examine overall impacts on learning is substantially limited (because some students whose learning is impacted may never take this test).
Some of the outcomes in table 1 that will be a focus of our RD analysis merit additional explanation. To examine impacts on actual achievement in college-level courses in the relevant subject, we have created composite outcomes, such as “Passed College Math,” or “Earned B or Higher in College Math,” that take a value of one if the individual ever took the course and received the relevant grade, and zero otherwise (including those who never took the course, took it but dropped out, or took and finished it but received less than the relevant grade). We use a parallel strategy to construct the composite outcome “Ever Passed the CPE.” Using these composite outcomes allows us to sidestep the selection bias that would result if we were to limit our comparison to only those who took the course/test. It does have implications for the interpretation of those results, however—an issue that we will return to later.
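Concretely, the construction can be sketched as follows (not the authors' code; the data frame, column names, and letter-grade scheme are hypothetical stand-ins):

```python
# Composite grade outcomes per the definition above: 1 if the student ever
# took the college-level course and earned at least the relevant grade,
# 0 otherwise (including those who never took the course).
import pandas as pd

df = pd.DataFrame({
    "took_college_math": [1, 1, 0, 1, 0],
    "college_math_grade": ["A", "C", None, "B", None],  # None = never took it
})

b_or_better = {"A", "A-", "B+", "B"}          # hypothetical grade scheme
c_or_better = b_or_better | {"B-", "C+", "C"}

# Coding never-takers as 0 (rather than dropping them) keeps the outcome
# defined for all tested students, sidestepping post-treatment selection.
df["earned_b_plus_cl_math"] = (
    df["took_college_math"].eq(1) & df["college_math_grade"].isin(b_or_better)
).astype(int)
df["earned_c_plus_cl_math"] = (
    df["took_college_math"].eq(1) & df["college_math_grade"].isin(c_or_better)
).astype(int)
```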
In the case of the CPE, we also look for “impacts” on the CPE score for those students who did actually take the test. These estimates are not strictly causal because they are computed conditional on taking the test, which is itself an outcome of interest. Nonetheless, if there is no impact on the probability of taking the exam, the potential for selection bias in the score estimates is limited. (In the case of college grades, we do not examine conditional impacts because the probability of taking a college level course is so clearly impacted.)
Identification Strategy
Table 1 provides mean outcome levels for those students assigned to remediation in any subject, as well as means for those not assigned to any remediation (including students who were exempt from testing). Although comparisons between these two groups may provide useful context, they are unlikely to have any causal interpretation; students who score lower on placement exams are likely to do worse on average than those who score more highly, regardless of the effect of remediation.
Following prior literature by Calcagno and Long (2008) and Martorell and McFarlin (2011), we utilize a RD design to identify the causal effect of remedial assignment for those students who score near the cutoff. The intuition underlying the approach is simple: If we assume the underlying relationship between test scores and future outcomes is continuous and nothing other than the placement policy varies discontinuously at the cutoff, then we may attribute any observed discontinuity in outcomes at the cutoff to the placement policy. For example, although we might expect degree completion to be positively related to test scores, there is no reason other than the placement policy to expect a discontinuous jump (or dropoff) in this relationship at the test score cutoff.
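Formally, with S_i denoting the student's first placement score, c the cutoff below which students are assigned to remediation, and Y_i the outcome, the estimand can be written as follows (a reconstruction in standard notation):

```latex
% Equation 1 (reconstructed in standard notation): the sharp RD estimand for
% the effect of remedial assignment, i.e., the limit approaching the cutoff
% from below minus the limit approaching from above.
\beta_{RD} = \lim_{s \uparrow c} \mathbb{E}\left[ Y_i \mid S_i = s \right]
           - \lim_{s \downarrow c} \mathbb{E}\left[ Y_i \mid S_i = s \right]
```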
In other words, the RD estimator is simply the difference between two regression functions at the cutoff, where one function is estimated by approaching the cutoff from below and the other is estimated by approaching the cutoff from above. Even if there is a systematic relationship between test scores and outcomes, as long as this relationship is continuous, there is no reason to expect the limits in equation 1 to differ except because of the difference in remedial assignment. The estimated RD impact is “local” to the cutoff, meaning the estimate only applies to individuals near the cutoff, unless further assumptions are made.
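In practice, the discontinuity can be estimated with a local linear regression of the following form (a reconstruction consistent with the variable names used in the text, where Above_i indicates scoring at or above the cutoff and ScoreDistance_i = S_i − c):

```latex
% Equation 2 (reconstructed): local linear regression within a bandwidth h of
% the cutoff, with separate slopes on each side; X_i is a covariate vector.
% The coefficient beta_1 captures the discontinuity at the cutoff.
Y_i = \beta_0 + \beta_1 Above_i + \beta_2 ScoreDistance_i
    + \beta_3 \left( Above_i \times ScoreDistance_i \right)
    + X_i'\gamma + \varepsilon_i , \qquad |S_i - c| \le h
```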
The rationale for a local linear approach is that alternative global methods devote much of their effort to estimating the relationship between the test score and outcomes over ranges of the score that are far from the cutoff, and which thus may provide little information about the regression function at the cutoff. As the bandwidth narrows around the cutoff, higher-order terms in the regression function become less necessary and may in fact create excessive sensitivity around the cutoff. Nonetheless, we test the robustness of our results to variations in bandwidth as well as the addition of quadratic terms, and we also examine graphical plots of the data as a check on the specification.
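When pooling across colleges and cohorts, the specification adds school-by-cohort fixed effects (again a reconstruction; theta_{jt} denotes the fixed effect for school j and cohort t):

```latex
% Equation 3 (reconstructed): pooled specification with school-by-cohort
% fixed effects theta_{jt}.
Y_{ijt} = \beta_1 Above_{ijt} + \beta_2 ScoreDistance_{ijt}
        + \beta_3 \left( Above_{ijt} \times ScoreDistance_{ijt} \right)
        + X_{ijt}'\gamma + \theta_{jt} + \varepsilon_{ijt}
```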
In equation 3, note that the cohort fixed effects absorb the main effects of test version. In addition, for those cohorts who took the new math test, the Above and ScoreDistance variables are based on the cutoffs for the more stringent of the two math modules (algebra) and the sample is restricted to those students who passed the easier of the two math modules (numerical skills/pre-algebra).17 The resulting β1 is (nearly) equivalent to running separate RDs within each college and test version, and then taking the weighted average of the impact estimates.18 We also show results separately by test version, because the cutoffs for the two exams fell at different points in the ability distribution.
To be placed into college-level English, students are required to pass both a reading test and a more difficult writing test. The writing test is the stronger determinant of college-level placement, because the majority of those near the cutoff in reading would fail the writing test anyway. Unfortunately, the writing test is graded on a limited 0–12 scale, with each score unit representing a potentially large difference in underlying ability. Moreover, this final scale represents the sum of scores from two exam graders who are encouraged to agree on their scores, resulting in a discontinuous distribution of scores around the cutoff of 7.
Thus, we undertake two analyses in English, both focused on students scoring near the reading test cutoff. First, we limit the sample to those who failed the writing exam and look at the impact of being assigned to remediation in reading plus writing instead of writing alone. Second, we limit the sample to those who passed the writing exam and look at the impact of being assigned to remediation in reading versus being assigned to college-level English. Although this is a relatively rare occurrence (only 19 percent of those who passed the writing exam failed the easier reading exam), our large sample still generates sufficient power to identify meaningful impacts.19 Moreover, this latter analysis enables us to say something about the impact of remediation for students whose test scores may underestimate their true ability (conditional on passing the writing exam, the likelihood that a failing reading score is a “mistake” increases). For both of these analyses, the estimating equation follows equation 2, where Above and ScoreDistance are both computed from the reading score.
Finally, as noted earlier, all of our analyses focus on estimating the effect of remedial assignment rather than the effect of enrollment in remediation per se. This stands in contrast to prior studies, which have used an instrumental variables or “fuzzy” regression discontinuity design in which cutoff-based remedial assignments serve as an instrument for actual remedial enrollment (Calcagno and Long 2008; Martorell and McFarlin 2011). A key assumption needed to justify such an approach is that remedial assignment has no effect on future outcomes except through its effect on remedial enrollment. In many contexts, including the LUCCS, however, this assumption is unlikely to hold. For example, a student assigned to remediation may opt not to enroll at all. A student who does enroll may find his access to college-level coursework restricted, and will be unable to graduate, even if he never enrolls in a remedial course. Thus, we maintain that the assignment itself is the relevant treatment (nevertheless, someone willing to make the necessary assumptions can still ballpark the instrumental variable estimates by dividing any of our impact estimates by our estimated “first stage” impacts on remedial enrollment).
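For example, using the math estimates reported in table 2 (a back-of-the-envelope calculation that is only valid under the exclusion restriction we argue is unlikely to hold):

```latex
% Ballpark Wald/IV estimate: the reduced-form effect on passing college-level
% math (-0.050) divided by the first-stage effect on taking developmental
% math (0.273), both from table 2.
\frac{-0.050}{0.273} \approx -0.18
```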
4. Full Sample Results
Graphical Analysis and Specification Checks
We begin by presenting graphical evidence to support the RD assumptions. First, we plot the distributions of the test scores to check for discontinuities in the density at the cutoff. Discontinuities in the density at the cutoff may suggest either that students are systematically sorting themselves around the cutoff (more of a concern in contexts that allow for retesting) or that some sample selection is taking place after students learn their scores (again, more of a concern in contexts in which researchers lack data on students who never enroll). In any case, we see little visual evidence of any discontinuities in the four distributions presented in figure 1. For each distribution we run McCrary (2008) tests for discontinuities in the density and find none (results available upon request).
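The logic of the density check can be sketched as follows (a crude, binned stand-in for the McCrary local linear density estimator, not the actual test; the scores here are simulated):

```python
# Fit separate linear trends to log bin counts on each side of the cutoff
# and compare the fitted log densities at the cutoff; a gap near zero is
# consistent with a smooth density (no sorting or selection).
import numpy as np

def density_gap(scores, cutoff, bandwidth=10):
    scores = np.asarray(scores)
    fitted = {}
    for side, mask in [
        ("below", (scores < cutoff) & (scores >= cutoff - bandwidth)),
        ("above", (scores >= cutoff) & (scores < cutoff + bandwidth)),
    ]:
        vals, counts = np.unique(scores[mask], return_counts=True)
        slope, intercept = np.polyfit(vals, np.log(counts), 1)
        fitted[side] = intercept + slope * cutoff  # log density at cutoff
    return fitted["above"] - fitted["below"]

# With smoothly distributed scores, the estimated gap should be near zero:
rng = np.random.default_rng(1)
print(density_gap(rng.integers(0, 60, 20000), cutoff=30))
```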
Second, we examine graphical plots of several pretreatment covariates by test score in figures 2 through 5, and run the corresponding regressions to test for discontinuities (in math, although we show plots for the old and new math tests separately, we run a single pooled regression in the form of equation 3, except with each covariate as the dependent variable, no covariates included in the regression, and the regression bandwidth restricted to ±6 points). The regressions confirm what the graphs suggest: no systematic differences in covariates around the cutoff. The only covariate for which we find a statistically significant difference is high school preparatory units around the math cutoff; the regression indicates those below the cutoff have 0.34 additional high school preparatory units (p = 0.03), although a single significant (and substantively small) difference among the dozen covariates tested provides little cause for concern.
Finally, we examine graphical plots of several key outcomes by test score for visual evidence of discontinuities at the cutoff, shown in figures 6 through 9. Overall, the graphs provide little indication that assignment to remediation affects the initial enrollment decision or the number of semesters enrolled over the subsequent three years. There is some hint of possible negative effects on degree/transfer, mirrored by possible positive effects on dropout. The clearest pattern coming out of these graphs is that those scoring just below the cutoffs are substantially more likely to take remedial coursework, and somewhat less likely to take, pass, or do well in college-level coursework in the relevant subject.
As noted earlier, the effect of assignment to remediation on the likelihood of actually taking a remedial course is far less than 100 percent. In math, assignment to remediation increases the likelihood of taking a remedial math course by about 27 percentage points, and assignment to reading remediation increases the likelihood of taking a reading course by 45 to 55 percentage points, depending upon the specification. The higher compliance rate in English than math is due to two factors: (1) some majors have higher major-specific remedial cutoffs in math, so some students may take remedial courses even though they score above the institution-specific cutoff, and (2) more college courses require freshman composition (the first college-level English course) as a prerequisite than require any college-level math course as a prerequisite. These differences in compliance are important to keep in mind when interpreting the different patterns of results for remedial math and remedial reading assignments.
Main Results
Assignment to Remedial versus College-Level Math
Main results from the analysis of remedial math assignment are presented in table 2. Our main specification utilizes a local linear regression with a bandwidth of ±6 points. To test the sensitivity of our results we also show results with and without covariates, with a narrower local linear specification, and with a wider bandwidth that includes quadratic terms for all of the test score distance variables. Overall, the estimated effects are highly robust across these alternative specifications.
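A minimal sketch of how these specifications map to code (the data frame, column names, and covariates are hypothetical stand-ins, not the authors' implementation):

```python
# Local linear RD per equations 2-3: interact the assignment indicator with
# the running variable, restrict to the bandwidth, and absorb school-by-
# cohort cells. Covariates shown (female, age, hs_gpa) are illustrative.
import statsmodels.formula.api as smf

def rd_estimate(df, outcome, bandwidth=6, quadratic=False):
    sub = df[df["score_dist"].abs() <= bandwidth]
    running = "score_dist + I(score_dist**2)" if quadratic else "score_dist"
    formula = (
        f"{outcome} ~ assigned_remedial * ({running})"
        " + C(school):C(cohort) + female + age + hs_gpa"
    )
    fit = smf.ols(formula, data=sub).fit(cov_type="HC1")
    return fit.params["assigned_remedial"], fit.bse["assigned_remedial"]

# The four table 2 columns then correspond to calls such as:
# rd_estimate(df, "enrolled_immediately", bandwidth=6)
# rd_estimate(df, "enrolled_immediately", bandwidth=4)
# rd_estimate(df, "enrolled_immediately", bandwidth=12, quadratic=True)
```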
Outcome | B (Main Spec.) | (SE) | Sig. | B (No Covariates) | (SE) | Sig. | B (Narrow Bandwidth) | (SE) | Sig. | B (Wide Bandwidth) | (SE) | Sig.
---|---|---|---|---|---|---|---|---|---|---|---|---
Enrolled immediately | −0.020 | (0.010) | ** | −0.020 | (0.010) | ** | −0.012 | (0.013) | −0.021 | (0.011) | * | |
Enrolled w/in 3 years | −0.004 | (0.008) | −0.004 | (0.008) | −0.006 | (0.011) | −0.013 | (0.009) | ||||
Took dev math | 0.273 | (0.011) | *** | 0.274 | (0.011) | *** | 0.279 | (0.014) | *** | 0.257 | (0.012) | *** |
Took college-level math | −0.077 | (0.012) | *** | −0.077 | (0.012) | *** | −0.075 | (0.015) | *** | −0.081 | (0.013) | *** |
Passed college-level math | −0.050 | (0.012) | *** | −0.051 | (0.012) | *** | −0.059 | (0.015) | *** | −0.048 | (0.013) | *** |
Earned C or higher in CL math | −0.043 | (0.011) | *** | −0.045 | (0.011) | *** | −0.050 | (0.014) | *** | −0.040 | (0.012) | *** |
Earned B or higher in CL math | −0.021 | (0.010) | ** | −0.023 | (0.010) | ** | −0.023 | (0.012) | * | −0.019 | (0.011) | * |
Earned AA | −0.002 | (0.008) | −0.002 | (0.008) | −0.001 | (0.010) | −0.005 | (0.008) | ||||
Earned AA or transferred | −0.002 | (0.009) | −0.001 | (0.009) | 0.002 | (0.011) | −0.004 | (0.009) | ||||
Still persisting | −0.008 | (0.011) | −0.009 | (0.011) | −0.016 | (0.013) | −0.010 | (0.012) | ||||
Dropped out | 0.010 | (0.012) | 0.011 | (0.012) | 0.013 | (0.015) | 0.015 | (0.013) | ||||
Semesters enrolled | 0.031 | (0.060) | 0.026 | (0.061) | 0.033 | (0.075) | 0.001 | (0.065) | ||||
Total equated credits | 1.034 | (0.718) | 0.966 | (0.732) | 0.865 | (0.902) | 1.032 | (0.775) | ||||
College level credits | 0.058 | (0.635) | 0.046 | (0.645) | 0.007 | (0.796) | −0.029 | (0.683) | ||||
Took college exit exam | 0.003 | (0.009) | 0.004 | (0.009) | 0.003 | (0.012) | 0.005 | (0.010) | ||||
Passed college exit exam | 0.001 | (0.009) | 0.002 | (0.009) | 0.005 | (0.011) | 0.005 | (0.010) | ||||
Score on college exit exam* | −0.085 | (0.443) | −0.051 | (0.446) | 0.615 | (0.559) | 0.312 | (0.491) | ||||
Bandwidth | ± 6 points | ± 6 points | ± 4 points | ± 12 points | ||||||||
Functional form | Local linear | Local linear | Local linear | Local quadratic | ||||||||
School/cohort FE | X | X | X | X | ||||||||
Covariates | X | X | X | |||||||||
Sample size | 25,970 | 25,970 | 17,641 | 49,204 |
Source: Restricted use database covering placement test takers at the LUCCS community colleges.
Notes: All outcomes measured three years after test date unless otherwise noted. Outcomes for college-level math include zeros for those who never took college-level math. Equated credits are a measure of total credits which include remedial coursework. Estimates of effects on exit exam scores are not strictly causal because they are computed only for those students (representing approximately 17 percent of the analysis sample) who took the exam; however, because there is no impact on taking the exam, this limits the concern that such comparisons are biased by differential selection.
***Statistically significant at the 1% level; **statistically significant at the 5% level; *statistically significant at the 10% level.
We find little evidence of discouragement effects on initial enrollment. Although the results indicate assignment to math remediation has a small, statistically significant negative effect on immediate enrollment (−2 percentage points), this fades out such that there is no impact on whether students enroll within three years. Thus, assignment to remediation may delay enrollment, but it does not appear to ultimately discourage students from enrolling. Moreover, some students who delay enrollment may in fact have been recruited into non-credit skills remediation programs. For example, the LUCCS offers several intensive non-credit college transition programs for improving math and English skills, and students recruited into these programs may defer formal enrollment but still remain attached to the institution (this is essentially a form of pre-enrollment remediation and offers another reason why we would not want to assume the only students impacted by remedial assignment are those who enroll in a formal remedial course).
Nor do we find any indication of either development or discouragement effects post enrollment. Assignment to remediation has little influence, either positive or negative, on degree completion, degree/transfer, persistence, dropout, or semesters enrolled. Further, we find no evidence that students learned more in remediation, as measured by outcomes on a standardized proficiency exam required in order to earn a two- or four-year degree. We find no impact on rates of taking or passing the college exit exam, and for the 17 percent of these students who took the exam, we find no impact on their scores (unfortunately, as noted earlier, we cannot examine impacts on learning for students who never take the test).20
The only outcomes for which we see a consistent pattern of impacts are those relating to the specific courses students take—assignment to math remediation increases the probability of taking remedial math by 27 percentage points, and decreases the probability of taking college-level math by about 8 percentage points. Similarly, those assigned to math remediation were 5 percentage points less likely to pass college-level math, 4 percentage points less likely to ever earn a C or better, and 2 percentage points less likely to ever earn a B or better in college-level math. If one is willing to assume the impacts on the B-or-better outcome result purely from the reduction in college-level course-taking, and not from actual negative impacts on the academic potential of students who would have taken college-level coursework anyway, this implies approximately one quarter of the students diverted out of college-level courses could have earned at least a B there. Those assigned to remediation earn slightly more total “equated” credits (including remedial coursework) over three years but this is driven entirely by remedial coursework—that is, there is no impact on college-level credits accumulated.
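The one-quarter figure follows directly from the two point estimates in table 2, under the assumption just stated that the grade impact operates entirely through diverted course-taking:

```latex
% Share of diverted students who would have earned a B or better: the
% B-or-better impact divided by the course-taking impact.
\frac{0.021}{0.077} \approx 0.27
```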
It may seem surprising that assignment to remediation could have such large impacts on college math course-taking without subsequently impacting overall college credit accumulation or degree completion. But we think the explanation here is straightforward: Most students on the margin of remediation are not on the margin of completing an associate's degree, so the individuals deterred from taking college-level math might not have graduated even had they taken it. Similarly, the lack of impact on overall credits suggests that students are able to take other college courses even if they are prevented from taking the “gatekeeper” college math class.
Assignment to Reading and Writing Remediation versus Writing-Only Remediation
This analysis, in which all students are assigned to writing remediation but some are additionally assigned to remediation in reading, finds similarly little evidence of effects on longer-term college outcomes such as degree/transfer, persistence, dropout, and taking/passing the college exit exam (see table 3). Unlike the math analysis, we see no evidence here of diversion effects on the likelihood of taking, passing, or doing well in the relevant college-level course. This may be because students who fail the writing exam and are near the cutoff on the reading exam are unlikely to make it to college-level English regardless of their reading placement. Beyond a large impact on the likelihood of taking remedial reading, the only other positive impact is on ever taking remedial writing. This is likely due to the fact that some colleges at the LUCCS bundle remedial reading and writing into a single course; such courses are identified in our data set as remedial writing. There is a small 3.2 percentage point increase in the likelihood of immediate enrollment—the exact opposite of a discouragement effect—possibly as a result of concerted outreach programs for multiple-remediated students to complete these requirements as quickly as possible. But again, this impact fades out such that there is no enrollment effect, positive or negative, after three years. As a result we are reluctant to over-interpret the small difference in initial enrollment.
Outcome | B (Main Spec.) | (SE) | Sig. | B (No Covariates) | (SE) | Sig. | B (Narrow Bandwidth) | (SE) | Sig. | B (Wide Bandwidth) | (SE) | Sig.
---|---|---|---|---|---|---|---|---|---|---|---|---
Enrolled immediately | 0.032 | (0.016) | * | 0.032 | (0.017) | * | 0.035 | (0.021) | * | 0.013 | (0.018) | |
Enrolled w/in 3 years | 0.010 | (0.014) | 0.009 | (0.015) | 0.018 | (0.018) | 0.005 | (0.016) | ||||
Took dev writing | 0.085 | (0.017) | *** | 0.083 | (0.017) | *** | 0.109 | (0.022) | *** | 0.080 | (0.019) | *** |
Took dev reading | 0.478 | (0.015) | *** | 0.478 | (0.015) | *** | 0.504 | (0.019) | *** | 0.451 | (0.016) | *** |
Took college-level English | −0.006 | (0.019) | −0.006 | (0.020) | 0.021 | (0.025) | −0.016 | (0.021) | ||||
Passed college-level English | −0.006 | (0.019) | −0.007 | (0.019) | 0.012 | (0.024) | −0.011 | (0.021) | ||||
Earned C or higher in CL English | 0.001 | (0.019) | 0.000 | (0.019) | 0.015 | (0.024) | −0.003 | (0.021) | ||||
Earned B or higher in CL English | −0.008 | (0.017) | −0.009 | (0.017) | 0.005 | (0.021) | −0.006 | (0.018) | ||||
Earned AA | −0.005 | (0.011) | −0.006 | (0.011) | −0.016 | (0.013) | −0.005 | (0.011) | ||||
Earned AA or transferred w/in 3 | −0.008 | (0.012) | −0.010 | (0.012) | −0.024 | (0.015) | −0.006 | (0.013) | ||||
Still persisting | 0.002 | (0.017) | 0.002 | (0.017) | 0.042 | (0.022) | * | 0.014 | (0.019) | |||
Dropped out | 0.008 | (0.019) | 0.010 | (0.019) | −0.017 | (0.024) | −0.006 | (0.020) | ||||
Semesters enrolled | 0.061 | (0.096) | 0.047 | (0.099) | 0.134 | (0.122) | 0.082 | (0.105) | ||||
Total equated credits | 1.711 | (1.110) | 1.446 | (1.148) | 2.489 | (1.408) | * | 2.220 | (1.209) | * | ||
College level credits | −0.052 | (0.939) | −0.270 | (0.970) | 0.197 | (1.191) | 0.470 | (1.018) | ||||
Took college exit exam | 0.001 | (0.013) | −0.001 | (0.013) | 0.001 | (0.017) | 0.004 | (0.014) | ||||
Passed college exit exam | 0.003 | (0.013) | 0.002 | (0.013) | 0.003 | (0.016) | 0.008 | (0.014) | ||||
Score on college exit exam* | 0.289 | (0.865) | 0.103 | (0.871) | 0.724 | (1.129) | 1.136 | (0.943) | ||||
Bandwidth | ± 6 points | ± 6 points | ± 4 points | ± 12 points | ||||||||
Functional Form | Local Linear | Local Linear | Local Linear | Local Quadratic | ||||||||
School/Cohort FE | X | X | X | X | ||||||||
Covariates | X | X | X | |||||||||
Sample size | 10,663 | 10,663 | 7,049 | 20,683 |
Source: Restricted use database covering placement test takers at the LUCCS community colleges.
Notes: All outcomes measured three years after test date unless otherwise noted. Outcomes for college-level English include zeros for those who never took college-level English. Equated credits are a measure of total credits which include remedial coursework. Estimates of effects on exit exam scores are not strictly causal because they are computed only for those students (representing approximately 17 percent of the analysis sample) who took the exam; however, because there is no impact on taking the exam, this limits the concern that such comparisons are biased by differential selection.
***Statistically significant at the 1% level; *statistically significant at the 10% level.
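The specification rows at the bottom of the table describe the underlying regression: a local linear model of each outcome on an assignment indicator and the re-centered placement score, with separate slopes on each side of the cutoff, school/cohort fixed effects, and (except in the no-covariates column) student controls. A minimal sketch of that estimator, under assumed column names rather than the authors' actual code, might look as follows:

```python
import statsmodels.formula.api as smf

def rd_estimate(df, outcome, bandwidth=6, covariates=""):
    """Local linear RD within +/- `bandwidth` points of the cutoff.

    Assumed columns (hypothetical names): `score` is the placement
    score re-centered at the cutoff, `below` is 1 if assigned to
    remediation, `school_cohort` indexes school-by-cohort cells.
    """
    sample = df[df["score"].abs() <= bandwidth]
    formula = (f"{outcome} ~ below + score + below:score"
               f" + C(school_cohort){covariates}")
    fit = smf.ols(formula, data=sample).fit(cov_type="HC1")
    return fit.params["below"], fit.bse["below"]

# Main specification (covariates in) vs. the no-covariates check:
# b, se = rd_estimate(df, "took_dev_reading", covariates=" + age + female + hs_gpa")
# b, se = rd_estimate(df, "took_dev_reading")
# The wide-bandwidth column instead uses a local quadratic in `score`.
```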
Assignment to Reading-Only Remediation versus College-Level English
The sample sizes for this analysis, presented in table 4, are substantially smaller because it is relatively unusual for someone to fail the reading test while passing the more difficult writing exam. Given the noise inherent in placement exams, we interpret this unusual pattern as suggestive evidence that these students' reading scores may understate their true ability.21 For this group, we identify large and significant negative effects of remedial assignment on the likelihood of ever taking, passing, or doing relatively well in college-level English courses. In addition to these "diversion" effects, we also see evidence consistent with "discouragement" effects. We find negative effects on college-level credits completed and three-year degree attainment, and a positive effect on dropping out: students placed in remedial reading rather than college-level English are 5 percentage points less likely to earn an associate's degree, are 8 percentage points more likely to drop out, and earn four fewer college-level credits (significant at the 10 percent level). Moreover, although it is not statistically significant, we see a small 3 percentage point decline in immediate enrollment that does not fade out; if anything, it grows slightly larger after three years (though it remains statistically insignificant in all but one specification).
Table 4. Effects of Assignment to Reading-Only Remediation versus College-Level English

| Outcome | Main Specification | No Covariates | Narrow Bandwidth | Wide Bandwidth |
|---|---|---|---|---|
| Enrolled immediately | −0.027 (0.037) | −0.030 (0.037) | −0.025 (0.046) | −0.024 (0.040) |
| Enrolled w/in 3 years | −0.036 (0.033) | −0.044 (0.033) | −0.075* (0.041) | −0.042 (0.036) |
| Took dev writing | 0.007 (0.011) | 0.007 (0.011) | 0.000 (0.015) | 0.017 (0.012) |
| Took dev reading | 0.559*** (0.035) | 0.552*** (0.035) | 0.573*** (0.045) | 0.546*** (0.040) |
| Took college-level English | −0.155*** (0.040) | −0.160*** (0.040) | −0.184*** (0.050) | −0.145*** (0.044) |
| Passed college-level English | −0.125*** (0.043) | −0.134*** (0.043) | −0.143*** (0.055) | −0.107** (0.047) |
| Earned C or higher in CL English | −0.138*** (0.044) | −0.147*** (0.044) | −0.150*** (0.055) | −0.135*** (0.048) |
| Earned B or higher in CL English | −0.109** (0.044) | −0.113** (0.044) | −0.127** (0.055) | −0.097** (0.047) |
| Earned AA | −0.054* (0.031) | −0.058* (0.032) | −0.029 (0.039) | −0.050 (0.034) |
| Earned AA or transferred w/in 3 | −0.047 (0.035) | −0.054 (0.035) | −0.022 (0.044) | −0.049 (0.038) |
| Still persisting | −0.034 (0.036) | −0.038 (0.036) | −0.072 (0.045) | −0.037 (0.040) |
| Dropped out | 0.080* (0.044) | 0.091** (0.044) | 0.090 (0.055) | 0.083* (0.048) |
| Semesters enrolled | −0.236 (0.222) | −0.296 (0.224) | −0.225 (0.278) | −0.251 (0.241) |
| Total equated credits | −2.306 (2.624) | −2.960 (2.687) | −1.521 (3.263) | −1.975 (2.850) |
| College-level credits | −4.228* (2.430) | −4.837* (2.494) | −3.183 (3.023) | −4.058 (2.633) |
| Took college exit exam | −0.029 (0.037) | −0.036 (0.037) | 0.005 (0.047) | −0.020 (0.040) |
| Passed college exit exam | −0.034 (0.036) | −0.040 (0.037) | −0.005 (0.046) | −0.024 (0.039) |
| Score on college exit exam* | 1.352 (1.483) | 1.352 (1.511) | 2.090 (1.937) | 1.553 (1.585) |
| Bandwidth | ± 6 points | ± 6 points | ± 4 points | ± 12 points |
| Functional form | Local linear | Local linear | Local linear | Local quadratic |
| School/cohort FE | X | X | X | X |
| Covariates | X |  | X | X |
| Sample size | 2,122 | 2,122 | 1,374 | 4,381 |
Source: Restricted use database covering placement test takers at LUCCS community colleges.
Notes: All outcomes measured three years after test date unless otherwise noted. Outcomes for college-level English include zeros for those who never took college-level English. Equated credits are a measure of total credits which include remedial coursework. Estimates of effects on exit exam scores are not strictly causal because they are computed only for those students (representing approximately 17 percent of the analysis sample) who took the exam; however, when there is no impact on taking the exam, this limits the concern that such comparisons are biased by differential selection.
***Statistically significant at the 1% level; **statistically significant at the 5% level; *statistically significant at the 10% level.
It is interesting to note that the negative impact on earning a B or higher in college-level English is nearly 11 percentage points, only somewhat smaller than the 15.5 percentage point negative impact on ever taking college-level English (table 4). This suggests that the majority of students who were kept out of college-level English by remedial assignment could have done reasonably well in the course. This may be because, for students who passed the writing exam, a failing score on the reading test may simply reflect measurement error rather than inadequate preparation.
Additional Findings
Under the developmental hypothesis of remediation's impact, students may experience an initial delay in accessing college-level coursework, but the hope is that this investment will pay off over the longer term. It is possible that a three-year follow-up period is too short to observe these hypothesized positive effects, especially because some students may attend part time (though the vast majority of students in this sample, over 85 percent, enroll full time at least for their first semester). Thus, for those students for whom longer follow-up data are available, we examine degree/transfer, persistence, and dropout after five years. We find no evidence of significant effects emerging on these outcomes as we extend the follow-up period. This is unsurprising, as the majority of the sample has already dropped out by the end of the original three-year follow-up period.
5. Heterogeneity by Test Score and Predicted Dropout Risk
In math, we have large enough samples to support two subgroup analyses. We first examine the math results separately for students assigned on the basis of the old versus the new math test, as figures 6 and 7 indicate possibly different patterns of effects. Second, we examine whether the impact of remedial assignment varies with predicted dropout risk, which we define as the probability of dropping out within two years based on pre-existing demographic and high school background characteristics. (We also examined the reading results for these risk subgroups and note our findings below, but the much smaller sample sizes for those analyses make them less conclusive.)
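The risk index underlying these subgroups can be built from pre-treatment data alone: fit a model predicting two-year dropout on demographic and high school background measures, score every tested student, and split at the 25th and 75th percentiles. The paper does not spell out the estimator here, so the sketch below assumes a logit and hypothetical predictor names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per tested student. Predictors are pre-treatment only
# (hypothetical names); per the notes, students with missing high
# school measures get zeroed-out scores plus a missing-data flag.
risk_model = smf.logit(
    "dropped_out_2yr ~ age + female + language_minority"
    " + hs_gpa + hs_math_units + hs_missing_flag",
    data=df,
).fit()
df["dropout_risk"] = risk_model.predict(df)

# Bottom 25% = lower risk, middle 50% = middle, top 25% = highest risk
q25, q75 = df["dropout_risk"].quantile([0.25, 0.75])
df["risk_group"] = pd.cut(
    df["dropout_risk"],
    bins=[0.0, q25, q75, 1.0],
    labels=["lower", "middle", "highest"],
    include_lowest=True,
)
```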
Heterogeneity by Math Test Type
Table 5 presents the results separately for the cohorts tested using the old test (2001–02 through 2003–04) and those tested using the new test (2004–05 through 2007–08). Recall that the new math test has two modules, both of which students must pass, and that our analysis focuses on students who passed the easier module and were near the cutoff on the harder module. This explains why the sample size is smaller for the new test than for the old test, which used a single score to determine placement.
Table 5. Effects of Assignment to Math Remediation, by Math Test Type

| Outcome | Full Sample (Med. Bandwidth) | Old Math Test Subgroup | New Math Test Subgroup |
|---|---|---|---|
| Enrolled immediately | −0.020** (0.010) | −0.018 (0.012) | −0.023 (0.021) |
| Enrolled w/in 3 years | −0.004 (0.008) | 0.000 (0.010) | −0.011 (0.018) |
| Took dev math | 0.273*** (0.011) | 0.242*** (0.013) | 0.400*** (0.022) |
| Took college-level math | −0.077*** (0.012) | −0.066*** (0.014) | −0.119*** (0.025) |
| Passed college-level math | −0.050*** (0.012) | −0.038*** (0.013) | −0.098*** (0.025) |
| Earned C or higher in CL math | −0.043*** (0.011) | −0.035*** (0.013) | −0.077*** (0.024) |
| Earned B or higher in CL math | −0.021** (0.010) | −0.016 (0.011) | −0.041* (0.022) |
| Earned AA | −0.002 (0.008) | 0.003 (0.009) | −0.021 (0.018) |
| Earned AA or transferred | −0.002 (0.009) | 0.007 (0.010) | −0.031 (0.020) |
| Still persisting | −0.008 (0.011) | −0.008 (0.012) | −0.008 (0.021) |
| Dropped out | 0.010 (0.012) | 0.002 (0.014) | 0.041 (0.025) |
| Semesters enrolled | 0.031 (0.060) | 0.061 (0.069) | −0.079 (0.122) |
| Total equated credits | 1.034 (0.718) | 1.520* (0.818) | −0.855 (1.501) |
| College-level credits | 0.058 (0.635) | 0.696 (0.716) | −2.340* (1.360) |
| Took college exit exam | 0.003 (0.009) | 0.011 (0.010) | −0.024 (0.020) |
| Passed college exit exam | 0.001 (0.009) | 0.007 (0.010) | −0.021 (0.020) |
| Score on college exit exam* | −0.085 (0.443) | −0.336 (0.522) | 0.514 (0.844) |
| Bandwidth | ± 6 points | ± 6 points | ± 6 points |
| Functional form | Local linear | Local linear | Local linear |
| School/cohort FE | X | X | X |
| Covariates | X | X | X |
| Sample size | 25,970 | 19,613 | 6,357 |
Source: Restricted use database covering placement test takers at LUCCS community colleges.
Notes: All outcomes measured three years after test date unless otherwise noted. Approximately 76 percent of tested students enroll immediately and 84 percent enroll within three years.
***Statistically significant at the 1% level; **statistically significant at the 5% level; *statistically significant at the 10% level.
Broadly, the pattern of effects is similar regardless of the test in place. We do find, however, that both the impact on ever enrolling in remedial math and the estimated negative effects on some subsequent outcomes appear somewhat larger under the new test. Note that these results do not necessarily imply that the old test was somehow better than the new test, or that the remedial practices in place for the old-test cohorts were somehow more effective (or less harmful) than those in place for the new-test cohorts. Because each set of estimates applies only to students near the cutoff on the relevant test, we interpret the difference in results as indicating heterogeneous effects of remedial assignment for students at different points in the ability distribution. When the math test changed, the cutoff effectively increased, resulting in an 11 percentage point increase in math remedial assignment rates between 2003 and 2004 (from 66 to 77 percent assigned to remediation). Thus, students on the margin of remediation under the new test are likely of higher average ability than those on the margin under the old test. This may also explain why only about 20 percent of students just above the cutoff on the new test enrolled in remediation voluntarily, compared with about 40 percent of students just above the cutoff on the old test.22 Note, however, that the larger negative effects for the new test cannot be explained simply by pointing to a larger "first stage" effect on ever enrolling in remediation: although the impact on remedial course-taking increases by 65 percent, the negative impacts on ever passing, earning a C or higher, or earning a B or higher in college-level math all more than double.
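Those magnitude comparisons follow directly from the old- and new-test columns of table 5; a quick check of the arithmetic:

```python
# Old- vs. new-test point estimates from table 5.
old = {"took_dev_math": 0.242, "passed_cl": -0.038, "c_or_better": -0.035, "b_or_better": -0.016}
new = {"took_dev_math": 0.400, "passed_cl": -0.098, "c_or_better": -0.077, "b_or_better": -0.041}

print(f"First-stage increase: {new['took_dev_math'] / old['took_dev_math'] - 1:.0%}")  # 65%
for k in ("passed_cl", "c_or_better", "b_or_better"):
    print(f"{k}: new/old ratio = {new[k] / old[k]:.1f}")  # each exceeds 2
```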
The interpretation that students near the new cutoff are of higher ability is supported by their background characteristics: students within a few points of the new cutoff have higher high school test scores and more college-preparatory course units, and they are also somewhat younger and more likely to have entered college immediately after high school. Neither the differences in background characteristics nor the differences in estimated impacts are dramatic. Nonetheless, these results suggest that remedial assignment may be more harmful for students of higher ability.
Heterogeneity by Prior Predicted Dropout Risk
Results are presented in table 6.24 As in the full sample, there are no effects on degree completion/transfer, persistence, dropout, college-level credits, or taking/passing the college exit exam. The results suggest high-risk students may be more likely to delay initial enrollment as a result of remedial assignment (whether because of discouragement or because of diversion into noncredit basic-skills interventions). On the college math course-taking outcomes, however, the results generally appear more negative for students with a lower risk of dropping out. All subgroups experience a roughly 8–9 percentage point decline in the likelihood of ever taking college-level math, but the negative effects on ever passing college-level math, or ever earning a C or better in college-level math, are almost twice as large in the lower-risk subgroup as in the other subgroups. Finally, the lower-risk subgroup experiences a significant 4 percentage point decline in the likelihood of ever earning a B or better in college-level math, compared with an insignificant 2 percentage point decline in the medium-risk subgroup and essentially no decline in the highest-risk subgroup.
Table 6. Effects of Assignment to Math Remediation, by Predicted Dropout Risk

| Outcome | Lower-Risk Subgroup (bottom 25%) | Middle-Risk Subgroup (middle 50%) | Highest-Risk Subgroup (top 25%) | Main Effect of Rem. Assignment (pooled model) | Interaction of Assignment × Risk (pooled model) |
|---|---|---|---|---|---|
| Enrolled immediately | −0.021 (0.021) | −0.029** (0.014) | −0.028 (0.021) | −0.017 (0.029) | −0.014 (0.052) |
| Enrolled w/in 3 years | −0.008 (0.017) | −0.013 (0.012) | −0.003 (0.019) | −0.002 (0.025) | −0.012 (0.045) |
| Took dev math | 0.366*** (0.022) | 0.261*** (0.016) | 0.214*** (0.023) | 0.509*** (0.030) | −0.444*** (0.053) |
| Took college-level math | −0.076*** (0.025) | −0.092*** (0.018) | −0.075*** (0.024) | 0.016 (0.033) | −0.192*** (0.059) |
| Passed college-level math | −0.089*** (0.025) | −0.054*** (0.018) | −0.033 (0.022) | −0.081** (0.033) | 0.045 (0.057) |
| Earned C or higher in CL math | −0.086*** (0.025) | −0.037** (0.017) | −0.032 (0.021) | −0.113*** (0.032) | 0.123** (0.056) |
| Earned B or higher in CL math | −0.043* (0.024) | −0.022 (0.014) | −0.006 (0.018) | −0.121*** (0.030) | 0.185*** (0.051) |
| Earned AA | 0.010 (0.020) | −0.009 (0.011) | −0.006 (0.013) | −0.046* (0.024) | 0.080** (0.041) |
| Earned AA or transferred | 0.018 (0.022) | −0.012 (0.013) | −0.011 (0.014) | −0.043 (0.027) | 0.073 (0.046) |
| Still persisting | 0.012 (0.023) | −0.024 (0.016) | −0.008 (0.021) | 0.011 (0.029) | −0.040 (0.051) |
| Dropped out | −0.030 (0.026) | 0.037** (0.018) | 0.021 (0.023) | 0.037 (0.033) | −0.039 (0.058) |
| Semesters enrolled | 0.099 (0.124) | −0.110 (0.086) | 0.053 (0.123) | −0.087 (0.166) | 0.139 (0.296) |
| Total equated credits | 1.289 (1.557) | −0.373 (1.030) | 1.104 (1.386) | −1.802 (2.016) | 4.284 (3.542) |
| College-level credits | 0.337 (1.404) | −1.295 (0.906) | 0.319 (1.198) | −4.320** (1.808) | 7.344** (3.160) |
| Took college exit exam | −0.004 (0.023) | −0.003 (0.013) | 0.013 (0.015) | −0.070** (0.028) | 0.133*** (0.047) |
| Passed college exit exam | −0.007 (0.022) | −0.005 (0.013) | 0.009 (0.015) | −0.078*** (0.027) | 0.144*** (0.046) |
| Score on college exit exam* | 0.358 (0.692) | −0.611 (0.694) | −0.868 (1.312) | 0.886 (1.144) | −2.180 (2.205) |
| Bandwidth | ± 6 points | ± 6 points | ± 6 points | ± 6 points | ± 6 points |
| Functional form | Local linear | Local linear | Local linear | Local linear | Local linear |
| School/cohort FE | X | X | X | X | X |
| Covariates | X | X | X | X | X |
| Sample size | 6,141 | 12,192 | 6,282 | 24,615 | 24,615 |
Source: Restricted use database covering placement test takers at LUCCS community colleges.
Notes: All outcomes measured three years after test date unless otherwise noted. Approximately 76 percent of tested students enroll immediately and 84 percent enroll within three years. The model with risk interaction effects adds to the baseline model a linear control for estimated dropout risk and an interaction between estimated dropout risk and an indicator for remedial assignment. The main effect in this model can be interpreted as the estimated effect of assignment to remediation for a hypothetical student with zero dropout risk. The interaction coefficient then indicates the additional effect of remedial assignment for a hypothetical student with a 100 percent dropout risk. The average two-year dropout risk in the sample is 53 percent. See table 2 for full sample main results.
***Statistically significant at the 1% level; **statistically significant at the 5% level; *statistically significant at the 10% level.
To test the significance of these subgroup differences, and to ensure that they are not an artifact of arbitrary dividing lines between high, medium, and low risk, we also examine heterogeneity using a pooled specification: we simply add to the baseline model a continuous measure of dropout risk and an interaction between this continuous measure and the indicator for remedial assignment. The results from this regression are presented in the last two columns of table 6. Here, the main effect of remedial assignment can be interpreted as the effect for a hypothetical student at zero risk of dropping out. These effects are evidently more negative than the average effect from the baseline model, in some cases significantly so. For example, the results imply that a zero-dropout-risk student assigned to math remediation is 4.6 percentage points less likely to complete an associate's degree than a zero-dropout-risk student not assigned to math remediation. The interaction effects confirm, however, that the impact of remedial assignment is significantly less negative for those with higher predicted dropout risk.
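Concretely, the pooled specification just amounts to adding two terms to the baseline RD regression. A minimal sketch, reusing the hypothetical variable names from the earlier RD snippet:

```python
import statsmodels.formula.api as smf

# Baseline local linear RD terms plus a linear dropout-risk control
# and its interaction with the remedial-assignment indicator.
sample = df[df["score"].abs() <= 6]
fit = smf.ols(
    "earned_aa ~ below + score + below:score"
    " + dropout_risk + below:dropout_risk + C(school_cohort)",
    data=sample,
).fit(cov_type="HC1")

main_effect = fit.params["below"]               # effect at zero dropout risk
interaction = fit.params["below:dropout_risk"]  # additional effect at 100% risk
```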
Indeed, adding the main effect and the interaction effect together suggests the impact of remediation may even be positive for a hypothetical student with a 100 percent risk of dropping out. For the first time, we see significant negative effects for the hypothetical zero-risk student on college-level credit accumulation and on the likelihood of taking/passing the college exit exam, with large, significant, positive coefficients on the interaction term implying net positive effects for the hypothetical 100 percent risk student.25 Still, when effects for these outcomes are estimated for the actual students in the observed top and bottom risk quartiles (with average predicted dropout risk of 67 percent and 38 percent, respectively), the effects are near zero and not statistically significant (see the subgroup columns of table 6).
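As a concrete reading of those coefficients, take college-level credits (main effect −4.320, interaction +7.344 in table 6); the sketch below evaluates the implied effect at zero risk, at 100 percent risk, and at the sample-average risk of 53 percent.

```python
# Implied effect of remedial assignment on college-level credits
# at a given dropout risk, using the table 6 pooled-model estimates.
main, interaction = -4.320, 7.344

def effect_at(risk):
    return main + interaction * risk

print(f"{effect_at(0.00):+.2f}")  # -4.32: hypothetical zero-risk student
print(f"{effect_at(1.00):+.2f}")  # +3.02: hypothetical 100%-risk student
print(f"{effect_at(0.53):+.2f}")  # -0.43: student at the 53% sample-average risk
```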
We also examined the reading-plus-writing and reading-only remediation analyses (tables 3 and 4) by risk subgroup. In neither case did we see a clear or statistically significant pattern of subgroup differences (the same was true when we pooled the subgroups and added the dropout risk index and its interaction with remedial assignment to the regression). This may be because the standard errors are larger for these subsamples, making it difficult to detect true differences; alternatively, the effects of these "treatments" in English may truly be more homogeneous across subgroups.26
6. Discussion
Our results add to prior evidence from similar studies in Texas (Martorell and McFarlin 2011) and Florida (Calcagno and Long 2008) indicating that assignment to remediation does not develop students' skills enough to improve their chances of college-level success. This does not necessarily mean that students who complete a remedial course sequence experience no benefit, but any such benefit is washed out by null or negative effects for students who are assigned to remediation but never complete the sequence.27
Although a necessary caveat to any RD analysis is that the results generalize only to students near the threshold, it is worth noting that our analysis spans two subjects and aggregates impacts across multiple institutions, multiple years, and, in the case of math, multiple tests; in no case do we see any evidence of positive effects on college outcomes. Moreover, our analysis of high-, medium-, and low-dropout-risk students who all scored near the threshold (in math) indicates that although the pattern of effects is somewhat more negative for lower-risk students, we fail to find positive effects even for the highest-risk subgroup, representing the top quartile of predicted dropout risk.
Although our study cannot definitively establish which mechanisms drive the impact of remediation, we find little support for the developmental hypothesis. On the other hand, neither do we find much evidence that assignment to remediation results in the active discouragement some have feared, at least in general. Students just below the remedial cutoffs in the LUCCS, in both math and our main reading analysis, are no less likely to enroll, and they stay enrolled for about the same number of terms as those just above the cutoff. These results suggest that prior studies examining the impact of remedial assignment conditional on enrollment (e.g., Calcagno and Long 2008; Martorell and McFarlin 2011) may not suffer from significant selection bias. If anything, students assigned to remediation take slightly more credits overall, although virtually all of the additional credits are in remedial courses.
For only one group do we find significant evidence consistent with the discouragement hypothesis: students who were assigned to remedial reading even though they passed the writing exam. That these students passed a harder exam while barely failing an easier one suggests their reading scores may understate their true ability. For these students we find large but noisy negative effects on initial enrollment, large diversionary coursework effects, significant declines in degree receipt, and increases in dropout. These particularly large effects may also reflect the fact that remedial assignment in English has greater consequences for access to other college coursework (because freshman composition is a common prerequisite) than does remedial assignment in math. In any case, the results suggest that policy makers should pay more attention to the risk of misassigning prepared students to remediation, which research suggests is relatively common when test scores are the sole determinant of placement (Belfield and Crosta 2012; Scott-Clayton 2012).
Overall, our pattern of results is most consistent with the diversion hypothesis. The primary and most consistent effects we find relate to the specific courses students take while they are enrolled: instead of taking college-level courses in the relevant subject, students take remedial courses. Without additional information, it is difficult to conclusively determine whether this is a bad thing. Although we find no evidence that students learned more in the courses into which they were diverted, we cannot rule this out, because our measure of student learning (college proficiency test scores) is limited by the fact that many students never make it far enough to take the exam. Moreover, we do not know whether high rates of remediation may improve outcomes for nonremediated students, either through direct peer effects or by enabling institutions to reduce crowding in college-level courses.
There are three reasons to worry about these diversionary effects, however. First, the negative impacts in both the math and the second reading analysis on the proportion ever earning a B or better in the relevant college-level course are concerning. They suggest that one quarter of students diverted out of college-level coursework in math, and up to 70 percent of diverted students in English (0.70 = 10.9/15.5, table 4), actually could have done quite well there had they been given the opportunity. Second, diversionary effects appear largest for students with the lowest predicted risk of dropping out, for whom the ability-tracking and peer-effects rationales for such diversion make the least sense.
Finally, if the primary revealed function of remediation is diversion rather than development, remedial courses may not be providing the optimal content. Many remedial courses are designed explicitly to prepare students for college-level coursework in the relevant subject, coursework that our analysis suggests many will never take. For example, a remedial math course may require students to master quadratic equations even though they are unlikely to need that skill either in their future jobs or even in a "college-level" math course (Jaggars and Hodara 2011). A question for future research is what type of remedial curriculum is most valuable for students who may not continue beyond the course.
Notes
1. We use the terms "remedial" and "developmental" interchangeably throughout the paper.
2. Estimate based on Beginning Postsecondary Students (BPS): 2009 transcript data (NCES 2012; tables accessed via QuickStats at nces.ed.gov/datalab/quickstats/createtable.aspx).
3. Credit attainment estimates based on BPS: 2009 transcript data (NCES 2012), which indicate an average of two remedial courses (roughly six "equated" credits) and sixty total credits earned within six years among first-time beginning students entering public two-year colleges. The Delta Cost Project (2012) estimates total expenditures of roughly $12,957 per full-time-equivalent student per year, implying a per-credit cost of roughly $540 (since full time is defined as 24 credits per year). This in turn implies that the cost of remediation is roughly $3,200 per community college entrant (not per remediated student). With over 1.2 million first-time students entering community colleges annually, this suggests national costs of nearly $4 billion annually.
4. The system requested anonymity as a condition of providing access to the data.
5. They find no effects of remediation versus direct assignment to college-level coursework in math or reading, but find that those assigned to lower levels of writing remediation have higher grades (if they ever take a college-level course) than those assigned to higher levels of writing remediation.
6. This is also in line with Manski's (1989) model of college education as experimentation, in which the dropout decision is the result of new information regarding skills and preferences.
7. We are then able to track these individuals if they transfer within these six institutions or to any of the several four-year institutions that are part of the same urban public college system.
8. The COMPASS® suite is a product of ACT, Inc.
9. This is in contrast to Texas's system, analyzed in Martorell and McFarlin (2011), in which students could delay their placement exam until after enrollment.
10. Unfortunately, students who transfer to private institutions, for-profit institutions, or public institutions outside the urban area are not captured in our data.
11. These rates are higher than those observed in the system overall (in which roughly 82 percent are assigned to remediation in at least one subject), because 20 to 25 percent of entrants are exempt from testing in each subject.
12. Authors' calculations using the BPS: 2003–04 data set (NCES 2012).
13. This measure of language minority status is derived from self-reported native language and country of origin as indicated on the college application.
14. These differences are likely to apply to the LUCCS versus the statewide Texas sample examined in Martorell and McFarlin (2011), as well as to the LUCCS versus the statewide Florida sample examined in Calcagno and Long (2008), because the LUCCS sample covers only a single urban area rather than an entire state system. Although those studies do not report all of the same demographic characteristics, certainly along race/ethnicity our sample has a significantly higher minority share.
15. Note that for a small proportion of students who took a placement exam while still enrolled in high school, the three-year follow-up period does not begin until after high school graduation.
16. Students who transferred to private or out-of-state institutions cannot be distinguished from dropouts in our data.
17. Students must pass both math modules to be placed directly into college-level classes, although in practice the vast majority of students who are near the cutoff on the easier module fail the harder module. In other words, the cutoff policy on the harder (algebra) module is the primary determinant of college-level versus remedial assignment.
18. With the minor difference that the pooled regression in equation 3 restricts the college fixed effects and student covariates to have the same effect across cohorts/test versions.
19. We refer to the writing exam as "harder" than the reading exam purely on the basis of the empirical observation that students are much more likely to fail writing than reading.
20. The impacts on proficiency test scores are not strictly causal because they are conditional on a post-treatment outcome (taking the exam). If, however, we assume that any impact of treatment on test-taking is monotonic across individuals, then the fact that there is no overall impact on test-taking enables us to interpret the score differences (or lack thereof) causally (see, e.g., Lee 2009, who shows that under this monotonicity assumption, selection bias goes to zero as the impact of treatment on the probability of selection goes to zero).
21. Although it is easy to think of reasons why a student may underperform on the computer-adaptive reading exam (unfamiliarity with adaptive tests, lack of awareness that a test would be required that day, distractions or time constraints at the test center), it seems less plausible that random noise would cause a student to perform far better than her true ability on the written essay exam.
22. By "voluntarily," we mean that remedial enrollment was not required as a matter of institutional policy for those scoring above the cutoff; however, it is possible that students may nonetheless have been strongly encouraged to take remediation even if they were above the cutoff.
23. For the 10 percent of the sample for whom high school measures were unavailable, we zero out the achievement measures and include a missing-data flag in the regression.
24. Note that we also performed McCrary tests of the continuity of the density function within each risk subgroup to check for evidence of manipulation. We found no evidence of manipulation within any subgroup; results are available upon request.
25. To obtain the impact for a hypothetical 100 percent risk student, the interaction effect and main effect should be added together. To obtain the impact for the average (53 percent) risk student, the interaction effect should be roughly divided in half and then added to the main effect. The minimum predicted dropout risk in our sample was 1 percent and the maximum was 90 percent.
26. For the analysis of reading-and-writing remediation versus writing-only remediation, we see little clear pattern of differences across subgroups, and the interaction coefficient in the pooled regression is not statistically significant for any outcome (though it is generally positive, consistent with the subgroup findings for math). For the analysis of reading remediation versus college-level English (for those who passed the writing exam), the sample sizes within subgroups create very large standard errors, particularly in the low- and high-risk groups. If anything, the pattern of results suggests slightly larger negative effects on the college English course-taking outcomes for the highest-risk subgroup. These additional results are available upon request from the authors.
27. It is difficult to rigorously establish the causal effect of actually completing a remedial course sequence, because completion itself is difficult to directly manipulate. Because policy makers can only directly manipulate remedial assignment, not completion per se, we believe the impact of remedial assignment is the most relevant analysis for policy makers. (Course instructors may be more interested in the effects of completion, but such an analysis would require different, nonexperimental approaches that are beyond the scope of this paper.)
Acknowledgments
This work was funded by the Bill and Melinda Gates Foundation. The authors would like to thank administrators of the community college system analyzed here for providing access to these data; we also thank Thomas Bailey, Shanna Jaggars, Michelle Hodara, Maria Encina Morales, and other members of the Community College Research Center for helpful comments and suggestions.