Abstract

This paper uses fixed effects analyses to estimate differences in student performance under online versus face-to-face course delivery formats in the California Community College system. On average, students have poorer outcomes in online courses in terms of the likelihood of course completion, course completion with a passing grade, and receiving an A or B. These estimates are robust across estimation techniques, different groups of students, and different types of classes. Accounting for differences in instructor characteristics (including through the use of instructor fixed effects) dampens but does not fully explain the estimated relationships. Online course-taking also has implications for downstream outcomes, although these effects are smaller. Students are more likely to repeat courses taken online, but are less likely to take new courses in the same subject following courses taken online.

1.  Introduction

The use of online courses is expanding rapidly at all levels of higher education. In the 1997–98 academic year, there were an estimated 1.08 million student-course enrollments in distance education undergraduate courses (Lewis et al. 1999). By 2006–07, these figures had increased dramatically, to 9.8 million undergraduate distance education enrollments (Parsad and Lewis 2008). The community college sector accounts for roughly half of these enrollments—public two-year colleges documented over 4.8 million enrollments in undergraduate distance learning courses in 2006–07. Moreover, policy makers and administrators increasingly regard online education as important to the long-term strategy of their institutions (Allen and Seaman 2014) because online course offerings are seen as an avenue to potentially cut costs while providing students with flexibility (Bartindale 2013).1 Notably, California governor Jerry Brown has advocated for the expansion of online course offerings (Murphy 2014), and offered grants for the state's community college system to coordinate online course delivery across campuses (Wilson 2013).

Although the drive to incorporate online classes continues to gain momentum, much remains to be learned about how online course-taking affects student achievement. This paper uses a series of fixed effects models, including college-course fixed effects, student fixed effects, and instructor fixed effects, to compare how students’ course performance differs between online and face-to-face (FtF) courses. We find, as others have, that students in FtF courses outperform their peers in online courses across a number of outcomes. We rule out several explanations for this gap related to how students sort into classes.

We further extend the literature by exploring how instructor characteristics contribute to the relationship between online course-taking and performance. We find that although online course enrollment is related to differences in the observed characteristics of instructors to whom students are exposed, these differences account for only a negligible portion of the performance decrement associated with online course-taking. We also find, using instructor fixed effects analyses, that students perform worse in instructors' online courses than in courses the same instructors teach face-to-face. These analyses provide novel evidence that instructor sorting into online versus FtF classes is unlikely to explain away student performance differences across the two formats.

We also examine novel downstream outcomes, including course repetition and future enrollment in other classes within the same subject area. We find that online course-taking is associated with a higher likelihood of repeating the same class, but a lower likelihood of taking new classes in the same subject area, compared with courses taken face-to-face.

Finally, we explore heterogeneity of the impacts of online courses on contemporaneous course performance and future course-taking across different kinds of students and in different subject areas. We find that the negative relationship between online course-taking and contemporaneous performance is highly robust across different subjects and different types of students, although we identify certain instances where the decrements associated with online course-taking are particularly pronounced. The negative relationship between online course-taking and the likelihood of future course-taking in the same subject also holds for all student subgroups that we explore, but the point estimates are nonsignificant for African American students. There is more variation in the strength of the relationship between online enrollment and future same-subject course-taking across course types. Whereas online enrollment is negatively associated with future same-subject course-taking for math, humanities, and social science courses, it is associated with an increased likelihood of future course-taking in information technology courses. Our results have important implications for community college administrators and counselors as they consider how to use online courses as part of a suite of strategies to support students’ needs.

2.  Past Literature

Because online courses are a relatively recent phenomenon in higher education, there is little research on how students fare in these courses compared with traditional face-to-face settings. A 2009 meta-analysis from the U.S. Department of Education found that outcomes were generally positive for students enrolled in online or blended courses versus traditional class settings (Means et al. 2009). Many of the studies in that meta-analysis, however, compared online versus FtF delivery of brief training sessions (some as short as 15 minutes) rather than full courses conducted over an academic term (Jaggars and Bailey 2010). The latter setting is more relevant to postsecondary administrators considering whether to develop or expand online learning options. Moreover, among the seven studies that did compare term-length FtF courses with fully online alternatives, several were in subjects likely to be especially conducive to online learning (e.g., computer programming), and all were conducted at relatively selective universities (Jaggars and Bailey 2010). Outcomes may be different in broad-access institutions that enroll students with generally lower levels of academic achievement and preparation. Furthermore, even well-conducted studies that compare FtF versus online course delivery in semester-long courses (Figlio, Rush, and Yin 2013; Bowen et al. 2014; Adams, Randall, and Traustadottir 2015; Joyce et al. 2015) generally focus on a small subset of classes (e.g., one specific microeconomics, statistics, or microbiology class) and are therefore unable to explore heterogeneity of effects across different subject areas.

A handful of studies have explored the outcomes of students across a wide set of courses in state community college settings (Xu and Jaggars 2011, 2013, 2014; Kaupp 2012; Johnson and Cuellar Mejia 2014; Streich 2014). These studies consistently find that students in FtF courses outperform their peers in online courses, both in terms of course persistence and grades (Xu and Jaggars 2011, 2013; Johnson and Cuellar Mejia 2014; Streich 2014), for most subgroups of students and in most subject areas (Xu and Jaggars 2014; Johnson and Cuellar Mejia 2014; Streich 2014).

Conducted independently of and simultaneously with our own work, Johnson and Cuellar Mejia's (2014) research is particularly relevant to our study, as it establishes online–FtF performance gaps in the California community college setting for a cohort of students entering in fall 2006. Their most tightly controlled estimates come from bivariate probit models that use distance from students' home ZIP code to the college attended as an instrument for online course-taking. Johnson and Cuellar Mejia find a roughly 14 percentage point reduction in the likelihood of course-passing when courses are taken online.

Using a series of fixed effects techniques, we find patterns that are strikingly similar to those found in past literature. We find that online course-taking is negatively associated with contemporaneous course performance in terms of course completion, course passing, and the likelihood of receiving an A or a B. We subject our analyses to several novel tests of whether selection into online courses biases these fixed effects estimates, and find little evidence that it does.

We also include several analyses that are new in this literature. First, we test whether a novel factor—instructor characteristics—may explain the differences in performance in FtF versus online courses. Previous studies in traditional four-year institutions have found that postsecondary instructors have modest but measurable effects on student performance (Carrell and West 2010; Hoffman and Oreopoulos 2013), although findings are mixed as to the type of instructor qualifications that best promote student success. Several researchers have found that exposure to part-time, adjunct instructors is negatively associated with long-term outcomes like graduation rates (Ehrenberg and Zhang 2005; Jacoby 2006; Calcagno et al. 2008), persistence rates (Bettinger and Long 2006), and performance in subsequent classes (Carrell and West 2010). However, others have found benefits of having less-experienced or nontenured instructors on contemporaneous course performance (Carrell and West 2010), and on enrollment (Bettinger and Long 2010) and performance (Figlio, Schapiro, and Soter 2015) in subsequent courses in the same subject area. If instructor qualifications are associated with student achievement, and if students in online courses are exposed to a systematically different mix of instructors than are their peers in FtF courses, these instructor qualifications may explain (or suppress) any observed differences in performance between the two formats. Although there are differences in the characteristics of instructors who teach online and face-to-face, we find the inclusion of instructor characteristics does little to alter the negative relationship between online course-taking and student performance.

Second, we test whether online course-taking may affect students’ subsequent course-taking in the same subject area. We focus on whether students retake the same course (“course repetition”) and whether they take new follow-on courses in the same subject area (“subject persistence”). The course repetition outcome is a natural extension of our study of course passing—if students are differentially likely to pass online versus FtF courses, a tangible cost associated with being exposed to a less effective teaching method may be the greater likelihood of having to expend resources to relearn the same subject matter. The subject persistence outcome represents a less tangible potential cost to a poorer course experience, that is, students may become less interested in pursuing the course subject matter in the future. Past research has suggested that other course characteristics—such as the gender of the professor teaching the course (Carrell, Page, and West 2010) or teacher contract status (Figlio, Schapiro, and Soter 2015)—affect students’ likelihood of taking future courses in the same subject. Studying such downstream course-taking outcomes allows us to contextualize the costs and benefits of online courses.

3.  California Context

We explore the effects of online course-taking in California's community colleges. California is home to the nation's largest community college system, comprising 113 institutions educating over 2.1 million students per year (CCCCO 2016). Online course offerings have expanded steadily in California's community colleges. Distance education in some form has been offered since the 1980s, although the content of distance courses was initially restricted to course offerings that were transferable to four-year institutions (CCCCO 2013). This policy was relaxed in 1994, and, in 2002, the Board of Governors approved regulation changes to allow both credit and non-credit courses to be delivered virtually. As a result, distance education in California's community colleges grew from constituting 0.63 percent of course sessions in 1995–96 to 10.5 percent by 2011–12 (CCCCO 2013). Figure 1 plots the expansion of student enrollment in online courses in California community colleges from 2002–03 through 2011–12.

Figure 1. Expansion of Student Enrollments in Online Courses, 2002–03 through 2011–12.

Campuses have latitude to set their own course offerings, and there is substantial variation in the extent to which campuses use online education. For instance, 56 of the 113 colleges in the system offered at least one degree or certificate fully through virtual delivery in 2011–12, including 296 associate degrees and 291 certificate programs (CCCCO 2013). As might be expected from the uneven adoption of fully online certificate programs, online course-taking is not equally popular in all California community colleges. Two colleges had no online course enrollments among our sample students during the time period studied. Among those with some offerings, the share of enrollments observed in online courses ranged from 0.95 percent at Evergreen Valley College to 56.50 percent at Coastline Community College.

California's community colleges offer two types of online courses. In an asynchronous format, instructor–student interactions are not primarily conducted in real time. Instructors and students may e-mail each other or post to message boards, and lectures may be prerecorded. Students access course content at their own pace. In a synchronous format, instructors and students do not meet in the same place, but all access the course platform simultaneously at pre-arranged times, and there is real-time interaction among the course participants. Asynchronous delivery is the more popular method: over 90 percent of virtual courses were conducted through asynchronous course delivery in the 2011–12 academic year (CCCCO 2013).

Both asynchronous and synchronous courses are offered through a Learning Management System (LMS), the technology platform through which the online course is conducted (Vai and Sosulski 2011). The LMS allows instructors to design, deliver, and manage the online course, providing tools to facilitate content delivery, student–student communication, student–instructor communication, and assessment (Vai and Sosulski 2011). Though several platforms are available, Blackboard, Moodle, and Canvas tend to be the most commonly used by California community colleges. Although instructors may also use features of LMS platforms (e.g., online gradebooks or discussion boards) to support their FtF courses, these tools are likely to feature more prominently in online courses, given the nature of instructional delivery in those courses.

Instructor requirements for training to teach online vary by campus. Results from a 2013 survey of online instructors show that 59 percent of California community colleges required training for instructors to teach online (Freitas and Gold 2015). Colleges incentivized instructors to take training in other ways as well—78 percent of colleges counted online training toward professional development credit and 21 percent counted the training toward unit credit for the salary schedule (Freitas and Gold 2015).

Characterizing the in-class experience offered in online courses is difficult because of the sheer size of the California Community College system. The California Community College Chancellor's Office's (CCCCO 2008) guidance on distance courses specifies that colleges are expected to offer the same standards of course quality as FtF sections, and that instructors are subject to the same qualification requirements as their FtF colleagues. Moreover, the guidelines for the Western Association for Schools and Colleges, which is the accreditor for the California Community College system, require that institutions benchmark online curricula against those offered in FtF courses and programs (WASC 2006). Therefore, although we lack sufficient data (for instance, course syllabi or assignment lists) to determine the extent to which online and FtF courses are strictly comparable, system-wide guidelines incentivize colleges to provide similar experiences across delivery modes to the extent possible.

4.  Analytic Method

Data and Sample

To determine how online course-taking is associated with student performance, we draw on data from the CCCCO. Our sample consists of first-time entrants to the community college system in the 2008–09 academic year. We observe all course enrollments, course outcomes, student characteristics, and instructor characteristics for this cohort over 3,011,232 enrollments in 57,270 courses from 2008–09 through 2011–12. The construction of our dataset is described in a separate online appendix that can be accessed on Education Finance and Policy's Web site at http://www.mitpressjournals.org/doi/suppl/10.1162/edfp_a_00218.

We impose several sample restrictions. We drop physical education and fine arts courses, and courses offered for fewer than one (or more than five) credits. We include only courses taught in FtF lecture or discussion formats, or through the online formats elaborated below. To obtain a more homogeneous sample, we compare students with relatively similar levels of education at the outset of their California Community College careers. We therefore exclude students who already hold an associate of arts or bachelor of arts degree at the time they enter college, students who are taking community college classes while also enrolled in K–12 or continuing education classes, students who have not finished high school and are not currently enrolled in K–12 schools, and students with high school degrees earned outside the United States. We further limit the sample to students between the ages of 18 and 40 years. Finally, because our main intent is to explore how student performance differs by instructional mode, we limit our sample to courses in which both FtF and online options were offered at the same college in the same term. These restrictions narrow our sample from 440,405 unique students to 217,194 students. Appendix table B.1 (available in the online appendix) traces how the sample composition changes as these limitations are imposed.

Measures and Models

Main Independent Variable

Our primary predictor of interest is an indicator (Online) for whether a student took a given course online through either synchronous or asynchronous delivery. For each section of each course offered in each term at each college, we observe the instructional delivery mode. We compare FtF instruction with instruction that took place through either synchronous or asynchronous online delivery modes.

Outcomes

We explore how online course-taking is associated with three contemporaneous course performance outcomes. The first is an indicator for whether a student completed the course. Students are considered to have completed courses if they receive a letter grade (A–F) or a pass or no pass designation. Students who receive an incomplete, withdraw, or are dropped from the class by the instructor are counted as not having completed the course. Students who withdraw because of military obligations receive a distinct grade notation signifying the reason for withdrawal and are excluded from the analysis. Likewise, students who withdraw during the add/drop period—before a course enrollment would appear on their permanent record—are excluded from the analysis. We refer to this as the "completion" outcome.

A second set of analyses captures whether a student completes the course with a passing grade. This outcome variable is coded 1 if students complete the course with an A, B, C, or Pass grade; withdrawals and No Pass, F, and D grades are coded as 0. We refer to this as the “Pass/A/B/C” or “course passing” outcome. This is perhaps our most policy-relevant outcome, as receipt of an A, B, C, or Pass grade allows students to transfer credits to four-year institutions.2

Our final course performance outcome is an indicator for whether students receive an A or B grade. Because it is ambiguous how future institutions might view Pass grades (i.e., whether they would equate a Pass to a C or view it as more akin to an A or B), we exclude students graded on a Pass/No Pass basis for this outcome.3 We refer to this as the "A/B receipt" outcome. We introduce two outcomes related to future course-taking later in the paper.
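
To make the coding concrete, the following is a minimal pandas sketch of how the three outcome indicators could be constructed from a grade field; the column name and grade notations (e.g., "W" for withdrawal, "P"/"NP" for Pass/No Pass, "I" for incomplete) are illustrative assumptions rather than the CCCCO's actual coding scheme.

```python
import pandas as pd

# Hypothetical enrollment records; "grade" holds an assumed grade notation.
df = pd.DataFrame({"grade": ["A", "B", "C", "D", "F", "P", "NP", "W", "I"]})

completed = {"A", "B", "C", "D", "F", "P", "NP"}  # letter grade or Pass/No Pass
passed = {"A", "B", "C", "P"}                     # transferable passing result

# Outcome 1: completion (withdrawals "W" and incompletes "I" coded 0).
df["complete"] = df["grade"].isin(completed).astype(int)

# Outcome 2: Pass/A/B/C (withdrawals, No Pass, F, and D coded 0).
df["pass_abc"] = df["grade"].isin(passed).astype(int)

# Outcome 3: A/B receipt; Pass/No Pass enrollments are dropped, not coded 0.
ab = df[~df["grade"].isin({"P", "NP"})].copy()
ab["a_or_b"] = ab["grade"].isin({"A", "B"}).astype(int)
```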

Controls

Student controls include both time-variant and time-invariant variables. Time-variant controls include student age-at-term, the number of units a student takes in a given term, and student financial aid status. The financial aid status variable captures whether the student receives a Board of Governors tuition waiver. The waiver is needs-based, and virtually all students receiving any financial aid receive the waiver as part of their financial aid packages.

Time-invariant student-level controls include a vector of race indicators (Hispanic, Asian, black, other; white is omitted), an indicator for whether a student is female, and the type of prior educational credential received at entry into the California Community College system (high school diploma, GED, or California High School Proficiency credential). We also create a vector of indicators on the academic goals that students report to the college. Students are coded as having goals to transfer to a four-year college (with or without an associate's degree), to pursue an associate's degree with no intent to transfer, to further vocational goals, to pursue personal interests, to improve basic skills, or as having unknown goals.

We also include indicators for the course skill level. Courses are coded as basic-skills level (remedial), transferrable to the California State University (CSU) system only, transferrable to the CSU and University of California systems, or nontransferable but not basic-skills level.
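
As an illustration, the categorical controls described above could be expanded into indicator vectors with pandas; the field names and category labels are hypothetical stand-ins for the CCCCO variables.

```python
import pandas as pd

# Hypothetical records carrying the categorical fields described above.
records = pd.DataFrame({
    "race": ["white", "hispanic", "asian", "black", "other"],
    "goal": ["transfer", "aa_no_transfer", "vocational", "personal", "unknown"],
    "skill_level": ["basic", "csu_only", "csu_uc", "nontransferable", "csu_uc"],
})

# Expand each categorical field into indicator columns; white is the
# omitted race category, as in the paper's specification.
controls = pd.get_dummies(records, columns=["race", "goal", "skill_level"])
controls = controls.drop(columns="race_white")
```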

Models

In an ideal world, we would evaluate the causal effect of taking a course online versus face-to-face using a controlled experiment in which we could randomly assign students taking a randomly chosen set of courses to online versus FtF course sections and observe their relative course performance. Such an experiment would provide strong internal as well as external validity of estimates of the effects of online course-taking. However, such an experiment is not feasible in the real world on a wide scale. We therefore use quasi-experimental techniques to build up progressively better-controlled models to explore how online course-taking is associated with student outcomes. A naïve approach would be to simply estimate an ordinary least squares regression:
$$Y_{ijcst} = \alpha + \beta\,\text{Online}_{jcst} + \text{Course}_{jcst}\gamma + \text{Student}_{it}\delta + \tau_t + \varepsilon_{ijcst} \tag{1}$$

where $Y_{ijcst}$ represents the outcome of interest for student $i$ observed in section $j$ of course $c$ at college $s$ in term $t$, $\text{Online}_{jcst}$ indicates whether the student enrolled in an online section, $\text{Course}_{jcst}$ is a vector of other course characteristics, $\text{Student}_{it}$ is a vector of time-varying and time-invariant student characteristics, $\tau_t$ is a vector of term fixed effects that index the academic term and year that a course was offered, and $\varepsilon_{ijcst}$ is an independently and identically distributed error term.
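
To make the estimation concrete, the following is a minimal sketch of equation 1 estimated as a linear probability model on simulated data; the variable names are illustrative assumptions, term fixed effects enter via C(term), and standard errors are clustered by college, as described below.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "pass_abc": rng.integers(0, 2, n),       # course-passing outcome
    "online": rng.integers(0, 2, n),         # Online indicator
    "basic_skills": rng.integers(0, 2, n),   # an example course control
    "age_at_term": rng.uniform(18, 40, n),   # an example student control
    "term": rng.choice(["F08", "S09", "F09"], n),
    "college_id": rng.integers(0, 109, n),
})

# Naive OLS (equation 1): term fixed effects via C(term); cluster-robust
# standard errors at the college level, as in all models in the paper.
res = smf.ols(
    "pass_abc ~ online + basic_skills + age_at_term + C(term)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["college_id"]})
print(res.params["online"], res.bse["online"])
```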
This approach raises concerns about bias on two levels, however. First, online enrollment may be concentrated in courses that are either more or less challenging than the average FtF course. In other words, our estimates might be biased because of sorting in how online courses are offered across different types of classes and among different institutions. To address this concern, we introduce college-course fixed effects ($\pi_{cs}$):

$$Y_{ijcst} = \alpha + \beta\,\text{Online}_{jcst} + \text{Course}_{jcst}\gamma + \text{Student}_{it}\delta + \tau_t + \pi_{cs} + \varepsilon_{ijcst} \tag{2}$$

This approach allows us to compare students taking the same courses in the same schools but through different delivery modes.4
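
With roughly 6,200 college-course cells, the fixed effects in equation 2 are more practically absorbed by a within (demeaning) transformation than by explicit dummy variables. A minimal sketch on simulated data with hypothetical identifiers follows; the remaining controls would be demeaned in the same way.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 20_000
df = pd.DataFrame({
    "pass_abc": rng.integers(0, 2, n).astype(float),
    "online": rng.integers(0, 2, n).astype(float),
    "college_course": rng.integers(0, 6_200, n),  # college-by-course cell
    "college_id": rng.integers(0, 109, n),
})

# Demeaning the outcome and regressors within college-course cells absorbs
# the pi_cs fixed effects without building thousands of dummy columns.
cols = ["pass_abc", "online"]
dm = df[cols] - df.groupby("college_course")[cols].transform("mean")

res = sm.OLS(dm["pass_abc"], dm[["online"]]).fit(
    cov_type="cluster", cov_kwds={"groups": df["college_id"]}
)
print(res.params["online"])
```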

Second, students who opt into online sections of a course may systematically differ from their peers in FtF sections of the same course. For instance, say that students who prefer FtF courses are more engaged with college life in general and that engagement is correlated with performance either positively (e.g., if engagement means students are more motivated to do well) or negatively (e.g., if engagement means that students are distracted by other college activities). These differences across the types of students who are prone to enroll in FtF versus online course sections would bias comparisons of the relative performance of online versus FtF students. To address this possibility, we use student fixed effects ($\theta_i$):

$$Y_{ijcst} = \alpha + \beta\,\text{Online}_{jcst} + \text{Course}_{jcst}\gamma + \text{Student}_{it}\delta + \tau_t + \theta_i + \varepsilon_{ijcst} \tag{3}$$

This method allows us to hold the student (and therefore their generic "taste" for online courses) constant, and compare a student's performance in the classes she takes online with her own performance in FtF classes. In our initial models, we follow past literature (Xu and Jaggars 2014) by including college and subject fixed effects to control for college-level and subject-level differences in these specifications.

This method still raises a number of concerns, however. Most obviously, students who are observed in both types of classes might be opting to enroll in online versus FtF courses based on criteria that are correlated with the outcomes in which we are interested. For instance, perhaps students are more likely to enroll in FtF classes when they anticipate that the material will be especially challenging and they want the opportunity to ask questions of instructors in person. Alternatively, perhaps students are more likely to enroll in online sections of courses that they anticipate will be difficult—for instance, if they believe that they will retain information less well if it is delivered in a lecture that they cannot repeat and review at their convenience. If students make decisions about online versus FtF enrollment with an eye to issues that are likely to be correlated with their performance, our student fixed effects estimates will still suffer from bias. We explore the extent to which our estimates are likely to suffer from such bias in our results section. We also address these concerns by estimating a final set of fixed effects models that simultaneously estimate both student and college-course fixed effects:
$$Y_{ijcst} = \alpha + \beta\,\text{Online}_{jcst} + \text{Course}_{jcst}\gamma + \text{Student}_{it}\delta + \tau_t + \theta_i + \pi_{cs} + \varepsilon_{ijcst} \tag{4}$$

In effect, this specification allows us to determine whether, on average, students’ course-demeaned grades are higher or lower in classes they take online relative to their own (course-demeaned) performance in FtF classes.5
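
Because the student and college-course fixed effects are non-nested, a single demeaning pass cannot remove both. One standard approach, sketched below on simulated data with hypothetical identifiers, alternates demeaning over the two groupings until convergence and then applies the Frisch–Waugh–Lovell logic to recover the Online coefficient; this illustrates the mechanics rather than reproducing the authors' exact estimator.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 20_000
df = pd.DataFrame({
    "y": rng.normal(size=n),                        # course outcome
    "online": rng.integers(0, 2, n).astype(float),
    "student": rng.integers(0, 2_000, n),
    "college_course": rng.integers(0, 500, n),
})

def double_demean(s: pd.Series, iters: int = 100, tol: float = 1e-10) -> pd.Series:
    """Alternately sweep out student and college-course means until the
    series is (to tolerance) orthogonal to both sets of fixed effects."""
    x = s - s.mean()
    for _ in range(iters):
        x_prev = x.copy()
        x = x - x.groupby(df["student"]).transform("mean")
        x = x - x.groupby(df["college_course"]).transform("mean")
        if (x - x_prev).abs().max() < tol:
            break
    return x

y_t = double_demean(df["y"])
x_t = double_demean(df["online"])
beta = (x_t * y_t).sum() / (x_t**2).sum()  # two-way within estimate for Online
print(beta)
```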

In order to account for the possibility that student outcomes may be correlated within institutions, all models are estimated using robust standard errors clustered at the college level.

5.  Results

Descriptive Results

Table 1 presents descriptive statistics on those who take online courses. Statistics are shown for three groups: the full sample, FtF-only students (students who never take online courses), and ever-online students (students observed enrolling in at least one online course). Observations represent unique student counts. Female, white, and Asian students are all disproportionately likely to be in the ever-online group relative to FtF-only. Ever-online students are less likely to ever enroll in basic skills (remedial) courses, and are more likely to state that their primary goal is to transfer to a four-year college than are students who take courses only face-to-face. Ever-online students have higher first-term GPAs and attempt more units in their first term, on average, than FtF-only students. While these statistics suggest that ever-online students may be a better-prepared group on average than FtF-only students, they are more likely to receive need-based aid.

Table 1.
Student Characteristics, Students First Enrolled in 2008–09 School Year
Full Sample (1)   FtF Only (2)   Ever Online (3); cells report mean (SD)
Precollege student characteristics   
Age at first CCC term, years 20.31 20.02 20.73 
 (4.19) (3.85) (4.61) 
Female 0.52 0.48 0.57 
 (0.50) (0.50) (0.50) 
Hispanic 0.36 0.41 0.30 
 (0.48) (0.49) (0.46) 
White 0.31 0.28 0.36 
 (0.46) (0.45) (0.48) 
Asian 0.08 0.08 0.10 
 (0.28) (0.26) (0.30) 
Black 0.10 0.10 0.09 
 (0.29) (0.30) (0.29) 
Other race 0.15 0.14 0.15 
 (0.35) (0.35) (0.36) 
High school diploma 0.93 0.93 0.92 
 (0.26) (0.26) (0.26) 
GED 0.06 0.06 0.06 
 (0.24) (0.24) (0.25) 
In-college student characteristics 
Ever takes basic skills courses 0.42 0.44 0.39 
 (0.49) (0.50) (0.49) 
Ever receives financial aid 0.58 0.56 0.60 
 (0.49) (0.50) (0.49) 
First-term GPA 2.19 2.04 2.40 
 (1.28) (1.30) (1.22) 
Units attempted first term 9.24 9.05 9.51 
 (4.24) (4.14) (4.37) 
Modal goal: Transfer 0.54 0.52 0.56 
 (0.50) (0.50) (0.50) 
Modal goal: AA no transfer 0.06 0.06 0.06 
 (0.23) (0.23) (0.23) 
Modal goal: Vocational 0.08 0.08 0.07 
 (0.27) (0.28) (0.26) 
Modal goal: Unknown 0.28 0.29 0.27 
 (0.45) (0.45) (0.44) 
Modal goal: Personal interest 0.04 0.04 0.04 
 (0.20) (0.20) (0.19) 
Modal goal: Basic skills 0.01 0.01 0.01 
 (0.09) (0.10) (0.09) 
Unique students 217,194 128,851 88,343 

Note: CCC: California community college.

Source: Authors’ calculations from California Community College Chancellor's Office data.

Table 2 presents the course characteristics of sections that are taught face-to-face or online. Descriptively, we see that students in online sections have significantly lower completion rates, significantly lower rates of course passing (with an A/B/C or Pass grade) and significantly lower rates of A or B receipt. Online courses are slightly less likely to be basic skills status, and more likely to confer credits that are transferable to four-year colleges. The share of classes offered during the summer session is over twice as high for online courses as for FtF courses. The distribution of courses across subject areas differs for the two instructional modes as well (see table B.2 in the online appendix). For instance, business and management courses represent only about 5 percent of course enrollments in FtF sections, but over 10 percent of online enrollments. Conversely, subjects like math and humanities are underrepresented in online enrollments relative to FtF enrollments.

Table 2.
Course Characteristics, Students First Enrolled in 2008–09
All Courses (1)   FtF Course Sections (2)   Online Course Sections (3); cells report mean (SD)
Contemporaneous course outcomes    
Completion rate (%) 83.63 84.58 78.99 
 (37.00) (36.12) (40.74) 
Pass/A/B/C rate (%) 61.41 62.52 55.98 
 (48.68) (48.41) (49.64) 
A/B receipt rate (%) 42.20 42.37 41.38 
 (49.39) (49.41) (49.25) 
Future course-taking outcomes    
Course repetition rate (%) 11.19 11.30 10.65 
 (31.52) (31.66) (30.85) 
Subject persistence rate (%) 53.62 54.94 46.21 
 (49.87) (49.76) (49.86) 
Share of courses that are:    
Basic skills status (%) 5.00 5.34 3.30 
 (21.79) (22.48) (17.87) 
Transferrable to UC or CSU systems (%) 73.50 73.31 74.46 
 (44.13) (44.24) (43.61) 
Transferrable only to CSU system (%) 8.02 6.83 13.81 
 (27.16) (25.23) (34.50) 
Share of courses offered in:    
Fall term (%) 48.07 49.04 43.32 
 (49.96) (49.99) (49.55) 
Spring term (%) 45.45 45.30 46.21 
 (49.79) (49.78) (49.86) 
Winter term (%) 0.58 0.60 0.51 
 (7.62) (7.71) (7.14) 
Summer term (%) 4.47 3.71 8.19 
 (20.67) (18.91) (27.42) 
Course enrollments 953,933 792,257 161,676 

Notes: Observations include course enrollments for the cohort entering in 2008–09. Winter terms are offered only under the quarter system. UC: University of California; CSU: California State University.

Source: Authors’ calculations from California Community College Chancellor's Office data.

Main Results

To test how online course-taking is associated with student outcomes, we build up a series of models using progressively stronger designs. Table 3 presents these results, building up to a model that estimates equation 1. Each cell represents the coefficient of the Online indicator variable in a model estimating the dependent variable specified in the row label. To get a sense of raw comparisons, column 1 presents the bivariate relationship between online course enrollment and course outcomes. The bivariate results confirm the comparisons presented in table 2—students are significantly less likely to complete courses when they are taken online and less likely to achieve a successful (Pass/A/B/C) result. The likelihood of receiving an A or B (versus withdrawing or receiving a C, D, or F), however, is not significantly different between the two types of classes once we correct the standard errors for within-school clustering.

Table 3.
Association Between Online Course-taking and Student Outcomes: Multivariate Regression Models
Outcome   (1)   (2)   (3)
Complete −0.056*** −0.061*** −0.061*** 
 (0.004) (0.003) (0.003) 
Pass ABC −0.065*** −0.085*** −0.101*** 
 (0.006) (0.005) (0.005) 
A or B −0.010 −0.031*** −0.060*** 
 (0.006) (0.006) (0.006) 
Term fixed effects No Yes Yes 
Course controls No Yes Yes 
Student controls No No Yes 
Colleges 109 109 109 
College-courses 6,200 6,200 6,200 
Students 213,568 213,568 213,568 
Student-course-terms 953,933 953,933 953,933 

Notes: Provided as coefficient (within-college correlation robust standard error). Sample limited to first-time students entering in the 2008–09 academic year observed in college-courses offered in both formats in the same term. Course controls include basic skills status and whether the course transfers to the University of California or California State University systems. Student controls include age at term, units enrolled at term, financial aid receipt at term, sex, race, high school credential type, declared academic goal, and whether the student is ever observed in any basic skills courses. Missing variable dummies included. Student-course-term numbers are for the Complete outcome variable. The Pass/A/B/C and A/B models include 953,933 and 933,125 student-course-term observations, respectively. R2 statistics for column 3 are 0.016, 0.05, and 0.056 for the Complete, Pass/A/B/C, and A/B outcomes, respectively.

Source: Authors’ calculations from California Community College Chancellor's Office data.

***Significant at the 0.01 level.

Column 2 adds controls for course characteristics and fixed effects for the term that a course is taken. The results for course completion and course passing remain very similar to the bivariate specification presented in column 1, although the magnitudes of the coefficients increase slightly. The coefficient for A/B receipt also grows in magnitude and becomes significantly and negatively associated with course grade in this specification. We obtain similar results in column 3, which adds time-variant and time-invariant student-level controls. The A/B receipt coefficient nearly doubles in magnitude, but the basic pattern of results is the same: Online course-taking is associated with significantly worse results across all three outcomes.

Because our modest course controls may not fully remove the confounding influence of differences in the characteristics of courses that disproportionately enroll students online, our next set of analyses incorporates course-by-college fixed effects, as in equation 2. The substantive results, presented in panel A of table 4, are slightly greater in magnitude than those presented in table 3. Online course enrollment is associated with a 6.8 percentage point decrease in the likelihood that a student will complete a course, a 10.9 percentage point decrease in the likelihood that a student will pass a course, and a 7.5 percentage point reduction in the likelihood that a student will pass with an A or B. These results are all statistically significant.

Table 4.
Association Between Online Course-taking and Student Outcomes: College-Course Fixed Effects Models
Panel A. Main Results
Complete (1)   Pass/A/B/C (2)   A or B (3)
Online −0.068*** −0.109*** −0.075*** 
 (0.003) (0.005) (0.005) 
College-course fixed effects 
Colleges 109 109 109 
College-courses 6,200 6,200 6,168 
Students 213,568 213,568 211,724 
Student-course-terms 953,933 953,933 933,125 
R2 0.048 0.088 0.105 
Panel B. Falsification Test 
 Complete Pass/A/B/C A or B 
 (1) (2) (3) 
Takes online in future 0.001 0.016*** 0.018*** 
(through Fall 2009) (0.002) (0.004) (0.004) 
College-course fixed effects 
Colleges 105 105 105 
College-courses 18,689 18,689 17,924 
Students 77,334 77,334 76,049 
Student-course-terms 231,931 231,931 214,835 
R2 0.135 0.169 0.205 

Notes: Provided as coefficient (within-college correlation robust standard error). Sample limited to first-time students entering in the 2008–09 academic year. Panel A limited to students observed in college-courses offered in both formats in the same term. Panel B includes students who are observed only in FtF courses in Fall 2008, and who persist through Fall 2009, but includes all FtF courses (whether offered in both formats or not). Term fixed effects, student controls, course controls, and missing variable dummies included. Course controls include basic skills status and whether the course transfers to the University of California or California State University systems. Student controls include age at term, units enrolled at term, financial aid receipt at term, sex, race, high school credential type, declared academic goal, and whether the student is ever observed in any basic skills courses.

Source: Authors’ calculations from California Community College Chancellor's Office data.

***Significant at the 0.01 level.

Because the results presented in table 4 include college-course fixed effects, they should control for the possibility that the types of courses that disproportionately enroll students in online sections are systematically more or less difficult than courses that disproportionately enroll students face-to-face. They may still suffer from bias, however, if the types of students who have a stronger natural “taste” for online courses are also more (or less) likely to perform well in college courses. We address these concerns in two ways.

First, we explore the extent to which student sorting may bias our estimates. We explore student performance in courses in the Fall term of 2008 among students who enrolled only in FtF courses in that term, and include an indicator for whether a student is observed in an online course in future terms. This indicator should not be associated with course performance in the current term unless there is selection into online course sections on unobservable dimensions not accounted for by the student and course-college controls that we currently include.6 We limit the sample in these models to students who persist through at least two more terms (Spring and Fall 2009) to ensure that the Future Online indicator is not picking up a differential level of persistence among students, and the Future Online indicator accordingly applies only to those two terms. At the same time, we broaden the range of courses to include the full set of FtF courses, rather than only courses offered in both formats.
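
A sketch of how such a Future Online indicator could be constructed from an enrollment panel follows; the term labels, column names, and the simple persistence rule via an inner join are illustrative assumptions.

```python
import pandas as pd

# Hypothetical panel: one row per student-course-term enrollment.
enr = pd.DataFrame({
    "student": [1, 1, 1, 2, 2, 3],
    "term":    ["F08", "S09", "F09", "F08", "S09", "F08"],
    "online":  [0, 1, 0, 0, 0, 0],
})

# Any online enrollment in Spring or Fall 2009 flags future online-taking.
future = (enr.query("term in ['S09', 'F09']")
             .groupby("student")["online"].max()
             .rename("future_online").reset_index())

# Attach the flag to Fall 2008 FtF records; the inner join crudely stands
# in for the paper's restriction to students who persist through Fall 2009.
placebo_sample = enr.query("term == 'F08' and online == 0").merge(
    future, on="student", how="inner"
)
print(placebo_sample)
```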

Our results provide little evidence of sorting into online courses in a way that explains away our estimates (table 4, panel B). Future online course-taking is not related to course completion among FtF-only students taking courses in Fall 2008, but is positively predictive of the other two outcomes. Note, however, that these associations are in the opposite directions of the main results, suggesting that, if anything, sorting into online courses is positively associated with skill. The magnitudes of these estimates are very modest. Because online options are only available for a subset of courses, we also tested alternative specifications that limited the sample to students whose future studies included courses where both online and FtF options were available. These results were substantively similar to those presented in panel B. This test suggests that our course-by-college fixed effects results in table 4, panel A, are conservative estimates of the negative association between online course-taking and course performance.

The second way we address concerns that student sorting into online courses may contaminate our results is to estimate models using student fixed effects. These models compare a student's performance in online courses with her own performance in FtF courses.7 For our initial pass using student fixed effects, we model equation 3, dropping the college-course terms and substituting college and subject fixed effects.8

The pattern of results using student fixed effects estimation (table 5) is substantively similar to that shown in tables 3 and 4, although the magnitudes of the coefficients grow under this specification. The results suggest that students are 8.4 percentage points less likely to complete, and 14.5 percentage points less likely to pass, the online courses in which they enroll compared with the courses they take through FtF instruction. They are 11.0 percentage points less likely to receive A or B grades in online courses than in FtF courses.

Table 5.
Association Between Online Course-taking and Student Outcomes: Student Fixed Effect Models
Panel A. Main Results
Complete (1)   Pass/A/B/C (2)   A or B (3)
Online course −0.084*** −0.145*** −0.110*** 
 (0.004) (0.006) (0.006) 
Student fixed effects 
Students 213,568 213,568 211,724 
Student-course-terms 953,933 953,933 933,125 
R2 0.379 0.475 0.473 
Panel B. Association Between Online Course-taking and Student Outcomes: 
Student-Term Fixed Effects 
 Complete Pass/A/B/C A or B 
 (1) (2) (3) 
Online course −0.074*** −0.121*** −0.090*** 
 (0.006) (0.008) (0.010) 
Student-Term Fixed Effects 
Students 213,568 213,568 211,724 
Student-Course-Terms 953,933 953,933 933,125 
R2 0.749 0.796 0.786 

Notes: Provided as coefficient (within-college correlation robust standard error). Sample limited to first-time students entering in the 2008–09 academic year observed in college-courses offered in both formats in the same term. Term fixed effects, school fixed effects, subject fixed effects, student controls, course controls, and missing variable dummies included. Course controls include basic skills status and whether the course transfers to the University of California or California State University systems. Student controls include age at term, units enrolled at term, financial aid receipt at term, sex, race, high school credential type, declared academic goal, and whether the student is ever observed in any basic skills courses.

Source: Authors’ calculations from California Community College Chancellor's Office data.

***Significant at the 0.01 level.

Because we are still concerned that the factors that impel students to enroll in FtF versus online courses may bias our estimates, we explore whether we can predict characteristics of a given course based on whether we know that a student has opted for online versus FtF enrollment. In these specifications, course characteristics serve as dependent variables. We retain our main course characteristic controls (basic skills level, transfer eligibility, and subject fixed effects). Although controlling for those measures in our main specifications should allow us to adjust somewhat for the possibility that classes students opt into for online instruction are more or less “difficult” than the FtF classes in which they enroll, we explore four new measures that may be correlated both with course performance and students’ decisions to select into online sections.

Our first three measures capture the average student performance in the courses in which they enroll. Because we want to eliminate the influence of the student's own performance, or of any shocks that may have affected both the student's performance and the average class performance, we use lagged measures of average student performance in FtF sections of the course over the entire prior year. We limit our outcome measures to average performance in FtF sections to eliminate the possibility that different grading practices or general student success in online courses will affect average grades. This measure provides a gauge of how successful students could expect to be in FtF sections of the course—if the online classes that students opt into are systematically more or less "difficult" than the FtF classes in which they enroll, then that could bias our student fixed effects estimates. Table 6 provides no evidence of negative selection on these dimensions (columns 1–3). Indeed, students' online courses have marginally higher completion rates than do the courses they opt to take face-to-face (p < 0.10).
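
The lagged measure could be constructed along the following lines: FtF pass rates are averaged by college-course and year, then shifted forward one year before merging, so a student's own term never contributes to the measure. Column names and the annual lag structure are assumptions for illustration.

```python
import pandas as pd

# Hypothetical enrollment records with outcomes.
enr = pd.DataFrame({
    "college_course": [10, 10, 10, 10, 10],
    "year":           [2008, 2008, 2009, 2009, 2009],
    "online":         [0, 0, 0, 1, 1],
    "pass_abc":       [1, 0, 1, 0, 1],
})

# Average pass rate in FtF sections only, by college-course and year ...
ftf = (enr[enr["online"] == 0]
       .groupby(["college_course", "year"])["pass_abc"]
       .mean().rename("lagged_ftf_pass_rate").reset_index())

# ... shifted forward one year so each enrollment sees only prior-year rates.
ftf["year"] += 1
enr = enr.merge(ftf, on=["college_course", "year"], how="left")
print(enr)
```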

Table 6.
Test for Selection on Course Characteristics
Lagged FtF Completion Rate (1)   Lagged FtF Rate: Pass/A/B/C (2)   Lagged FtF Rate: A/B (3)   Retake Effort (4)
Online course 0.002* −0.000 0.003 −0.004* 
 (0.001) (0.001) (0.002) (0.002) 
Student fixed effects 
Students 211,796 211,796 211,795 213,568 
Student-course-terms 933,387 933,387 933,349 953,933 
R2 0.596 0.555 0.560 0.261 

Notes: Provided as coefficient (within-college correlation robust standard error). Sample limited to first-time students entering in the 2008–09 academic year observed in college-courses offered in both formats in the same term. Term fixed effects, school fixed effects, subject fixed effects, student controls, course controls, and missing variable dummies included. Course controls include basic skills status and whether the course transfers to the University of California or California State University systems. Student controls include age at term, units enrolled at term, financial aid receipt at term, sex, race, high school credential type, declared academic goal, and whether the student is ever observed in any basic skills courses. FtF: face-to-face class.

Source: Authors’ calculations from California Community College Chancellor's Office data.

*Significant at the 0.10 level.

As a second gauge, we generated a measure for whether a student's enrollment in a given course was an effort to retake a course that they had previously failed to complete or in which they had performed poorly. These results (column 4) suggest that there is a modest relationship between online course-taking and the likelihood that a student is retaking a given course—specifically, students' online courses are marginally less likely to be retake efforts than are their FtF classes (p < 0.10). As a check to determine whether this relationship meaningfully affected our results, we reran our main student fixed effects analysis restricting the sample to exclude retake efforts. The results were very close to those presented in table 5, panel A. The coefficients for the course completion, Pass/A/B/C, and A/B outcomes were, respectively, −0.079, −0.140, and −0.109 (all significant at p < 0.01). Taken together, these analyses provide little evidence that students are systematically deciding to take online versus FtF courses in a way that would bias our results.
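
For reference, a simplified sketch of flagging retake efforts follows: it marks any enrollment preceded by an earlier attempt at the same course by the same student (the measure described above additionally conditions on the earlier attempt having ended poorly or without completion). Names are hypothetical.

```python
import pandas as pd

# Hypothetical enrollments, ordered chronologically within each student.
enr = pd.DataFrame({
    "student":    [1, 1, 1, 2],
    "course":     ["MATH1", "MATH1", "ENG1", "MATH1"],
    "term_order": [1, 2, 1, 1],
}).sort_values(["student", "course", "term_order"])

# cumcount() numbers attempts within student-course; any attempt after the
# first (cumcount > 0) is a retake under this simplified definition.
enr["retake"] = (enr.groupby(["student", "course"]).cumcount() > 0).astype(int)
print(enr)
```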

One might be concerned that even if students do not differentially select into online course formats based on course characteristics, term-varying student factors may influence both students’ propensity to take online courses and their performance. For instance, perhaps students sign up for online courses when they face particularly heavy work schedules, which also crowd out study time and lower course performance. If online enrollment is just a proxy for students facing a particularly busy term, we would expect students’ performance in FtF courses to drop during terms in which they are enrolled in online courses. In additional tests (not shown), however, we find that students’ performance in FtF courses is unrelated to an indicator for whether they are taking any online courses in the contemporaneous term (coefficients on the indicator are nonsignificant for all three outcomes, and range from −0.001 to 0.002).

We also estimated additional specifications in which we included student-term fixed effects (table 5, panel B). This specification effectively compares students’ performance in online classes to their own performance in the same term in FtF classes. The pattern of results remains the same—students perform less well in online classes across all three outcomes, although the magnitude of the coefficients is slightly smaller than those presented in panel A.

Finally, in our most robust set of fixed effects estimates (table 7), we included a set of college-course and student fixed effects simultaneously (Cornelissen 2008). These models reflect equation 4. This allows us to simultaneously account for course-invariant unobservable student factors and student-invariant unobservable course-level factors that may each predict students’ course performance. These coefficients are very similar to those estimated using the student fixed effects in table 5, panel A. Taken together, these estimates give a strikingly stable picture of weaker contemporaneous student performance in online courses than in FtF formats.

Table 7.
Association Between Online Course-taking and Student Outcomes: College-Course and Student Fixed Effects Included Simultaneously
Complete (1)   Pass/A/B/C (2)   A or B (3)
Online course −0.089*** −0.152*** −0.120*** 
 (0.004) (0.005) (0.006) 
College-course fixed effects 
Student fixed effects 
Colleges 109 109 109 
College-courses 6,200 6,200 6,168 
Students 213,568 213,568 211,724 
Student-course-terms 953,933 953,933 933,125 

Notes: Provided as coefficient (within-college correlation robust standard error). Sample limited to first-time students entering in the 2008–09 academic year observed in college-courses offered in both formats in the same term. Term fixed effects, student controls, course controls, and missing variable dummies included. Course controls include basic skills status and whether the course transfers to the University of California or California State University systems. Student controls include age at term, units enrolled at term, financial aid receipt at term, sex, race, high school credential type, declared academic goal, and whether the student is ever observed in any basic skills courses.

Source: Authors’ calculations from California Community College Chancellor's Office data.

***Significant at the 0.01 level.

Instructor Characteristics

Our results thus far suggest that neither student sorting across course modes nor choices by students to take particularly challenging courses online account for the negative relationship between online enrollment and student performance. We next consider whether differences in instructor characteristics across the two types of classes play a role. We look at four types of instructor characteristics: (1) the contract status of the instructor (temporary, tenure-track nontenured [“pre-tenure”], or tenured); (2) years of experience (0–2 years, 3–5 years, 6–10 years, 11 or more years); (3) whether the instructor is teaching any courses as an overload; and (4) whether the course is team-taught.9

We first explored which types of instructors students were disproportionately likely to encounter in online sections (table 8). Each instructor characteristic was sequentially included as a dependent variable in a college-course fixed effects model similar to equation 2.10 We find that students in online sections are disproportionately exposed to certain types of instructors: Students in online sections are significantly less (more) likely to have temporary (tenured) instructors, significantly less (more) likely to have teachers with fewer than 6 years (more than 10 years) of experience, and significantly more likely to have instructors who are teaching a schedule with overloads compared with their peers in FtF sections of the same course.

Table 8.
Relationship Between Online Course-taking and Instructor Characteristics
Outcome (Instructor Characteristics)
Temporary instructor −0.161*** 
 (0.014) 
Pre-tenure instructor −0.008 
 (0.007) 
Tenured instructor 0.165*** 
 (0.014) 
0–2 years experience −0.056*** 
 (0.005) 
3–5 years experience −0.028*** 
 (0.006) 
6–10 years experience 0.012 
 (0.009) 
11+ years experience 0.072*** 
 (0.011) 
Instructor overload 0.131*** 
 (0.013) 
Multiple instructors 0.001 
 (0.002) 
College-course fixed effects 
School-courses 6,168 
Student-course-terms 970,173 

Notes: Entries are coefficients with standard errors robust to within-college correlation in parentheses. Sample limited to first-time students entering in the 2008–09 academic year observed in college-courses offered in both formats in the same term. All models include term fixed effects, student and course controls, and missing variable dummies. Course controls include basic skills status and whether the course transfers to the University of California or California State University systems. Student controls include age at term, units enrolled at term, financial aid receipt at term, sex, race, high school credential type, declared academic goal, and whether the student is ever observed in any basic courses.

Source: Authors’ calculations from California Community College Chancellor's Office data.

***Significant at the 0.01 level.

We next investigate whether these differences might be expected to matter for student achievement. We substitute a vector of instructor characteristics for the online indicator in our college-course fixed effects models to estimate the relationship between those characteristics and student performance (table 9). The contract status and experience variables are significantly related to student outcomes. Temporary and pre-tenure instructors are associated with better student performance across all three outcomes than are tenured professors (though the pre-tenure coefficient is nonsignificant for the A/B outcome), and having less experienced instructors is positively related to course completion, course passing, and A/B receipt. Taken together, these results suggest that online students may perform less well than their peers partly because they are disproportionately exposed to a group of instructors associated with poorer student performance on the metrics we explore.11
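
A sketch of the corresponding Stata call (placeholder names; tenured status and 11+ years of experience serve as the omitted reference categories, as in table 9):

    * Instructor characteristics replace the online dummy on the right-hand side.
    areg pass temp_instr pretenure_instr exp_0_2 exp_3_5 exp_6_10 overload ///
        multi_instr $student_controls $course_controls i.term, ///
        absorb(collegecourse_id) vce(cluster college_id)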

Table 9.
Relationship Between Instructor Characteristics and Student Outcomes
                              Complete (1)   Pass/A/B/C (2)   A/B (3)
Instructor temporary          0.023***       0.036***         0.032***
                              (0.004)        (0.007)          (0.006)
Instructor pre-tenure         0.018***       0.012***         0.009
                              (0.003)        (0.004)          (0.006)
2 or fewer years experience   0.014***       0.009**          0.024***
                              (0.003)        (0.004)          (0.004)
3–5 years experience          0.010***       0.009**          0.024***
                              (0.002)        (0.004)          (0.004)
6–10 years experience         0.007***       0.008*           0.018***
                              (0.002)        (0.004)          (0.004)
Instructor has overload       0.002          −0.002           −0.009
                              (0.005)        (0.006)          (0.006)
Multiple instructors          0.004          0.000            0.011
                              (0.012)        (0.014)          (0.015)
College-course fixed effects  Yes            Yes              Yes
College-courses               6,200          6,200            6,168
Student-course-terms          953,933        953,933          933,125

Notes: Entries are coefficients with standard errors robust to within-college correlation in parentheses. Sample limited to first-time students entering in the 2008–09 academic year observed in college-courses offered in both formats in the same term. All models include term fixed effects, student and course controls, and missing variable dummies. Course controls include basic skills status and whether the course transfers to the University of California or California State University systems. Student controls include age at term, units enrolled at term, financial aid receipt at term, sex, race, high school credential type, declared academic goal, and whether the student is ever observed in any basic courses.

Source: Authors’ calculations from California Community College Chancellor's Office data.

*Significant at the 0.10 level; **significant at the 0.05 level; ***significant at the 0.01 level.

To see whether the differences in instructors across online and FtF sections explain any of the online–FtF performance gap, we rerun our student fixed effects models with the instructor characteristics included as controls (table 10). The Online coefficients diminish, but the change is small. For instance, the Online coefficient for the student fixed effects specification (replicated in column 1) for the Pass/A/B/C outcome declines in magnitude from −0.145 to −0.140 when instructor characteristics are included in column 2, and the column 2 point estimates are well within the confidence intervals of the column 1 estimates. We see similar patterns if we add instructor controls to the college-course fixed effects specifications (not shown).
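
In Stata terms, column 2 simply adds the instructor variables to the student fixed effects regression. A minimal sketch with placeholder names:

    * Student fixed effects with instructor controls added (table 10, column 2);
    * time-invariant student controls are absorbed by the student effects.
    xtset student_id
    xtreg pass online $course_controls $instructor_controls i.term, ///
        fe vce(cluster college_id)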

Table 10.
Association Between Online Course-taking and Outcomes, Adding Instructor Controls and Instructor Fixed Effects
Outcome                                    (1)          (2)          (3)          (4)
Complete                                   −0.084***    −0.080***    −0.062***    −0.064***
                                           (0.004)      (0.005)      (0.003)      (0.004)
Pass/A/B/C                                 −0.145***    −0.140***    −0.090***    −0.089***
                                           (0.006)      (0.006)      (0.005)      (0.005)
A or B                                     −0.110***    −0.103***    −0.059***    −0.056***
                                           (0.006)      (0.006)      (0.005)      (0.005)
Student fixed effects                      Yes          Yes
Instructor controls                                     Yes
Instructor fixed effects                                             Yes
College-course-instructor fixed effects                                           Yes
College-courses                            6,200        6,200        6,200        6,200
Instructors                                23,556       23,556       23,556       23,556
College-course-instructors                 44,566       44,566       44,566       44,566
Student-course-terms                       953,933      953,933      936,681      936,681

Notes: Entries are coefficients with standard errors robust to within-college correlation in parentheses. Sample limited to first-time students entering in the 2008–09 academic year observed in college-courses offered in both formats in the same term. All models include term, college, and subject fixed effects, student and course controls, and missing variable dummies. Course controls include basic skills status and whether the course transfers to the University of California or California State University systems. Student controls include age at term, units enrolled at term, financial aid receipt at term, sex, race, high school credential type, declared academic goal, and whether the student is ever observed in any basic courses. Instructor controls include contract status (temporary and pre-tenure vs. tenured), experience (less than 3 years, 3–5 years, or 6–10 years vs. 11 years or more), whether the instructor was teaching an overload, and whether the course had multiple instructors. Student-course-term counts are for the Complete and Pass/A/B/C outcomes; N for the A/B outcome is 933,125 (916,185 in the instructor fixed effects models).

Source: Authors’ calculations from California Community College Chancellor's Office data.

***Significant at the 0.01 level.

Our measures of instructor characteristics are fairly coarse and do not preclude the possibility that, independent of observable instructor characteristics, instructors who are tougher graders or less effective teachers disproportionately opt into online teaching. To explore whether our results are robust to this possibility, we ran a set of models using instructor fixed effects (with college fixed effects also included). These models identify off of nearly 5,200 instructors who teach both online and FtF courses. We find that, within instructors, students perform worse in online sections than in the sections the same instructors teach face-to-face (table 10, column 3), although the magnitudes of the coefficients are smaller in these specifications than in column 1.

A final set of models includes college-course-instructor fixed effects and identifies off of differences in student performance across formats of the same course taken with the same instructor (table 10, column 4). The coefficients are identified off of 6,686 instructor-course combinations offered in both FtF and online formats and are similar in magnitude to those in column 3. These results suggest the online–FtF performance gap cannot be fully explained by exposure to systematically more demanding or less effective instructors in online courses.
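
The column 3 and column 4 specifications can be sketched as follows (placeholder names; the college-course-instructor cells are built with egen group() and then absorbed):

    * Column 3: instructor fixed effects, retaining college fixed effects.
    areg pass online $controls i.term i.college_id, ///
        absorb(instructor_id) vce(cluster college_id)

    * Column 4: one fixed effect per college-course-instructor combination.
    egen cci_id = group(college_id course_id instructor_id)
    areg pass online $controls i.term, absorb(cci_id) vce(cluster college_id)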

At the same time, the coefficients do shrink in magnitude: the estimates for completion, course passing, and A/B receipt in column 4 are about 20 percent, 35 percent, and 45 percent smaller, respectively, than those in column 2. This suggests that the performance gap between students in FtF and online courses is partially driven by unobserved instructor characteristics that differ systematically between modes. We return to this point in the discussion.

Effects on Subsequent Same-subject Course-taking

To explore how online course-taking affects subsequent course-taking in the same subject area, we examine two main outcomes: (1) the likelihood of repeating the same course (course repetition) and (2) the likelihood of taking new courses in the same subject in future terms (subject persistence). We find that online course-taking is associated with a higher likelihood of course repetition (table 11, panel A). Columns 1, 2, and 3 provide estimates of the effect of online course-taking on the likelihood of course repetition from models that incorporate college-course, student, and instructor fixed effects, respectively. Across all three models, online course-taking is associated with a 2.9–5.3 percentage point increase in the likelihood of course repetition. This is unsurprising given the lower rates of course passing in online courses, but it points to a specific cost accrued as a result of taking a course online.
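
The repetition outcome itself can be built with a single tagging pass over the enrollment records. A sketch with placeholder names (same-term duplicate records would need additional handling):

    * Flag every attempt at a college-course that is followed by another attempt
    * by the same student in a later term.
    bysort student_id collegecourse_id (term): gen byte repeats_later = _n < _N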

Table 11.
Effects on Future Course-taking in Same Subject
Panel A. Retaking Same Course in Future Terms

                               (1)         (2)         (3)
Online course                  0.036***    0.053***    0.029***
                               (0.002)     (0.002)     (0.002)
College-course fixed effects   Yes
Student fixed effects                      Yes
Instructor fixed effects                               Yes
Initial college-courses        6,200       6,200       6,200
Students                       213,568     213,568     213,568
Instructors                    23,556      23,556      23,556
Student-course-terms           953,933     953,933     936,681

Panel B. Any Future Same-Subject Course-taking

                               (1)         (2)         (3)
Online course                  −0.042***   −0.023***   −0.034***
                               (0.002)     (0.003)     (0.003)
College-course fixed effects   Yes
Student fixed effects                      Yes
Instructor fixed effects                               Yes
Initial college-courses        5,901       5,901       5,901
Students                       158,155     158,155     158,155
Instructors                    22,046      22,046      22,046
Student-course-terms           638,294     638,294     626,600

Notes: Each cell represents the effect of online course-taking on the indicated future outcome in the same subject. Robust standard errors, clustered at the school level, in parentheses. Sample limited to first-time students entering in the 2008–09 academic year observed in college-courses offered in both formats in the same term. All models include student, course, and instructor controls and term fixed effects. Course controls include basic skills status and whether the course transfers to the University of California or California State University systems. Student controls include age at term, units enrolled at term, financial aid receipt at term, sex, race, high school credential type, declared academic goal, and whether the student is ever observed in any basic courses. Instructor controls include contract status (temporary and pre-tenure vs. tenured), experience (less than 3 years, 3–5 years, or 6–10 years vs. 11 years or more), whether the instructor was teaching an overload, and whether the course had multiple instructors. Panel B includes only students observed enrolled in future terms.

Source: Authors’ calculations from California Community College Chancellor's Office Data.

***Significant at the 0.01 level.

Online course-taking is also associated with lower rates of subject persistence. For each class a student takes, our subject persistence measure captures whether the student is observed taking courses in the same subject area in any future term. The full set of included subject areas is given in table B.2 in the online appendix. The subject persistence measure excludes retake attempts, so these models capture only new course attempts in the same subject area. Our sample for these models also excludes students who are not observed in future terms, so that we do not conflate a lack of subject persistence with departure from the institution. We find that online course-taking is associated with a significant 2.3–4.2 percentage point reduction in the likelihood of taking new courses in the same subject in future terms (table 11, panel B). Given that only about 54 percent of enrollments among students who persist at least one more term are followed by future course-taking in the same subject area, this represents a roughly 4–8 percent reduction in subject persistence and suggests that online course-taking modestly discourages future enrollment in a subject.
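
One way to construct this measure, sketched with placeholder names (not necessarily the authors' exact construction), is to pair each enrollment with all of the student's same-subject records and keep only pairs involving a different course in a later term:

    * Build the subject persistence indicator via a self-join on student and
    * subject, then collapse back to the enrollment level.
    preserve
    keep student_id subject collegecourse_id term
    rename (collegecourse_id term) (other_course other_term)
    tempfile subjrecs
    save `subjrecs'
    restore
    joinby student_id subject using `subjrecs'
    gen byte new_later = (other_course != collegecourse_id) & (other_term > term)
    collapse (max) persist_subject = new_later, by(student_id collegecourse_id term)
    * persist_subject can then be merged back onto the analysis file.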

The subject persistence results are robust to alternative ways of measuring the outcome. We created an alternative measure that categorized subject persistence according to a more granular set of categories: for instance, Physical Sciences was divided into subcategories (e.g., Physics, General; Chemistry, General; Astronomy; Geology) based on the California Community Colleges Taxonomy of Programs code system. The results were substantively similar using this alternative definition of subject persistence.

Taken together, these results suggest that online course-taking is associated with a higher chance of course repetition and a lower likelihood of taking new courses in the same subject. This represents both tangible and intangible costs to students associated with online course-taking. Notably, however, the magnitude of these coefficients is much smaller than the contemporaneous course performance impacts.

Heterogeneity by Course Subject and Student Characteristics

We next explore whether the relationship between online course-taking and student performance varies by course subject or by student characteristics. Table 12, panel A, presents results for five academic subject areas: Social Sciences, Business and Management, Humanities, Information Technology, and Math.12 For panel A, we add interaction terms (Online interacted with a vector of Subject indicators) to the college-course fixed effects models. Table 12, panel B, presents results broken down by student sex and racial/ethnic group (Hispanic, white, black, Asian, or other); for these student subgroup analyses, we interact student characteristics with the Online indicator in the student fixed effects models. Each cell in column 1 represents the difference between student performance on the Pass/A/B/C outcome in online and FtF courses (the “contemporaneous course performance gap”) for the group identified in the row label. Column 3 presents analogous results for the subject persistence outcome (the “subject persistence gap”). Columns 2 and 4 list the groups whose performance gaps are statistically different from that of the group designated in the row label (p < 0.05). Column 5 shows the number of unique units in each subgroup: college-course clusters in panel A and students in panel B.
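
For panel A, a sketch of the interaction specification (placeholder names): interacting the subject indicators with the Online dummy within the college-course fixed effects model yields one online coefficient per subject, since the subject main effects are absorbed by the fixed effects.

    * Subject-specific online coefficients (table 12, panel A).
    areg pass i.subject#c.online $controls i.term, ///
        absorb(collegecourse_id) vce(cluster college_id)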

Table 12.
Heterogeneity of Online Course-taking Effects by Course Subject and Student Characteristics, Pass/A/B/C, and Subject Persistence Outcomes
                                     Pass/A/B/C                Subject Persistence
                                  Coef. (1)   Sig. Diff. (2)   Coef. (3)   Sig. Diff. (4)   Units (5)

Panel A. Heterogeneity by Course Subject
Social sciences (group 1)         −0.093***   3,5              −0.048***   2,3,4,5          1,093
                                  (0.008)                      (0.005)
Business/management (group 2)     −0.097***   3,5              0.002       1,3,5            1,078
                                  (0.009)                      (0.010)
Humanities (group 3)              −0.122***   1,2,4            −0.063***   1,2,4            771
                                  (0.007)                      (0.005)
Information technology (group 4)  −0.087***   3,5              0.020**     1,3,5            602
                                  (0.012)                      (0.010)
Math (group 5)                    −0.132***   1,2,4            −0.068***   1,2,4            464
                                  (0.011)                      (0.007)

Panel B. Heterogeneity by Student Characteristics
Student race/ethnicity
Hispanic (group 1)                −0.153***   2,4              −0.024***                    77,535
                                  (0.007)                      (0.005)
White (group 2)                   −0.141***   1,3,4            −0.023***                    66,373
                                  (0.006)                      (0.005)
Black (group 3)                   −0.165***   2,4,5            −0.015                       20,081
                                  (0.009)                      (0.009)
Asian (group 4)                   −0.075***   1,2,3,5          −0.022**                     18,201
                                  (0.009)                      (0.009)
Other race (group 5)              −0.142***   3,4              −0.026***                    31,378
                                  (0.007)                      (0.007)
Student sex
Female (group 1)                  −0.144***                    −0.027***                    109,308
                                  (0.006)                      (0.004)
Male (group 2)                    −0.134***                    −0.017***                    102,784
                                  (0.006)                      (0.004)

Notes: Cells in column 1 (column 3) represent the effect of online course-taking on the Pass/A/B/C (future same-subject course-taking) outcome for the group indicated in the row label. Column 2 (column 4) indicates the groups from which the referent group's online coefficient is significantly different for the Pass/A/B/C (subject persistence) outcome. Robust standard errors clustered at the school level. All models include student, course, and instructor controls, and term fixed effects. Course controls include basic skills status and whether the course transfers to the University of California or California State University systems. Student controls include age at term, units enrolled at term, financial aid receipt at term, sex, race, high school credential type, declared academic goal, and whether the student is ever observed in any basic courses. Instructor controls include contract status (temporary and pre-tenure vs. tenured), experience (less than 3 years, 3–5 years, or 6–10 years vs. 11 years or more), whether the instructor was teaching an overload, and whether the course had multiple instructors. Panel A includes college-course fixed effects; panel B includes student fixed effects, along with school and subject fixed effects. Units reported in column 5 are unique courses (students) for panel A (B).

Source: Authors’ calculations from California Community College Chancellor's Office Data.

**Significant at the 0.05 level; ***significant at the 0.01 level.

The pattern of results is strikingly consistent for the Pass/A/B/C outcome. Across all subgroups, online course enrollment is significantly and negatively associated with the Pass/A/B/C outcome.

The relationship between online course-taking and subject persistence is slightly less stable across subgroups. For instance, although the point estimate of the subject persistence gap for black students was negative, it was not statistically significant. More strikingly, online course-taking was actually positively associated with subject persistence in Information Technology, and the estimated relationship in Business and Management was a precisely estimated zero.

Notably, across both outcomes, the online–FtF performance gaps were particularly pronounced for Math and Humanities. Contemporaneous performance gaps in Math and in Humanities (a category that includes English Language Arts classes) were about 2 to 3 percentage points (20–30 percent) larger than the gaps observed in other classes. The subject persistence gaps in Math and Humanities were also significantly larger (p < 0.05) than for other subjects: students in online sections were 6.8 (6.3) percentage points less likely to take future Math (Humanities) courses than were their peers in FtF sections.

Likewise, we find that female students have larger performance gaps than male students on both outcomes. The direction of the contemporaneous performance difference is somewhat surprising, given that past studies using similar California data have found larger course-passing gaps for male students (Johnson and Cuellar Mejia 2014). Our results hold when we try to replicate Johnson and Cuellar Mejia's model specifications, but the male–female difference in gap size becomes nonsignificant when we add sample restrictions that make our sample more similar to theirs (i.e., limiting the sample to fall entrants). This suggests that sample differences likely drive the divergence in results. Given the instability of this result across model specifications, we treat it with caution.

Finally, we observe interesting differences in the subgroup patterns by racial/ethnic group across outcomes. For instance, Asian students have much smaller contemporaneous online–FtF performance gaps than all other groups: the negative coefficient for Asian students (7.5 percentage points) is less than half as large as that for Hispanic (15.3 percentage points) or black (16.5 percentage points) students. However, there are no statistically significant differences in the size of the subject persistence gaps across racial/ethnic groups. Although black students have no statistically significant persistence gap following online versus FtF courses, the point estimate is negative and not significantly different in magnitude from the point estimates for the other groups.

Taken together, these results suggest that online performance gaps on both outcomes are fairly stable across different types of students, and that they are particularly pronounced in math and humanities classes.

6.  Discussion

We find that contemporaneous student performance in online courses is generally weaker than in FtF classes. The results hold whether we use college-course fixed effects, student fixed effects, or instructor fixed effects, and they are consistent across multiple ways of measuring student performance, for students with different characteristics, and across different subject areas. This consistency across specifications and groups adds credence to our findings, and our estimates are close in magnitude to results from similar studies conducted in multiple states (Xu and Jaggars 2011, 2013; Johnson and Cuellar Mejia 2014). In addition, the stability of the coefficients, together with the fact that they become more negative as we add controls, suggests that the degree of selection on unobservables (Altonji, Elder, and Taber 2005; Oster 2013) would have to be substantial, and to run in the opposite direction from selection on observables, to invalidate the fixed effects results for our contemporaneous course-taking outcomes.
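
As one illustration of how this kind of bounding argument can be checked (our sketch, not necessarily the authors' exact procedure, and assuming psacalc's support for the estimation command used), Oster's psacalc command reports the proportional selection on unobservables required to drive the treatment coefficient to zero:

    * Coefficient-stability diagnostic in the spirit of Oster (2013);
    * variable names are placeholders.
    ssc install psacalc
    areg pass online $student_controls $course_controls i.term, ///
        absorb(collegecourse_id)
    psacalc delta online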

We particularly want to highlight our instructor and instructor-course fixed effects results. These estimates are notably smaller in magnitude than in many of our other models: coefficients are roughly one third smaller for our most policy-relevant outcome, course passing. This suggests that differences in unobserved instructor characteristics across modes account for some of the difference in student performance. The fact that these specifications continue to produce negative and significant estimates of the effects of online course-taking is notable, however, given that they provide a particularly robust test of the effects of course delivery mode by holding constant a number of factors that likely affect students’ course experience. For instance, ideally we would like to observe course expectations as conveyed through course syllabi and grading standards; if these systematically differed across course modes, our comparisons would be biased. Our instructor-course fixed effects models do not perfectly capture these dimensions, but they come reasonably close. One might expect instructors to build online sections that mirror their FtF sections quite closely, if only because using similar assignments and readings requires less effort. Thus, although our estimates do shrink as we hold instructor-course combinations constant, the fact that online students perform less well on all three measures of course performance even when instructor-course factors are held constant suggests that course delivery mode per se is related to contemporaneous student performance.

We find more modest evidence that online course-taking is associated with some negative downstream outcomes as well. Our findings that online course-taking is positively associated with course repetition and negatively associated with subject persistence are stable across a number of estimation techniques. Like the contemporaneous course performance results, these are consistent whether we use student, college-course, or instructor fixed effects. The subject persistence results are largely stable across student subgroups, but are nonsignificant for African American students. There is more heterogeneity across subjects: whereas subject persistence gaps are negative for math, humanities, and social science classes, the gap is nonsignificant for business classes and actually positive for information technology courses. In all cases, however, the subject persistence gaps are much smaller in magnitude than the estimates for the contemporaneous outcomes.

These results have important implications for policy. Policy makers in California and other states are interested in exploring whether online courses can be used to expand enrollments and improve outcomes. The results suggest there may be costs to this strategy. For instance, we find that students are 3 to 5 percentage points more likely to retake classes taken online. Additional course repetition induced by the lower success rates in online courses represents a cost both to students and to taxpayers who subsidize duplicative coursework. In the future, formal cost–benefit analyses should explore whether the greater likelihood of course noncompletion or failure offsets the possible cost savings associated with online courses.

In addition, our results on how the online course penalty varies across course types should help college administrators identify which online courses may be most costly to students taking them. For instance, online course-taking is associated with particularly pronounced negative outcomes in Math and Humanities, both contemporaneously and in terms of diminished likelihood of taking future courses. The results for Math, in particular, may be salient to administrators concerned with attrition from science, technology, engineering, and math fields. Finding ways to better support online math and humanities students should be a priority for administrators and educators in those fields.

By contrast, our results suggest that administrators may be somewhat less concerned about offering information technology courses online. Although online sections in information technology are associated with weaker contemporaneous outcomes, they are associated with a greater likelihood of subsequent subject persistence. Administrators and researchers may want to examine more closely which aspects of online information technology courses drive the increase in subsequent same-subject enrollments.

Our results also have implications for student support in online classes. Instructors who teach online courses should be aware of the performance penalty associated with taking courses online and consider implementing course policies and practices that would allow them to detect student disengagement in the absence of the physical cues on which FtF instructors can rely.13 Students should be made aware that success rates are systematically lower in online than in FtF sections so they can make informed enrollment decisions, and should be introduced to study strategies and time management strategies that promote success in online formats.

The present study has several limitations that should be kept in mind. First, its generalizability may be limited. If, for instance, college systems in other states have more (or less) well-developed online course delivery systems, the results presented here might not generalize well. If the current crop of courses in which online sections are offered is either better or worse suited to online delivery than courses that have not yet adopted online sections, the results may not generalize to different types of courses. Likewise, the results might not generalize cleanly to students attending other types of colleges (e.g., four-year institutions, for-profit schools) that have different organizational and instructional cultures. That said, our tests for heterogeneity of effects across different groups of students somewhat ease our concerns about external validity.

Second, it is important to realize that our results may miss an array of benefits that online courses may offer students. For instance, students may be able to retain jobs that demand flexible schedules or may save on child care costs if they can complete coursework on a nonstandard schedule. Future work should try to explore such benefits to determine the broader effects of online course-taking on welfare, aside from its effects on learning outcomes.

Finally, further research should seek to establish even stronger causal estimates of online course-taking. Although our tests suggest that selection likely plays a limited role in explaining the negative relationship between online enrollment and course performance, future randomized trials under different course conditions will be important to more firmly establish the causal link between online course-taking and student outcomes.

Acknowledgments

We thank the California Community College Chancellor's Office for access to the data analyzed in this paper. This study is a subproject under a study on outcomes for students in California community colleges by Michal Kurlaender, whom we thank as well. Thanks to Michael Hurwitz, Michal Kurlaender, Heather Rose, Kevin Gee, anonymous reviewers, and participants at the Spring Meeting of the Education Group of the National Bureau of Economic Research, the Annual Meetings of the Association for Education Finance and Policy, Association for Policy Analysis and Management, and American Educational Research Association for comments on earlier drafts. Opinions are those of the authors and do not necessarily reflect those of the agencies providing the data. All errors are ours.

REFERENCES

Adams, Alison E. M., Shelby Randall, and Tinna Traustadottir. 2015. A tale of two sections: An experiment to compare the effectiveness of a hybrid versus a traditional lecture format in introductory microbiology. CBE Life Sciences Education 14(1): 1–8. doi:10.1187/cbe.14-08-0118.

Allen, I. Elaine, and Jeff Seaman. 2014. Grade change: Tracking online education in the United States. Babson Park, MA: Babson Survey Research Group.

Altonji, Joseph G., Todd E. Elder, and Christopher R. Taber. 2005. Selection on observed and unobserved variables: Assessing the effectiveness of Catholic schools. Journal of Political Economy 113(1): 151–184. doi:10.1086/426036.

Bartindale, Becky. 2013. Foothill-DeAnza to lead $16.9m statewide Online Education Initiative: Statewide effort aims to increase degree attainment and university transfers. Available www.insidehighered.com/sites/default/server_files/files/OEIPressReleaseFinal%20(1).pdf. Accessed 16 March 2017.

Bettinger, Eric, and Bridget T. Long. 2006. The increasing use of adjunct instructors at public institutions: Are we hurting students? In What's happening to public higher education? The shifting financial burden, edited by Ronald Ehrenberg, pp. 51–69. Westport, CT: Greenwood Press.

Bettinger, Eric, and Bridget T. Long. 2010. Does cheaper mean better? The impact of using adjunct instructors on student outcomes. Review of Economics and Statistics 92(3): 598–613.

Bowen, William G., Matthew M. Chingos, Kelly A. Lack, and Thomas I. Nygren. 2014. Interactive learning online at public universities: Evidence from a six-campus randomized trial. Journal of Policy Analysis and Management 33(1): 94–111. doi:10.1002/pam.21728.

Calcagno, Juan Carlos, Thomas Bailey, Davis Jenkins, Gregory Kienzl, and Timothy Leinbach. 2008. Community college student success: What institutional characteristics make a difference? Economics of Education Review 27(6): 632–645. doi:10.1016/j.econedurev.2007.07.003.

California Community Colleges Chancellor's Office (CCCCO). 2008. Distance education guidelines. Available http://extranet.cccco.edu/Portals/1/AA/DE/de_guidelines_081408.pdf. Accessed 25 May 2016.

California Community Colleges Chancellor's Office (CCCCO). 2013. Distance education report. Available http://californiacommunitycolleges.cccco.edu/Portals/0/reportsTB/REPORT_DistanceEducation2013_090313.pdf. Accessed 25 May 2016.

California Community Colleges Chancellor's Office (CCCCO). 2015. About the OEI—pilot college participation. Available http://ccconlineed.org/about-the-oei/pilot-college-participation/. Accessed 25 May 2016.

California Community Colleges Chancellor's Office (CCCCO). 2016. California community colleges key facts. Available http://californiacommunitycolleges.cccco.edu/PolicyInAction/KeyFacts.aspx. Accessed 25 May 2016.

Carrell, Scott E., Marianne E. Page, and James E. West. 2010. Sex and science: How professor gender perpetuates the gender gap. Quarterly Journal of Economics 125(3): 1101–1144. doi:10.1162/qjec.2010.125.3.1101.

Carrell, Scott E., and James E. West. 2010. Does professor quality matter? Evidence from random assignment of students to professors. Journal of Political Economy 118(3): 409–432. doi:10.1086/653808.

Cornelissen, Thomas. 2008. The Stata command felsdvreg to fit a linear model with two high-dimensional fixed effects. Stata Journal 8(2): 170–189.

Ehrenberg, Ronald G., and Liang Zhang. 2005. Do tenured and tenure-track faculty matter? Journal of Human Resources 40(3): 647–659. doi:10.3368/jhr.XL.3.647.

Figlio, David N., Mark Rush, and Lu Yin. 2013. Is it live or is it internet? Experimental estimates of the effects of online instruction on student learning. Journal of Labor Economics 31(4): 763–784. doi:10.1086/669930.

Figlio, David N., Morton O. Schapiro, and Kevin B. Soter. 2015. Are tenure track professors better teachers? Review of Economics and Statistics 97(4): 715–724. doi:10.1162/REST_a_00529.

Freitas, John, and Christina Gold. 2015. Preparing faculty to teach online. Senate Rostrum February: 17–19.

Hoffman, Florian, and Philip Oreopoulos. 2013. Professor qualities and student achievement. Review of Economics and Statistics 91(1): 83–92. doi:10.1162/rest.91.1.83.

Jacoby, Daniel. 2006. Effects of part-time faculty employment on community college graduation rates. Journal of Higher Education 77(6): 1081–1103. doi:10.1353/jhe.2006.0050.

Jaggars, Shanna S., and Thomas R. Bailey. 2010. Effectiveness of fully online courses for college students: Response to a Department of Education meta-analysis. Available http://academiccommons.columbia.edu/catalog/ac:172120. Accessed 25 May 2016.

Johnson, Hans, and Marisol Cuellar Mejia. 2014. Online learning and student outcomes in California's Community Colleges. San Francisco: Public Policy Institute of California.

Joyce, Theodore J., Sean Crockett, David A. Jaeger, Onur Altindag, and Stephen D. O'Connell. 2015. Does classroom time matter? Economics of Education Review 46: 64–77. doi:10.1016/j.econedurev.2015.02.007.

Kaupp, Ray. 2012. Online penalty: The impact of online instruction on the Latino-white achievement gap. Journal of Applied Research in the Community College 19(2): 8–16.

Lewis, Laurie, Kyle Snow, Elizabeth Farris, Douglas Levin, and Bernie Greene. 1999. Distance education at postsecondary education institutions: 1997–98. Washington, DC: National Center for Education Statistics, U.S. Department of Education.

Means, Barbara, Yukie Toyama, Robert Murphy, Marianne Bakia, and Karla Jones. 2009. Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, DC: U.S. Department of Education.

Murphy, Katy. 2014. Gov. Jerry Brown, California college leaders discuss future of higher education. San Jose Mercury News, 23 January.

Oster, Emily. 2013. Unobservable selection and coefficient stability: Theory and validation. NBER Working Paper No. 19054.

Parsad, Basmat, and Laurie Lewis. 2008. Distance education at degree-granting postsecondary institutions: 2006–07. Washington, DC: National Center for Education Statistics, Institute of Education Sciences.

Rothstein, Jesse. 2010. Teacher quality in educational production: Tracking, decay, and student achievement. Quarterly Journal of Economics 125(1): 175–214. doi:10.1162/qjec.2010.125.1.175.

Streich, Francine E. 2014. Online education in community colleges: Access, school success, and labor-market outcomes. Doctoral dissertation, University of Michigan, Ann Arbor.

Vai, Marjorie, and Kristen Sosulski. 2011. Essentials of online course design: A standards based guide. New York: Routledge.

Western Association of Schools and Colleges (WASC). 2006. Protocol for review of distance and correspondence education programs, effective July 5, 2006. Available docplayer.net/8189440-Protocol-for-the-review-of-distance-and-correspondence-education-programs-effective-july-5-2006.html. Accessed 25 May 2016.

Wilson, Matt. 2013. Foothill-DeAnza receives grant to develop online portal for state's community colleges. San Jose Mercury News, 20 November.

Xu, Di, and Shanna Jaggars. 2011. The effectiveness of distance education across Virginia's community colleges: Evidence from introductory college-level math and English courses. Educational Evaluation and Policy Analysis 33(3): 360–377.

Xu, Di, and Shanna Jaggars. 2013. The impact of online learning on students' course outcomes: Evidence from a large community and technical college system. Economics of Education Review 37: 46–57.

Xu, Di, and Shanna S. Jaggars. 2014. Performance gaps between online and face-to-face courses: Differences across types of students and academic subject areas. Journal of Higher Education 85(5): 633–659.

Notes

1. 

Administrators in large institutions are somewhat less optimistic about potential cost savings associated with online courses—only about 45 percent of administrators in institutions with enrollments of 15,000 students or greater said that it was likely or very likely that online courses would become considerably less expensive than face-to-face courses (Allen and Seaman 2014).

2. 

Pass grades may be accepted if the community college's policy states that this is equivalent to receiving a C or better in a course.

3. 

Using other outcomes, including failure conditional on course completion and course grade conditional on course completion, gives us similar patterns of results to those reported here.

4. 

Note that college-course fixed effects implicitly include within them fixed effects for the college as well as the course, so including this term controls for time-invariant characteristics of courses and colleges. We retain the course vector because transfer status can be time-variant.

5. 

We use the felsdvreg command in Stata for these models (Cornelissen 2008).

6. 

Conceptually, this is similar to the falsification tests that Rothstein (2010) conducts to explore whether student sorting into classrooms biases estimates of teacher value-added measures.

7. 

The coefficients on the online indicator in these models are therefore identified off of roughly 59,000 students who are observed in both instructional modes, although students observed in only one mode are included to improve the precision of the estimates of the other coefficients.

8. 

Eliminating the college-course fixed effects allows us to better explore student selection into online course-taking. We estimate models with student and college-course fixed effects later.

9. 

Instructor variables for team-taught classes reflect the status of the instructor responsible for a greater share of the course (based on reported effort), or the more senior professor.

10. 

Alternative specifications using student fixed effects models for tables 8 and 9 produce similar results: Students are less successful in their classes taught by experienced and tenured professors, and are more likely to encounter these types of professors in their online classes than in their FtF classes.

11. 

Note that because we use some subjective measures of performance, it is hard to sort out whether nontenured, less experienced instructors are easier graders or promote better performance.

12. 

These represent the subjects in which over 400 distinct college-course clusters were available for analysis.

13. 

The California Community College system is already beginning to introduce new efforts that may help to improve online education. The CCCCO has introduced the Online Education Initiative (OEI) to improve online instruction. Beginning with a pilot program at twenty-four campuses (Spring 2015–Fall 2016), the OEI provides funding and professional development with the goal of increasing the quality and delivery of online courses (CCCCO 2015). Efforts like these could potentially reduce the online–FtF performance gaps in the future.