Increasing the number of science, technology, engineering, and mathematics (STEM) degrees is a major federal education priority. I investigate whether providing a $4,000 financial incentive to low-income students in their junior and senior years of college induces them to major in a STEM field. Using administrative data from Ohio public colleges, I exploit a discontinuity in income eligibility for the National SMART Grant to identify its effect on the pursuit of science majors. Regression discontinuity results indicate financial incentives do not encourage students at the threshold of eligibility to choose STEM majors in their junior year. The null findings are fairly precise, ruling out modest, policy-relevant effects for students near the Pell Grant eligibility threshold. I examine several potential explanations of this null result and argue that federal policy makers could improve the design of the program by creating the financial incentive earlier in students' educational careers.

## 1.  Introduction

There are both private and public returns to science, technology, engineering, and mathematics (STEM) degrees. Working in a STEM field leads to increased likelihood of employment and higher earnings for individuals (Langdon et al. 2011). At the public level, the National Science Board advocates for an increased federal focus on maintaining the United States' dominance in innovation and scientific discovery and suggests "a high-quality, diverse, and adequately sized workforce … is crucial to our continued leadership and is therefore a vital federal responsibility" (Crosby and Pomeroy 2004, p. 22). The National Science Board continues by pointing to a lack of growth in the number of science and engineering bachelor's degrees granted in the United States throughout the 1990s and early 2000s even while competition for science talent is growing globally.
Because of the high demand for STEM degrees and positive labor market outcomes associated with STEM fields, we might expect students to flock to STEM majors in college. Yet that labor demand is not being met by student supply. A variety of factors may explain why students do not pursue science disciplines, including a preference for other fields, a lack of adequate academic preparation, or being unaware of or heavily discounting future earnings of STEM majors. These fields require more hours of work and are more difficult than non-STEM majors, and the potential for lower college grade point averages (GPAs) associated with STEM majors may lead students to worry about lower chances of admission to graduate school or future career opportunities. To overcome these concerns, capable students may need additional motivation during their collegiate years to pursue majors in the sciences. Financial incentives are one such motivational tool, and this paper empirically tests whether financial incentives received during college increase the pursuit of STEM majors among eligible students. Specifically, it investigates whether the National Science and Mathematics Access to Retain Talent (SMART) Grant improves the probability of postsecondary students majoring in science. The program provides low-income students financial incentives of $4,000 in each of the third and fourth years of undergraduate study if they major in a STEM discipline.1 To the extent that financial incentives increase the number of students graduating with degrees in the sciences, the SMART Grant could be a valuable lever to encourage additional human capital investment in scientific knowledge.

By exploiting the Expected Family Contribution (EFC) eligibility requirement to receive a SMART Grant, the paper uses a regression discontinuity analysis to assess the student response to the financial incentive. I use the EFC of juniors to estimate the effect of program eligibility on selection of a STEM major when the financial award is distributed at the beginning of the junior year. The identifying assumption of the regression discontinuity approach is that the potential outcomes of individuals choosing a STEM major are continuous across the EFC eligibility threshold. Essentially, the approach assumes juniors close to the EFC threshold are randomly conferred eligibility for the grant and that no other variables change sharply at the cutoff. I discuss this assumption in greater depth later in the paper. Although regression discontinuity identifies a causal estimate of the program, it does so only near the eligibility threshold. Hence, I can make causal inferences only for lower-middle-income students who are just eligible for the program, and cannot generalize the findings to the lowest-income students who receive full-value Pell Grants (the federal government's college grant program for low-income students).

To estimate the effect of the discontinuity in eligibility on STEM major selection, I use administrative, student-level panel data from all of the main branch campuses of public universities in Ohio. These data provide a large enough sample for a regression discontinuity approach and contain enough academic and financial details to accurately measure the qualifying criteria.

I find small, statistically insignificant effects of eligibility for the financial grant on selecting a STEM major, suggesting that students are not responding to the financial incentive. Both parametric linear and local linear regression discontinuity models rule out positive effects larger than about 2.5 percentage points.

This study contributes to the literature in two important ways. First, it is a causal analysis evaluating a federal financial aid policy intended to increase the number of STEM bachelor's degrees. Second, it provides additional empirical evidence on the factors related to college major selection. The process by which students select a college major has garnered recent attention in the literature with evidence that students choose majors based on their expected earnings (Montmarquette, Cannings, and Mahseredjian 2002; Arcidiacono, Hotz, and Kang 2012), their perceived and actual ability in the major (Stinebrickner and Stinebrickner 2011), and the consumption value of the major while in college (Beffy, Fougère, and Maurel 2012). Although Stange (2015) demonstrates that students studying engineering and business are sensitive to differential tuition pricing, there is little evidence on whether financial aid can motivate students to select certain majors. This study begins to fill that gap in the literature by providing a causal estimate of the impact of a large financial incentive on major choice realized during the junior year.

The paper is organized as follows. Section 2 provides background on the national SMART Grant. Section 3 discusses the prior literature related to college major selection. Section 4 lays out the empirical strategy. Section 5 summarizes the data, and section 6 provides results. I discuss the findings and extensions to the results in section 7 and conclude in section 8.

## 2.  National SMART Grant

In their policy report to the U.S. Department of Education, Choy et al. (2011) describe the National SMART Grant in detail. The grant was authorized by the Higher Education Reconciliation Act of 2005 to promote undergraduate majors in programs deemed nationally critical. It began in the 2006–07 academic year and ended in the 2010–11 academic year, and it provided $4,000 in each of the third and fourth years of baccalaureate education to undergraduates who studied one of a number of selected majors, predominantly STEM fields; the other eligible majors were foreign languages. Nationally, 62,400 students received the SMART Grant in the 2006–07 school year, the first year of the program. This represents approximately 5 percent of Pell Grant recipients in their third or fourth year, a share that remained stable in each subsequent year of the program (although the actual number of grant recipients in both programs increased slightly after 2006). During the first three years of the program, $610 million was awarded, far less than the congressionally authorized amount. The Department of Education notified by mail and e-mail those students who were financially eligible to receive the grant, but students still had to meet the nonfinancial criteria to receive the award.

Students do not separately apply for the SMART Grant. If a student completes the Free Application for Federal Student Aid (FAFSA) and is in the third or fourth year, the institutional financial aid office checks whether that student is enrolled in an eligible major and meets the other eligibility criteria. The financial aid office notifies the student of the award and disburses the money in the same way it disburses Pell Grant funds.

## 3.  Related Literature

The literature on major choice suggests that students consider several factors when selecting a college major, including their expected future income, ability in certain fields of study, the consumption value of the major, and monetary costs of the major while enrolled. Students must often make tradeoffs between enrolling in majors that they might find more difficult or time-consuming but that offer greater economic rewards.

Several papers show that students consider their future earnings along with their ability in a field when choosing which major to study. Results of a survey eliciting students' chosen major and alternative majors indicate that students consider their expected future earnings, as well as their ability and the comparative advantage it bestows, in each major they consider (Arcidiacono, Hotz, and Kang 2012). Montmarquette, Cannings, and Mahseredjian (2002) confirm that students evaluate both ability and future earnings, but they find that expected future income drives major choice: the elasticity of major choice is greater with respect to future earnings than with respect to the probability of successfully completing college with that major. Ryoo and Rosen (2004) provide further evidence that students respond to career prospects and earnings when choosing to enter the engineering field.

Students also update their beliefs about their likelihood of succeeding in a major in response to new information about their own abilities gathered as they spend time taking coursework. Zafar (2011) longitudinally elicits students' predicted GPAs in their major and alternative majors to observe how new information about their academic performance changes their predicted success. He finds that students who receive GPAs much higher (lower) than their predictions revise their predictions upward (downward). Stinebrickner and Stinebrickner's (2011) work supports Zafar's study by also concluding that a student's final major is the result of a learning process. Using a longitudinal survey to investigate changes in majors at Berea College, they find that many students who initially select science majors switch out of those majors in large part because of poor academic performance. Both studies find that poor academic performance is one of the primary reasons students switch majors, supporting the theory that in-college utility and ability drive major choice.

Other in-college factors also appear to affect choice of major. Stange (2015) uses the introduction of differential tuition charges across institutions to examine the effect of the price of a major on degree completion in that major. He finds that higher tuition and fees for engineering and business majors result in reduced engineering and business degrees, although the finding is not statistically significant for business majors. Beffy, Fougère, and Maurel (2012) rely on variation in earnings in the French labor market to discover that the elasticity of major choice with respect to future earnings, while statistically significant, is low. They conclude that measures associated with the consumption value of a major (such as enjoying the work in the major, parental approval of the major, etc.) while in college motivate students’ major choice more than expected earnings. Both papers suggest that students respond to in-college considerations, both financial and nonmonetary.

Given that students respond to new information and that in-college utility impacts major choice, it is reasonable to suspect a financial incentive realized in college could alter a student's major. A direct financial benefit for selecting a certain major may induce a student who would not otherwise select that major to choose it.

Prior research has studied the ability of conditional financial incentives to alter nonmajor educational outcomes in a variety of contexts. Specific to higher education, randomized trials of performance-based scholarships have been shown to increase college enrollment, persistence, and credit accumulation (Richburg-Hayes et al. 2009) and grades among women (Angrist, Lang, and Oreopoulos 2009). Additionally, Fryer (2011) examines field experiments in which primary and secondary school students received financial incentives for educational outcomes such as reading books and grades. There is also a large literature in development economics on cash transfers conditioned on educational outcomes. This paper contributes to that literature by examining the impact of a conditional financial incentive on college major choice.

A potential puzzle is why any incentive is necessary to encourage the take-up of STEM majors when the labor market already pays a wage premium to STEM graduates. There are two reasons a financial incentive may be needed to encourage the study of STEM fields. First, as argued by Beffy, Fougère, and Maurel (2012), students may overweight their in-college utility relative to their future expected utility. Second, students may be credit-constrained and need to earn money while in college. The added time costs of a STEM major may reduce in-college earnings, and the grant may make up the difference. Evidence from the National Survey of Student Engagement shows that students studying engineering or physical sciences spend 19 and 18 hours per week, respectively, preparing for class, whereas students in education, social sciences, and business spend 15, 14, and 14 hours, respectively (NSSE 2011). These additional hours spent studying are, in part, supplanting hours worked for pay: engineering students spend 9 hours per week working for pay, whereas social sciences students spend 13 and business majors spend 16. The SMART Grant potentially addresses both cases, such that a financial incentive realized in college might encourage students to switch into a STEM major.

## 4.  Empirical Strategy

I use a regression discontinuity design to obtain a causal estimate of the impact of a $4,000 financial incentive for college undergraduates to choose a STEM major. The treatment is eligibility to receive the financial incentive if a student chooses a STEM major, and the dependent variable is whether or not the student chooses a STEM major at the beginning of the junior year, when he would receive the financial incentive. In accordance with program eligibility criteria, I restrict all analyses to U.S. citizens who are first-time full-time enrollees. The other two eligibility criteria, income and GPA, represent two potential continuous forcing variables and two cutoff thresholds that can be used to measure the treatment effect. I rely predominantly on the EFC forcing variable and restrict the analyses to students who meet the GPA eligibility criterion of 3.0.2

To be eligible for a SMART Grant, students must be eligible to receive a Pell Grant. The EFC determines students' Pell Grant eligibility: students with EFCs at or below an eligibility threshold were eligible to receive a Pell Grant (and therefore a SMART Grant if they met the other criteria), and students with EFCs strictly greater than the threshold were ineligible. I use junior year EFC and the junior year eligibility threshold because the grant first becomes available at the beginning of the junior year. For students who entered their junior year in the fall of 2008, the eligibility threshold is $4,041, and for students who entered their junior year in the fall of 2009 it is $4,617. Attempting to alter EFC to gain access to the program is virtually impossible: the federal government calculates the EFC annually using a complex, opaque formula that accounts for parental and student income and assets, family size, and the number of siblings attending college.
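The eligibility rule described above can be sketched as a simple predicate. This is an illustrative function of my own (field names and structure are not from the program's actual implementation); only the thresholds and criteria come from the text.

```python
# Hedged sketch of SMART Grant eligibility as described in the text.
# The function and its arguments are illustrative, not the program's code.

PELL_EFC_THRESHOLDS = {2008: 4041, 2009: 4617}  # by junior-year fall cohort

def smart_grant_eligible(efc, gpa, citizen, first_time_full_time, junior_fall_year):
    """Return True when all SMART Grant criteria described in the text hold."""
    threshold = PELL_EFC_THRESHOLDS[junior_fall_year]
    return (efc <= threshold            # Pell (and hence SMART) income criterion
            and gpa >= 3.0              # cumulative GPA criterion
            and citizen                 # U.S. citizenship
            and first_time_full_time)   # first-time full-time enrollee
```

Because eligibility is a deterministic function of observed inputs, treatment status jumps sharply at the EFC cutoff, which is what makes the sharp regression discontinuity design below possible.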
Data used in the formula to calculate EFC are derived from the prior year's tax returns, but the threshold for eligibility changes based on congressional appropriations to the Pell program. Therefore, the threshold is not known at the time of filing the FAFSA, which reports the information required to calculate the EFC. This creates an EFC threshold for Pell Grant eligibility that can change annually. Below I formally test whether there is any evidence of manipulation of the EFC forcing variable.

Because the treatment of interest is eligibility for the SMART Grant, and eligibility is completely determined by the program rules, this is a sharp regression discontinuity design. Financial aid offices at institutions of higher education do have the legal authority to alter a student's federal aid award through a process called professional judgment, but this process alters inputs on the FAFSA and is reflected in students' EFC, which is fully observed.

Following standard practice, I estimate the treatment effect using both parametric and nonparametric approaches (Lee and Lemieux 2010). Each approach has a potential source of bias: the parametric approach can impose an incorrect functional form over the full range of data, and the nonparametric local linear regression can ignore nonlinearities for observations close to the threshold. Using both methods therefore serves as a robustness check on the findings. I use linear and cubic models for the parametric approach and implement the leave-one-out cross-validation procedure suggested by Ludwig and Miller (2007) and Imbens and Lemieux (2008) to select the appropriate bandwidth for the nonparametric local linear regression analysis. In practice, both approaches use the same general estimation equation:

$$STEM_i = \alpha_0 + \alpha_1 Eligible_i + f(EFC_i) + Eligible_i \cdot g(EFC_i) + \varepsilon_i \qquad (1)$$

Individual students are indexed by i. The equation allows for separate functional forms f and g on either side of the discontinuity.
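To make the parametric specification concrete, the following sketch fits the linear version of equation 1 by ordinary least squares on simulated data. The data-generating process, sample size, and the 3 percentage point jump are invented purely for illustration; the paper's estimates come from the Ohio administrative sample.

```python
import numpy as np

# Minimal sketch of equation 1 (linear case) on simulated data.
rng = np.random.default_rng(0)
n = 20_000
efc_c = rng.uniform(-5_000, 5_000, n)           # EFC centered at the threshold
eligible = (efc_c <= 0).astype(float)           # sharp eligibility rule
true_jump = 0.03                                # assumed effect for the demo
p = 0.20 + true_jump * eligible + 1e-6 * efc_c  # latent probability of STEM
stem = (rng.uniform(size=n) < p).astype(float)  # binary STEM-major outcome

# Design matrix: intercept, Eligible, f(EFC) = linear slope, Eligible x g(EFC).
X = np.column_stack([np.ones(n), eligible, efc_c, eligible * efc_c])
coef, *_ = np.linalg.lstsq(X, stem, rcond=None)
alpha_1 = coef[1]                               # estimated jump at the threshold
print(round(alpha_1, 3))
```

With a sharp design, the coefficient on the eligibility indicator is the estimated discontinuity at the threshold; the local linear version restricts the sample to a bandwidth around the cutoff before fitting the same equation.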
For ease of interpretation, I fit linear probability models, as suggested by Angrist and Pischke (2009), to estimate the effect on the binary outcome. EFC is centered at the junior year threshold for each cohort; therefore, the coefficient of interest is α1, which estimates the difference in the intercept at the threshold. I also include covariates to explain more of the variation in the outcomes and improve precision. I estimate several models including controls for students' race, gender, in-state residency status, parental education, and ACT scores. To ensure a consistent analytical sample, I drop observations with missing ACT scores and parental education. I also include campus dummies to account for differences in major availability and popularity across campuses.

I estimate two different outcomes. The first uses junior year EFC to estimate the impact of eligibility on STEM major choice at the beginning of the junior year, when the financial incentive is actually disbursed. Second, I condition on students who began college with a STEM major and investigate whether eligibility for the financial incentive encourages them to persist in a STEM major into their junior year. Students who initially selected a STEM major signal their interest in the field and might be encouraged by the grant to continue in the STEM major. As STEM fields typically involve lower grades and longer hours studying, the financial incentive may motivate students with the initial interest and ability to persist in the field. It is also possible that students with high math ability are unequally affected by the grant, a possibility I return to in the discussion.

Each outcome relies on a different sample. The first uses students who began as first-time full-time students at an Ohio public university and persisted to the junior year.
The second outcome relies on a subset of the first sample, including only students who initially chose a STEM major in their freshman year and persisted to the junior year. The causal identification assumption is that the potential outcomes in the absence of the treatment are continuous across the threshold in the forcing variable (Bloom 2012).

Equation 1 provides a causal estimate using one forcing variable, but there is the potential to rely on both forcing variables because eligibility to receive the financial incentive in the junior year is contingent upon both EFC and GPA. As I show below, there is evidence of GPA manipulation among the full set of juniors, but this evidence disappears when restricting the sample to students who initially chose a STEM field upon enrollment. Hence, for the second outcome, I use a multiple forcing variable strategy. Several papers discuss the estimation of regression discontinuity in the case of multiple forcing variables.3 The threshold is no longer along a single dimension but is instead a two-dimensional boundary. There are two advantages of this estimation technique over conducting the analysis separately over two different thresholds. The first lies in sample size: instead of dropping all of the observations that meet the EFC criterion and estimating the effect of the GPA threshold, or vice versa, I simultaneously estimate the effect of being eligible on both forcing variables using all of the observations. Second, although not of any consequence to the current analysis, it is also possible to estimate how the impact of one forcing variable changes as a student becomes eligible on the other. I use Papay, Willett, and Murnane's (2011) sixteen-parameter estimation equation that includes a four-way interaction (and all lower-order interactions) of the two centered forcing variables and the two binary eligibility indicators.
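As a rough sketch of this interacted specification, the following code builds the sixteen-term design matrix from simulated data. The data and variable names are invented for illustration; only the structure of the design (every product of a subset of the two centered forcing variables and the two eligibility indicators) follows the description above.

```python
import numpy as np
from itertools import product

# Simulated data for illustration only.
rng = np.random.default_rng(1)
n = 5_000
efc_c = rng.uniform(-2_000, 2_000, n)      # EFC centered at the Pell threshold
gpa_c = rng.uniform(-0.5, 0.5, n)          # GPA centered at 3.0
elig_efc = (efc_c <= 0).astype(float)      # eligible on the income margin
elig_gpa = (gpa_c >= 0).astype(float)      # eligible on the GPA margin
y = (rng.uniform(size=n) < 0.25).astype(float)  # placeholder binary outcome

# All 2^4 = 16 columns: every product of a subset of the four variables,
# including the empty subset (the intercept).
base = [efc_c, gpa_c, elig_efc, elig_gpa]
masks = list(product([0, 1], repeat=4))
cols = []
for mask in masks:
    col = np.ones(n)
    for use, v in zip(mask, base):
        if use:
            col = col * v
    cols.append(col)
X = np.column_stack(cols)                  # n x 16 design matrix
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# The term of interest is elig_efc * elig_gpa: the extra jump from being
# eligible on both margins simultaneously.
interaction_idx = masks.index((0, 0, 1, 1))
alpha_both = coef[interaction_idx]
```

With formula-based software, the same design can be written compactly as a full four-way interaction, e.g., `y ~ efc_c * gpa_c * elig_efc * elig_gpa`.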
The coefficient of interest is the interaction between the two eligibility indicators, which estimates the additional effect of being eligible on both measures, and therefore eligible to receive the financial incentive, above the effect of being eligible on either one separately.

One downside of linking SMART Grant eligibility to that of the Pell Grant is that all students eligible to receive the major-dependent financial incentive also receive a small Pell Grant, whereas students just on the other side of the threshold do not receive a Pell Grant.4 This means the regression discontinuity design is not a perfectly clean causal estimate of the effect of being eligible for financial incentives for STEM majors. Instead, it is the causal effect of eligibility for the STEM major incentive plus a small amount of Pell Grant aid. For the 2006 entering cohort, those just on the eligible side of the threshold received $400 in Pell Grant aid. Because the $4,000 SMART Grant is so large relative to the small amount of Pell Grant aid received, I argue that it should dominate any observed effect.

## 5.  Data

The study relies on institutional data for all of the entering four-year college students at Ohio's thirteen public university main branch campuses. These administrative data are obtained directly from the Ohio Board of Regents (OBR). Data are available for all of the first-time, full-time college students who began their enrollment in the fall of 2006 or 2007. The dataset contains term-by-term enrollment through the spring of 2010 for each student who remains in the state's system of public higher education. These longitudinal data allow for observation of junior year major selection for both cohorts. Institution of attendance, number of credits, and cumulative GPA are available in each term. Students at Ohio universities select a major upon initial matriculation (although they can select "undeclared") and can switch majors at any time.
Important for the identification strategy, a student receives notification of junior-year Pell Grant eligibility during the spring term of the sophomore year and could change majors prior to the beginning of the fall of the junior year in order to capture the SMART Grant. I observe each student's currently selected major at the beginning of each term. The reported majors are recorded as the exact Classification of Instructional Programs (CIP) code so that they can be matched precisely with SMART Grant–eligible CIP codes (see Choy et al. 2011).

The OBR data also have demographic variables, ACT scores, and extensive financial information taken from the FAFSA. For each student who filed a FAFSA, the dataset provides the exact EFC, which can be used to determine eligibility for the Pell Grant and therefore the SMART Grant. Unfortunately, the state does not record actual financial aid receipt, but eligibility can be inferred precisely from the existing data and program regulations.

Table 1 provides summary statistics. The sample contains approximately 24,000 first-time full-time entering students in each of the 2006 and 2007 cohorts who persisted in the Ohio public university system to their junior year. The students are slightly more likely to be female, as is common in higher education. Ohio has a low percentage of Hispanic and Asian students, so the sample is dominated by white and black matriculants. A little over half of the sample has a parent who graduated from college, and over 87 percent of students are Ohio residents. Approximately two-thirds of the students took the ACT. This number is lower than may be expected because some students took the SAT, whose data are not recorded by OBR, and a few of the institutions are open enrollment and therefore do not require the submission of test scores for admission.
The average ACT score of 23.6 is above the national average of 21.1, reflecting the fact that higher-scoring students are more likely to enroll in four-year colleges and persist into their junior year.

Table 1. Summary Statistics of First-Time Full-Time Students for the Entering 2006 and 2007 Cohorts at Ohio Public Universities Who Enrolled in Their Junior Year

| Variable | Mean | Observations |
| --- | --- | --- |
| Female | 0.529 | 48,345 |
| White | 0.839 | 48,345 |
| Black | 0.074 | 48,345 |
| Asian | 0.027 | 48,345 |
| Hispanic | 0.021 | 48,345 |
| Other race | 0.038 | 48,345 |
| Ohio resident | 0.869 | 48,345 |
| Took ACT | 0.694 | 48,345 |
| ACT composite score | 23.63 (4.24) | 33,539 |
| Father completed college | 0.544 | 36,672 |
| Mother completed college | 0.561 | 36,723 |
| STEM major as junior | 0.196 | 48,345 |
| GPA as junior | 3.09 (0.54) | 48,345 |
| Filed FAFSA as junior | 0.616 | 48,345 |
| EFC as junior | 14,800 (16,935) | 29,804 |
| Pell eligible as junior | 0.298 | 29,804 |

Notes: The standard deviation of non-binary variables is given in parentheses. The table includes all first-time, first-year students, enrolling full-time at the 13 main branch campuses of Ohio four-year public universities for the entering fall cohorts of 2006 and 2007, who persisted to their junior year.

Over 60 percent of juniors applied for federal financial aid, with an average junior-year EFC of $14,800. Of those who filed a FAFSA, 29.8 percent were eligible to receive a Pell Grant. Their average cumulative GPA when they reach their junior year is 3.09.
The eligibility restrictions of filing a FAFSA and having at least a 3.0 GPA in the junior year reduce the analytical sample to approximately 19,000 students.

Finally, the outcome of interest is whether students choose a STEM major in the junior year. Of the students remaining to the junior year, 19.6 percent choose a STEM field. The most popular non-science majors for students who reach their junior year are psychology, nursing, accounting, and marketing. The most popular STEM fields are biology, engineering, and zoology.

To provide context, a univariate regression of the probability of STEM major on EFC over the entire range of data indicates that a $1,000 increase in EFC is associated with a highly significant 0.06 percentage point increase in the likelihood of choosing a STEM major at junior enrollment. If the range is restricted to EFCs below the mean of $14,800, the slope increases to 0.27 percentage points. As incomes rise, so does the probability of majoring in a STEM field, and this relationship is more pronounced at EFCs below $14,800.

## 6.  Results

This section first presents the regression discontinuity analysis of the effect of SMART Grant eligibility on junior major selection. I then examine the impact of the financial incentives on persistence in STEM majors among students who selected a STEM degree upon initial enrollment.

### Major Choice at Junior Enrollment

Because the SMART Grant is first distributed to students in the junior year, I examine whether eligible students (using junior year EFC) are more likely to choose a SMART Grant–eligible major at that time. This analysis only includes students who persisted into the third year. This limitation is not a threat to internal validity, although it would limit the external validity of this analysis if the financial incentive were applied in the freshman or sophomore years.

#### Continuity of Baseline Covariates around the Threshold

One standard check of the validity of regression discontinuity as an estimation strategy is examining whether any baseline covariates exhibit a discontinuity at the eligibility threshold. I directly test this question through a local linear regression in the form of equation 1 using each baseline covariate as the outcome. I restrict the sample to include only observations within $500 of the eligibility threshold, although results are qualitatively similar using a $1,000 bandwidth. Table 2 reports these results. Only one of the twelve variables appears significantly different.
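The covariate-continuity check just described can be sketched as follows. The data are simulated for illustration, and the sketch uses homoskedastic standard errors for simplicity where the text uses robust ones.

```python
import numpy as np
from math import erf, sqrt

def balance_test(efc_c, covariate, bandwidth=500.0):
    """Within a bandwidth of the cutoff, regress a baseline covariate on
    centered EFC, a treatment indicator, and their interaction; return the
    estimated jump at the threshold and a normal-approximation p-value."""
    keep = np.abs(efc_c) <= bandwidth
    x, y = efc_c[keep], covariate[keep]
    treat = (x <= 0).astype(float)
    X = np.column_stack([np.ones(x.size), treat, x, treat * x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = resid @ resid / (x.size - X.shape[1])  # homoskedastic variance
    cov = sigma2 * np.linalg.inv(X.T @ X)
    z = coef[1] / np.sqrt(cov[1, 1])
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return coef[1], p

# Simulated example: a covariate that is balanced across the threshold
# should show a jump near zero and, typically, a large p-value.
rng = np.random.default_rng(3)
efc_c = rng.uniform(-2_000, 2_000, 20_000)
female = (rng.uniform(size=efc_c.size) < 0.5).astype(float)
jump, p = balance_test(efc_c, female)
```

Running this once per covariate reproduces the structure of table 2: one row per baseline variable, each with a control-side mean, an estimated treatment difference at the threshold, and its p-value.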
Students on the treatment side of the threshold are 13 percentage points more likely to have a mother who completed college than students on the control side. Controlling for parental education in the regression discontinuity analysis eliminates any bias associated with this difference. Given that all of the other covariates demonstrate no discontinuity, this test suggests that proceeding with the analysis is justified.

Table 2. Inspecting Continuity of Baseline Covariates at the Threshold within $500 of Expected Family Contribution (EFC) Threshold at Junior Enrollment
| Variable | Control Mean at Threshold | Treatment Difference from Control at Threshold | p-Value of Difference |
| --- | --- | --- | --- |
| Female | 0.497 | 0.019 | 0.751 |
| Black | 0.061 | 0.012 | 0.724 |
| Asian | 0.034 | 0.005 | 0.816 |
| Hispanic | 0.021 | 0.004 | 0.838 |
| Other race | 0.024 | −0.006 | 0.765 |
| Ohio resident | 0.991 | 0.005 | 0.661 |
| Father completed college | 0.342 | 0.098 | 0.098 |
| Mother completed college | 0.395 | 0.131 | 0.032* |
| ACT English score | 22.072 | 0.776 | 0.247 |
| ACT Math score | 22.405 | 0.700 | 0.266 |
| ACT Reading score | 22.895 | 0.658 | 0.358 |
| ACT Science score | 22.453 | 0.718 | 0.193 |

Notes: Each row is from a linear regression discontinuity model with a bandwidth of $500 of the covariate regressed on EFC centered at the eligibility threshold, an indicator for treatment, and the interaction of treatment and centered EFC using robust standard errors. N = 1,140 for covariates other than the ACT outcomes; N = 933 for the ACT outcomes.

*p ≤ 0.05.

#### Density of Forcing Variable around the Threshold

SMART Grant eligibility in the junior year rests on two continuous components: EFC eligibility and GPA eligibility. Figures 1 and 2 show the McCrary (2008) density tests for each of the potential forcing variables. Although the EFC variable exhibits a drop in density around the threshold, this occurs on both sides and shows no signs of manipulation. In contrast, the GPA variable does exhibit evidence of manipulation. More students are found just above the threshold for eligibility at a cumulative 3.0 GPA than just below it, indicating that some form of manipulation exists.5 This evidence of manipulation precludes using the GPA measure as a forcing variable for the analysis; therefore, the analysis relies only on the EFC eligibility criterion while restricting the data to students who meet the GPA criterion.

Figure 1. Density of the EFC Distribution around the Threshold at Junior Enrollment (McCrary Test; McCrary 2008).

Figure 2. Density of the GPA Distribution around the Threshold at Junior Enrollment (McCrary Test).

#### Visual Inspection of Grant Eligibility Impact on Major Selection

Before turning to the regression results of equation 1, I first present visual evidence of any potential discontinuity in the effect of the financial incentive on students choosing an eligible major.
To visually inspect whether a discontinuity exists around the threshold, figure 3 presents a scatter plot of the probability of majoring in a STEM field against the forcing variable centered at the threshold, using a bin size of $100 to smooth the data and restricting the range to $2,000 around the threshold. Treated students (on the left of the figure) show a remarkably consistent 19 percent likelihood of choosing a STEM major, with no observable slope. The variance in the probability of selecting a STEM major is greater for untreated students, with an intercept of 23.5 percent and a negative slope (although a regression demonstrates that the slope is not significantly different from zero). There is a slight observable discontinuity at the threshold that suggests eligible students are actually less likely to major in a STEM field at junior enrollment. Given that regression discontinuity results are often sensitive to functional form, I test both linear and cubic relationships below, although an obvious nonlinear relationship is not apparent from figure 3.

Figure 3. Scatter Plot of the Probability of Selecting a STEM Major at Junior Enrollment around the EFC Eligibility Threshold.

#### Parametric Regression Analysis

I now turn to the regression results of fitting equation 1 and report results in tables 3 and 4. In contrast to the graphical analysis, which focuses on a range of data near the threshold, the parametric regression analysis uses all of the data on either side of the threshold to predict the limits of the intercept at the threshold from the left and the right. The difference in those two limits is the causal effect of eligibility on selection of a STEM major and is represented by the Eligible variable in the tables.

Table 3.
Linear Parametric Regression Discontinuity Results for STEM Major Selection at Junior Enrollment

| | Model (1) Basic | Model (2) Demographics | Model (3) ACT Scores | Model (4) Parents' Education | Model (5) Campus Dummies |
|---|---|---|---|---|---|
| Eligible | 0.0019 (0.0114) | −0.0017 (0.0121) | −0.0004 (0.0112) | −0.0009 (0.0110) | 0.0007 (0.0113) |
| EFC/1000 | 0.0005 (0.0004) | 0.0004 (0.0004) | −0.0002 (0.0003) | −0.0001 (0.0003) | 0.0001 (0.0002) |
| EFC/1000 × Pell | 0.0043 (0.0035) | 0.0020 (0.0036) | −0.0031 (0.0033) | −0.0030 (0.0034) | −0.0033 (0.0034) |
| Female | | −0.1889*** (0.0219) | −0.1200*** (0.0167) | −0.1203*** (0.0168) | −0.1164*** (0.0162) |
| Black | | −0.0211 (0.0193) | 0.0577** (0.0170) | 0.0574** (0.0172) | 0.0609** (0.0170) |
| Asian | | 0.1750*** (0.0257) | 0.1235*** (0.0271) | 0.1242*** (0.0271) | 0.1126*** (0.0236) |
| Hispanic | | 0.0152 (0.0161) | 0.0549* (0.0195) | 0.0546* (0.0196) | 0.0647** (0.0173) |
| Other race | | 0.0157 (0.0180) | 0.0341** (0.0106) | 0.0342** (0.0108) | 0.0279 (0.0136) |
| Ohio resident | | −0.0869 (0.1076) | −0.0534 (0.0809) | −0.0535 (0.0814) | −0.0407 (0.0803) |
| ACT English | | | −0.0034* (0.0012) | −0.0033* (0.0011) | −0.0028** (0.0009) |
| ACT Math | | | 0.0209*** (0.0021) | 0.0210*** (0.0021) | 0.0212*** (0.0025) |
| ACT Reading | | | −0.0040* (0.0016) | −0.0040* (0.0016) | −0.0036 (0.0017) |
| ACT Science | | | 0.0129*** (0.0022) | 0.0129*** (0.0022) | 0.0125*** (0.0023) |
| Father college | | | | −0.0072 (0.0057) | 0.0014 (0.0047) |
| Mother college | | | | 0.0012 (0.0048) | 0.0056 (0.0047) |
| Campus dummies | | | | | Yes |
| Constant | 0.1947 (0.0252) | 0.3905 (0.1070) | −0.3312 (0.0972) | −0.3303 (0.0984) | −0.2897 (0.0987) |
| Adjusted R2 | 0.0007 | 0.0608 | 0.1344 | 0.1344 | 0.1506 |
| N | 19,037 | 19,037 | 19,037 | 19,037 | 19,037 |

Notes: The dependent variable is selecting a SMART Grant–eligible major at junior-year enrollment. Regression discontinuity results are linear probability models using the EFC discontinuity of $4,041 for juniors in 2008 and $4,617 in 2009 for all juniors with cumulative GPAs ≥ 3.0. Standard errors are clustered at the institution level and reported in parentheses. The sample includes all students who were first-time students at the 13 main branch campuses of Ohio four-year public universities for the entering fall cohorts of 2006 and 2007 who enrolled full-time in their junior year, filed a FAFSA in the junior year, and are not missing ACT scores or parental income. *p ≤ 0.05; **p ≤ 0.01; ***p ≤ 0.001.

Table 4.
Treatment Coefficients and 95 Percent Confidence Intervals for STEM Major Selection at Junior Enrollment

Panel A: Linear Regression

| | Model (1) Basic | Model (2) Demographics | Model (3) ACT Scores | Model (4) Parents' Education | Model (5) Campus Dummies |
|---|---|---|---|---|---|
| Eligible | 0.0019 (0.0114) | −0.0017 (0.0121) | −0.0004 (0.0112) | −0.0009 (0.0110) | 0.0007 (0.0113) |
| 95% CI | [−0.0230, 0.0267] | [−0.0281, 0.0247] | [−0.0248, 0.0240] | [−0.0248, 0.0231] | [−0.0240, 0.0254] |
| Adjusted R2 | 0.0007 | 0.0608 | 0.1344 | 0.1344 | 0.1506 |
| N | 19,037 | 19,037 | 19,037 | 19,037 | 19,037 |

Panel B: Cubic Regression

| | Model (1) Basic | Model (2) Demographics | Model (3) ACT Scores | Model (4) Parents' Education | Model (5) Campus Dummies |
|---|---|---|---|---|---|
| Eligible | 0.0152 (0.0291) | 0.0120 (0.0172) | 0.0097 (0.0173) | 0.0099 (0.0171) | 0.0147 (0.0167) |
| 95% CI | [−0.0266, 0.0470] | [−0.0254, 0.0494] | [−0.0279, 0.0474] | [−0.0275, 0.0472] | [−0.0217, 0.0511] |
| Adjusted R2 | 0.0008 | 0.0609 | 0.1345 | 0.1344 | 0.1507 |
| N | 19,037 | 19,037 | 19,037 | 19,037 | 19,037 |

Panel C: Local Linear Regression Using a $2,500 Bandwidth

| | Model (1) Basic | Model (2) Demographics | Model (3) ACT Scores | Model (4) Parents' Education | Model (5) Campus Dummies |
|---|---|---|---|---|---|
| Eligible | −0.0400 (0.0220) | −0.0267 (0.0250) | −0.0355 (0.0218) | −0.0355 (0.0219) | −0.0289 (0.0247) |
| 95% CI | [−0.0878, 0.0079] | [−0.0811, 0.0277] | [−0.0830, 0.0121] | [−0.0833, 0.0123] | [−0.0828, 0.0250] |
| Adjusted R2 | 0.0007 | 0.0775 | 0.1356 | 0.1353 | 0.149 |
| N | 3,523 | 3,523 | 3,523 | 3,523 | 3,523 |
Panel C: Local Linear Regression Using a $2,500 Bandwidth Model (1) Basic Model (2) Demographics Model (3) ACT Scores Model (4) Parents’ Education Model (5) Campus Dummies Eligible −0.0400 −0.0267 −0.0355 −0.0355 −0.0289 (0.0220) (0.0250) (0.0218) (0.0219) (0.0247) [−0.0878 [−0.0811 [−0.0830 [−0.0833 [−0.0828 0.0079] 0.0277] 0.0121] 0.0123] 0.0250] Adjusted R2 0.0007 0.0775 0.1356 0.1353 0.149 N 3,523 3,523 3,523 3,523 3,523 Notes: The dependent variable is selecting a SMART Grant–eligible major at junior-year enrollment. Results are from regression discontinuity models around the EFC discontinuity of$4,041 for juniors in 2008 and $4,617 in 2009 using a linear probability model for all juniors with cumulative GPAs ≥ 3.0. The sample includes all students who were first-time students at the 13 main branch campuses of Ohio four-year public universities for the entering fall cohorts of 2006 and 2007 who enrolled full-time in their junior year, filed a FAFSA in the junior year, and are not missing ACT scores or parental income. Demographics include controls for gender, race, and in-state residency. Standard errors are clustered at the institution level. Table 3 reports results from five different linear models. In each model, the EFC is divided by 1,000 to make the results easier to interpret. For example, a$1,000 increase in EFC is associated with a 0.05 percentage point increase in the likelihood of selecting a STEM major at junior enrollment in model 1.

Model 1 reports the basic regression discontinuity model allowing for different linear trends on either side of the discontinuity. There is a positive point estimate of the effect, but it is small in magnitude and statistically indistinguishable from zero. Models 2–5 sequentially add demographic variables, ACT scores, parents' education, and indicator variables for each campus. If the regression discontinuity assumption holds that only the treatment is discontinuous across the threshold, then students should be randomly distributed around the threshold. This implies that the addition of covariates should not change the observed effect of the discontinuity. Although the inclusion of covariates tends to slightly reduce the point estimates, this is most likely due to noise. The estimates remain small, and the changes in the point estimates across models are very small in absolute terms. Collectively, the results suggest that there is no discontinuity in the propensity to select a STEM major upon junior enrollment around the EFC threshold for eligibility for the SMART Grant.
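The basic specification can be sketched in a few lines. The following is an illustrative Python sketch, not the paper's actual estimation code: it fits the linear regression discontinuity model with separate trends on each side of the threshold using ordinary least squares on synthetic data (the variable names and the built-in 5 percentage point jump are assumptions for illustration, not values from the Ohio records).

```python
import numpy as np

def rd_linear(efc_centered, eligible, stem):
    """Basic linear RD: regress the STEM indicator on an eligibility dummy,
    the centered forcing variable, and their interaction, allowing different
    slopes on each side of the cutoff. Returns the estimated jump."""
    X = np.column_stack([np.ones_like(efc_centered), eligible,
                         efc_centered, eligible * efc_centered])
    beta, *_ = np.linalg.lstsq(X, stem, rcond=None)
    return beta[1]  # coefficient on the eligibility dummy

# Synthetic illustration with a built-in jump of 5 percentage points
rng = np.random.default_rng(1)
efc = rng.uniform(-4000, 4000, 50000)        # EFC centered at the threshold
elig = (efc <= 0).astype(float)              # treated students lie below the cutoff
p_stem = 0.20 + 0.05 * elig + 0.00001 * efc  # latent probability of a STEM major
stem = (rng.uniform(size=efc.size) < p_stem).astype(float)
print(round(rd_linear(efc, elig, stem), 2))
```

With a true jump built in, the estimate recovers roughly 0.05; applied to data with no discontinuity, it would hover near zero, as in table 3.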

The standard errors for estimated treatment effects are many times larger than the point estimates, so it is interesting to consider the range of plausible values of the treatment effect. Panel A of table 4 displays the point estimates, standard errors, and 95 percent confidence intervals of the treatment effect for models 1–5. The 95 percent confidence intervals show that plausible treatment effects range from eligible students being about 2.5 percentage points less likely to choose a STEM major to 2.5 percentage points more likely. These results rule out any large effects of the program.

Because the parametric regression model estimates are potentially sensitive to functional form specification, I also implement a cubic model as a robustness check. Table 4 reports in panel B the point estimates, standard errors, and 95 percent confidence intervals for the treatment coefficient for the cubic regression models. Although the point estimates for the cubic regressions are larger in magnitude than the linear regressions, in each case, the point estimate remains small and insignificant. The standard errors in the cubic regression are greater than those from the linear regression, however, so the estimates are less precise.
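The cubic robustness check amounts to extending the design matrix with higher-order terms in the forcing variable, each interacted with eligibility. A minimal sketch on synthetic data (hypothetical variable names; the 5 percentage point jump is an assumption for illustration):

```python
import numpy as np

def rd_poly(efc_centered, eligible, stem, degree=3):
    """RD with a polynomial in the forcing variable, fully interacted with
    eligibility; degree=1 reproduces the linear model and degree=3 the cubic
    robustness check. EFC is rescaled to thousands (as in table 3) to keep
    the design matrix well conditioned."""
    x = efc_centered / 1000.0
    cols = [np.ones_like(x), eligible]
    for d in range(1, degree + 1):
        cols += [x ** d, eligible * x ** d]
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), stem, rcond=None)
    return beta[1]  # jump at the cutoff

# Synthetic data with a 5 percentage point jump at the cutoff
rng = np.random.default_rng(3)
efc = rng.uniform(-4000, 4000, 100000)
elig = (efc <= 0).astype(float)
stem = (rng.uniform(size=efc.size) < 0.20 + 0.05 * elig).astype(float)
print(round(rd_poly(efc, elig, stem, degree=1), 3),
      round(rd_poly(efc, elig, stem, degree=3), 3))
```

The cubic estimate is noisier than the linear one because the boundary fit leans on more parameters, mirroring the larger standard errors in panel B of table 4.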

#### Nonparametric Regression Analysis

Using observations with very high and low EFCs to predict the outcome at the threshold may introduce unnecessary bias to the estimation procedure. If the chosen functional form is inaccurate, the estimated points at the threshold from the left and the right will be incorrect, resulting in a biased treatment effect. Therefore, I use a nonparametric approach to check the sensitivity of my results. In practice, I use local linear regression on a bandwidth of data around the eligibility cutoff and implement a cross-validation procedure to choose the optimal bandwidth. This procedure eliminates any bias caused by points very far from the eligibility threshold by estimating a linear regression using only points closer to the threshold, but it imposes a linear functional form. Panel C of table 4 presents the results of this analysis using a bandwidth of $2,500 on either side of the eligibility threshold.6 As demonstrated in table 5, results are robust to bandwidth selection: The point estimate is negative for every bandwidth above $100, and the result is statistically indistinguishable from zero.
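A local linear estimator with a uniform kernel can be sketched as the linear RD model restricted to observations within the bandwidth. The example below is a synthetic-data illustration (hypothetical names and a built-in 5 point jump), with the loop over bandwidths mimicking the robustness exercise, not the paper's cross-validation procedure itself:

```python
import numpy as np

def local_linear_rd(efc_centered, eligible, stem, bandwidth):
    """Local linear RD with a uniform kernel: fit the linear RD model using
    only observations within `bandwidth` dollars of the cutoff."""
    keep = np.abs(efc_centered) <= bandwidth
    x, e, y = efc_centered[keep], eligible[keep], stem[keep]
    X = np.column_stack([np.ones_like(x), e, x, e * x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]  # jump at the cutoff

# Synthetic data with a 5 percentage point jump; estimates across a grid
# of bandwidths mimic the robustness check reported in table 5
rng = np.random.default_rng(4)
efc = rng.uniform(-4000, 4000, 200000)
elig = (efc <= 0).astype(float)
stem = (rng.uniform(size=efc.size) < 0.20 + 0.05 * elig).astype(float)
for h in (500, 1000, 2500):
    print(h, round(local_linear_rd(efc, elig, stem, h), 3))
```

As the bandwidth shrinks, the estimate becomes less biased by distant observations but noisier, which is the trade-off a cross-validated bandwidth is meant to balance.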

Table 5.
Robustness to Bandwidth Selection at Junior Year Enrollment for Local Linear Regressions

| Bandwidth Around Threshold ($) | Treatment Point Estimate | Clustered Standard Error | Number of Observations |
|---|---|---|---|
| 100 | 0.0123 | 0.2355 | 118 |
| 200 | −0.0451 | 0.1539 | 264 |
| 500 | −0.0174 | 0.0625 | 670 |
| 1,000 | −0.0710 | 0.0398 | 1,414 |
| 1,500 | −0.0280 | 0.0315 | 2,133 |
| 2,000 | −0.0095 | 0.0277 | 2,828 |
| 3,000 | −0.0321 | 0.0207 | 4,221 |
| 3,500 | −0.0255 | 0.0209 | 4,923 |

Notes: The dependent variable is selecting a SMART Grant–eligible major at junior-year enrollment. Results are from local linear regression discontinuity models for the appropriate bandwidth around the EFC discontinuity of $4,041 for juniors in 2008 and $4,617 in 2009 using a linear probability model for all juniors with cumulative GPAs ≥ 3.0. Each regression includes controls for gender, race, in-state residency, parental education, ACT score, and institution of attendance indicators. Standard errors are clustered at the institution level.

The estimated treatment effect using local linear regression at the optimal bandwidth is approximately −3 percentage points, although the result is statistically indistinguishable from zero due to the high standard error caused by the smaller sample size near the threshold. The nonparametric estimation technique confirms what the graphical analysis suggests and linear parametric regression results show: The program likely had a negative or null effect, and positive effects larger than 2.77 percentage points can be ruled out. Eligibility for the SMART Grant does not increase the probability of a student choosing a STEM major in his junior year of college enrollment.

### Persistence in STEM Major

There does not appear to be a positive effect of financial incentives on major selection at junior year enrollment, although it is possible the financial incentives encourage students who are initially interested in a STEM major to persist within that major. Many students leave STEM fields to pursue other avenues, such as business or social sciences (Bettinger 2010), but perhaps the financial incentive keeps eligible students engaged in a STEM major in their junior year. I estimate the causal effect of the financial incentive on persistence within STEM majors by replicating the above analysis with one additional sample restriction. As in the above analysis, the sample only includes students who began college in 2006 or 2007 and persisted to the junior year, such that junior year major choice is observed. The additional restriction is that the sample only includes students who choose a STEM major when they initially enroll in college. Any observed difference in the likelihood of being a STEM major at the threshold in the junior year serves as an estimated treatment effect on the subpopulation initially interested in STEM fields.

This analysis begins with the subset of 2006 and 2007 entrants who initially chose a STEM major. It then examines whether eligibility for the $4,000 grant in the junior and senior years improves the probability of still being enrolled in a science major at the beginning of the junior year. I again consider concerns regarding manipulation of the forcing variables. Because eligibility in the junior year is conditional on both an EFC and a GPA cutoff, I examine both thresholds. As shown in figures 4 and 5, neither forcing variable exhibits evidence of manipulation; therefore, both forcing variables can be used in the data analysis.

Figure 4. Density of the EFC Distribution around the Threshold at Junior Enrollment for Students Initially Selecting a STEM Major (McCrary Test).

Figure 5. Density of the GPA Distribution around the Threshold at Junior Enrollment for Students Initially Selecting a STEM Major (McCrary Test).

As discussed in the empirical strategy section, I estimate the effect of simultaneously being eligible on both the EFC and GPA criteria. I fit a linear regression using all of the available data instead of limiting the analysis to a bandwidth around the thresholds.7 The interaction of eligibility on the EFC and GPA measures is the coefficient of interest and is reported in table 6 for several different specifications. This coefficient estimates the effect of being eligible on both measures over and above the impact of being eligible on just one of the criteria. The estimated result using the full model including covariates and institutional dummies is −0.012 with a standard error of 0.0876.
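A simplified sketch of this two-criteria specification, on synthetic data rather than the Ohio records (the indicator construction and the built-in 5 point joint effect are assumptions for illustration; the paper's full model also includes covariates and institution dummies):

```python
import numpy as np

def rd_two_thresholds(efc_centered, gpa_centered, stem):
    """Simplified two-criteria RD: indicators for meeting the EFC cutoff and
    the GPA cutoff, linear trends in both forcing variables, and the
    interaction of the two eligibility indicators, whose coefficient is the
    effect of meeting both criteria jointly."""
    e_efc = (efc_centered <= 0).astype(float)  # EFC at or below the threshold
    e_gpa = (gpa_centered >= 0).astype(float)  # cumulative GPA of 3.0 or above
    X = np.column_stack([np.ones_like(stem), e_efc, e_gpa, e_efc * e_gpa,
                         efc_centered / 1000.0, gpa_centered])
    beta, *_ = np.linalg.lstsq(X, stem, rcond=None)
    return beta[3]  # effect of joint eligibility beyond either criterion alone

# Synthetic data in which only joint eligibility carries a 5 point effect
rng = np.random.default_rng(5)
n = 80000
efc = rng.uniform(-2000, 2000, n)   # EFC centered at its cutoff
gpa = rng.uniform(-0.5, 0.5, n)     # GPA centered at 3.0
both = ((efc <= 0) & (gpa >= 0)).astype(float)
stem = (rng.uniform(size=n) < 0.20 + 0.05 * both).astype(float)
print(round(rd_two_thresholds(efc, gpa, stem), 3))
```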
Consistent with the previous findings, the point estimate is close to zero, indicating that eligibility for the SMART Grant does not improve the likelihood of students persisting in a STEM major if they initially choose one upon matriculation. Unfortunately, the measure is very imprecise, as the standard error has ballooned to roughly seven times the point estimate. Only slightly more than 5,000 students who began with a STEM major in their first year of college maintained enrollment and filed a FAFSA in their junior year, which severely limits the sample size. Additionally, the covariates explain less of the variation in STEM major selection in the junior year among students who initially selected a STEM major, and the estimation procedure estimates additional parameters, resulting in fewer degrees of freedom, which also increases the standard error. Despite these limitations, the results provide no evidence that financial incentives encouraged students who initially began with a STEM major to persist in their STEM intentions. Table 6.
Treatment Coefficients and 95 Percent Confidence Intervals for STEM Major Selection at Junior Enrollment Conditional on Initially Selecting a STEM Major at Freshman Enrollment, Using Both Expected Family Contribution (EFC) and GPA Eligibility Thresholds

| Linear Regression | Model (1) Basic | Model (2) Demographics | Model (3) ACT Scores | Model (4) Parents' Education | Model (5) Campus Dummies |
|---|---|---|---|---|---|
| Eligible | 0.0006 (0.1064) | −0.0274 (0.0964) | −0.0067 (0.0914) | −0.0053 (0.0916) | −0.0121 (0.0876) |
| 95% CI | [−0.2312, 0.2325] | [−0.2373, 0.1826] | [−0.2057, 0.1924] | [−0.2050, 0.1944] | [−0.2029, 0.1787] |
| Adjusted R2 | 0.0134 | 0.0334 | 0.0541 | 0.0542 | 0.0596 |
| N | 5,047 | 5,047 | 5,047 | 5,047 | 5,047 |

Notes: The dependent variable is selecting a SMART Grant–eligible major at junior-year enrollment. Results are regression discontinuity models using both the GPA discontinuity of 3.0 and the EFC discontinuity of $4,041 for juniors in 2008 and $4,617 in 2009, using a linear probability model for all juniors who initially selected a SMART Grant–eligible major in their freshman year among all first-time freshmen at the 13 main branch campuses of Ohio four-year public universities for the entering fall cohorts of 2006 and 2007. It only includes students who enrolled full-time and filed a FAFSA in the junior year and are not missing ACT scores or parental income. Demographics include controls for gender, race, and in-state residency. Standard errors are clustered at the institution level.

## 7. Discussion and Extensions

The results provide fairly clear evidence that financial incentives in the junior year do not encourage students at the threshold of eligibility to study STEM fields in higher education. This null result is somewhat unexpected in the context of the financial aid literature. Although positive impact estimates for the Pell Grant program are difficult to find (Kane 1995), the field has developed a consensus that $1,000 of financial aid increases college enrollment probabilities by 3 to 4 percentage points (Deming and Dynarski 2010). The impact of financial aid on college persistence has also garnered attention. In two papers, Bettinger (2004, 2015) finds that Pell Grants likely improve persistence and that a $750 increase in state grant aid decreased dropout rates by 2 percentage points. Castleman and Long (2013) focus on degree receipt and use regression discontinuity to estimate that $1,000 in grant aid increases the chance of obtaining a degree by 4.6 percentage points. At least one other study, however, finds a null effect of financial aid: Goldrick-Rab et al. (2011) find neither an enrollment nor a persistence effect in a randomized experiment providing need-based financial aid to students.

Given the predominantly positive results from prior literature, it may be surprising that $4,000 in grant aid is not enough to shift at least some students in a low-income population into studying a STEM field. There are several possible explanations for this result that are worth exploring. One commonly cited reason why studies of Pell Grants have failed to produce results similar to other aid programs is that applying for the aid is difficult because of the complex nature of the FAFSA (Bettinger et al. 2012). That argument does not apply to the findings in this study, which is limited to students who have already filed a FAFSA. As noted above, there is the potential for interplay with other forms of financial aid at the eligibility threshold for the SMART Grant because students at the threshold also receive a small amount of Pell Grant aid. If several hundred dollars of Pell Grant aid has a positive effect on majoring in STEM, it would make detecting an effect more likely. If the Pell Grant has a negative effect on choosing a STEM major, it would counter the presumed positive effect of the financial incentive and could partly explain the null findings. Although the literature is silent on the effect of the Pell Grant on major choice, several studies have investigated the relationship between other forms of financial aid and major choice, with varying results. Rothstein and Rouse (2011) investigate the impact of student loans on major choice and demonstrate that debt shifts students toward “employment” majors, such as economics and engineering, at one highly selective university. Their findings imply that grant aid would shift students away from STEM fields, a prediction directly tested by Sjoquist and Winters (2015), who use an interrupted time series strategy to investigate the effects of the Georgia HOPE scholarship on STEM major choice.
They find that the HOPE scholarship, which covers in-state tuition at public institutions, reduces the probability of completing a STEM major by 2.5 percentage points. One plausible explanation for this effect is that a minimum GPA is required to maintain the scholarship, thus inducing students to switch out of STEM majors (which have lower average GPAs). In contrast to the negative effect of grant aid on STEM major selection, DesJardins and McCall (2014) use regression discontinuity to show that receiving an average of $8,000 to $11,000 of grant aid has no effect on major choice for Gates Millennium Scholarship recipients. Without observing other forms of aid receipt, it is difficult to know exactly how financial aid that is not conditional on major choice would interact with grant aid that is conditional on major choice within this dataset. Although Ohio does not have an extensive merit aid program like Georgia's, perhaps the SMART Grant is countering other forms of grant aid that deter students from majoring in STEM, either by reducing the need to choose an “employment major” or to maintain a certain GPA, such that the net effect is zero. For this rationale to explain the null findings, the receipt or effect of other sources of aid would also have to be discontinuous across the threshold for SMART Grant eligibility. Although I cannot test this in the Ohio dataset, it is possible to do so in nationally representative data using the National Postsecondary Student Aid Study in 2007–08 (NPSAS:08). By implementing regression discontinuity and regression kink designs using NPSAS:08, Turner (2014) provides evidence that other forms of grant aid are discontinuous across the EFC threshold. She demonstrates that Pell Grants can supplant institutional grant aid. If students know, or suspect, that this practice occurs for the SMART Grant, it would reduce the financial incentive of the grant.
Turner shows empirically that this practice is less likely to reduce overall aid at public institutions (the pertinent sample in my dataset) because of the countervailing force of giving students more institutional aid in order to attract Pell recipients to the institution. The net effect of these aid shifts is only on the order of a few hundred dollars at public institutions, which suggests that the large $4,000 incentive of the SMART Grant would not be meaningfully affected.

I replicate part of Turner's analysis using NPSAS:08 but restrict the sample to those eligible for the SMART Grant (juniors and seniors who are U.S. citizens enrolled full-time with GPAs greater than or equal to 3.0). The regression discontinuity analysis shows that, at public institutions, the effect on institutional grant aid of being just eligible at the EFC threshold is only $86, which is not statistically significant. There is, however, a positive effect on state grant aid of approximately $700. Instead of the Pell and SMART Grants supplanting other forms of grant aid, it appears these students are more likely to receive slightly higher aid amounts. The effects remain small relative to the $4,000 incentive of the SMART Grant, which we would expect to dominate, so this explanation does not fully account for the null findings.

Another potential explanation for the null findings is the need for early preparation for science degrees. By the time a student reaches the junior year and determines that he is eligible for the program, it is likely too late to switch into a STEM major because of the extensive prerequisites of most STEM degrees. Undertaking those prerequisites and completing the STEM major would likely extend the time to degree, radically diluting the value of the incentive.

This analysis highlights two poor design features of the SMART Grant program. First, because of the lengthy preparation required for STEM majors, starting prerequisite coursework early is essential. Yet the financial incentive only begins two years after full-time students start their coursework. A better program design would provide a financial incentive in the first two years so that students can immediately benefit by enrolling in STEM coursework. To the extent that students have high discount rates, a financial incentive two years away is weak motivation to take action when they initially enroll.
Second, this problem is compounded by varying eligibility requirements, driven both by annual changes in a student's EFC and by changes in the EFC eligibility threshold itself. Potentially, some low-income students would, upon initial enrollment, be motivated to choose a STEM major because of the $4,000 incentive they would receive in two years. However, many of those students would likely misestimate their future eligibility if they based their prediction on their first-year EFC. Not only do EFCs change annually due to shifts in parental and student income and family situation, but the threshold for eligibility also changes based on Congressional appropriations. For students close to the threshold of eligibility, it is likely impossible to predict eligibility two years in the future.
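This unpredictability can be illustrated with a small simulation. The drift parameters and threshold levels below are hypothetical assumptions chosen for illustration, not estimates from the Ohio data:

```python
import numpy as np

# Illustrative simulation (hypothetical parameters): for students who start
# just below the eligibility cutoff, annual EFC shocks plus a threshold that
# itself moves between years make junior-year eligibility hard to predict
# from the freshman-year EFC.
rng = np.random.default_rng(6)
n = 100000
cutoff_y1, cutoff_y3 = 4000.0, 4600.0                 # assumed threshold levels
efc_y1 = rng.uniform(cutoff_y1 - 1000, cutoff_y1, n)  # eligible freshmen near the cutoff
efc_y3 = efc_y1 + rng.normal(0, 1500, n)              # assumed two-year EFC drift
share_losing = np.mean(efc_y3 > cutoff_y3)            # no longer eligible as juniors
print(round(share_losing, 2))
```

Even with modest assumed drift, a sizable share of initially eligible students lose eligibility two years later, consistent with the near coin-flip pattern described in the text.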

An empirical example is illustrative: if the junior-year eligibility criteria were applied to the 2006 and 2007 cohorts' freshman-year EFCs, then 1,656 students who eventually persisted to the junior year would have been eligible within $1,000 of the EFC cutoff. Forty-six percent of those students were no longer eligible when they actually reached their junior year. For students initially close to the eligibility threshold, whether they will still be eligible two years later is close to a fifty–fifty proposition.8 These changing eligibility criteria substantially weaken the financial incentive's power to shift students into studying STEM fields. If the program were designed to provide an early incentive that can be realized at college entry, it might induce some students to prepare for a science major at the beginning of college, thereby leading to an increase in STEM degrees.

An additional explanation of the findings is that students might not be aware that the program exists. Zafar's (2011) work indicates that students will respond to new information and adjust their expectations, but if the information never reaches them, they have no reason to adjust their major based on a potentially large improvement in their financial aid award. The extent to which students were aware of the SMART Grant is an open question. During the first year of the program, the U.S. Department of Education sent e-mail and postal mail notifications to students who, based on their financial aid application, appeared eligible on the nonacademic requirements for the ACG and/or SMART Grants (Choy et al. 2012). Such notifications do not appear to have been sent during subsequent years. Instead, the Department of Education relied on outreach efforts from states and institutions to include information about the grant in their financial aid materials.
Institutions varied in their efforts to promote the program and conduct outreach to students to increase awareness, but there is concern that many students were unaware of the program (Choy et al. 2012). A policy report on the status of the SMART Grant program uses student interviews from NPSAS:08 to show that only 5 percent of low-income college juniors and seniors indicated familiarity with the SMART Grant program (Choy et al. 2011). On the surface, this seems an extremely low percentage, but it does not account for the fact that only a subset of low-income students have eligible GPAs and will consider STEM fields, so the proportion of truly eligible students aware of the program may be much higher. Accounting for these additional qualifications, my own analysis using NPSAS:08 reveals that over 35 percent of qualified students were aware of the program. Although that is seven times larger than among all low-income juniors and seniors, the lack of program awareness could be a substantial explanation for why the program had no effect near the threshold for eligibility. This conclusion points to poor program design and implementation rather than to students not being motivated by financial incentives.

A final potential explanation for the findings is that the grant is actually quite small compared with the lifetime earnings of STEM graduates. The average annual income of science majors with only a bachelor's degree was $66,750 in 2010, compared with $55,333 for non-science majors with only a bachelor's degree (Carnevale, Cheah, and Strohl 2012). The $8,000 in total financial incentive over the junior and senior years is dwarfed by the more than $10,000 difference in annual incomes between these two groups.
If the earnings differential over a lifetime is not enough to motivate students to select a STEM major, it is unlikely that the additional SMART Grant money can induce them to change their behavior unless their discount rates are exceptionally high. This interpretation suggests that some other factor may deter students from pursuing science fields despite the financial reward. The most likely explanation is ability. Arcidiacono (2004) considers the sorting of high-ability college students into majors and finds that high-ability students are more likely to sort into STEM fields. He attributes this phenomenon to student preference for STEM fields over the competing explanation of a desire for high earnings, because he finds that students with high math ability have greater earnings across all majors. Combined with Stinebrickner and Stinebrickner's (2011) results showing that students select out of science majors because of poor academic performance, a clear pattern emerges: ability and academic performance drive STEM major selection. I investigate whether the grant affects high-ability students in my sample by fitting the regression discontinuity model for junior year enrollment with an interaction for ACT math score. I do not find any effect. This result implies higher-ability students are not moving into STEM fields as a result of the grant, although many of those students are already enrolled in STEM majors. It is still likely that many students with inadequate academic preparation are interested in science but avoid STEM majors for fear of poor academic performance. There is an important policy implication if ability and academic performance are the leading factors deterring students from STEM majors.
Investing in mathematics and scientific preparation before college, and in quality teaching and academic support during college, may have a greater payoff than providing financial incentives to students who are not able to maintain a STEM major even if they want to.

It is also possible that the sample in this study, four-year public colleges in Ohio, is not representative of the broader population. Ohio is a fairly typical state demographically, but it is possible the effect of the program at private colleges is different. Still, two-thirds of all SMART Grant recipients nationally attend four-year public institutions, so these results suggest the program is not having an impact on a large majority of students receiving the funds. Regression discontinuity imposes a few other limitations. Because the threshold for eligibility for the SMART Grant depends on EFC, the results are only applicable to financial aid applicants. It seems unlikely, however, that providing financial incentives to upper-income students would have a larger impact on major selection than for lower-income students. The major disadvantage of the regression discontinuity method is that it relies only on variation occurring around the threshold to estimate a treatment effect; hence, the estimates are causal only for students close to the threshold. Although the estimates apply to low-income students who are marginally eligible for Pell Grants, I cannot estimate the impact of financial incentives on the very poorest students, who have an EFC of $0. It is possible that the impact of the financial incentive is larger for those students; those students are already receiving over $4,000 in Pell Grant awards, however, so the additional financial incentive might have less impact.

Interestingly, the SMART Grant program ended in 2011. As no causal estimate of the impact of the program existed before now, this decision was not based on evidence.
Discussions with policy makers involved in federal financial aid policy lead me to conclude that the program was ended for three reasons. The first is predominantly political: the program was established under President George W. Bush, and the Obama administration was interested in implementing its own policies. Second, many financial aid administrators complained about the additional burden imposed upon them to verify eligibility. Finally, there were some (very valid) concerns over the design of the program, and discussions are ongoing to determine whether a more effective design could be implemented.

It is also interesting to consider how this program compares with other federal initiatives designed to increase the STEM workforce. The National Science Foundation's (NSF's) Graduate Research Fellowship and NSF's Scholarships in STEM are two such programs. Although we do not have good causal evidence on the effect of either program, we do have some evidence of their efficacy. In an analysis of over fifty years of data on NSF Graduate Research Fellowship applicants and recipients, Freeman, Chang, and Chiang (2009) find that graduate students are responsive to the size of the grant and conclude that the fellowship does encourage top academic students to continue studying STEM fields. Although these results are based on a large amount of data, they are neither causal nor necessarily applicable to other STEM-based scholarship programs, especially at the undergraduate level. Still, it is plausible that students would be more responsive to a larger SMART Grant award. The NSF Scholarships in STEM program provides grants directly to institutions to distribute to needy students studying science at all levels.
The investment of $50 to $70 million annually is only a fraction of the $200 million spent annually on the SMART Grant, but the scholarships may be better targeted to students who are actually going to pursue STEM degrees and careers. The downside of both of these NSF investments is that neither may serve to actually increase interest in STEM majors at the undergraduate level.
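The earlier comparison between the $8,000 grant and the STEM earnings differential can be made concrete with a back-of-the-envelope present-value calculation. The assumptions below — a 40-year career, earnings beginning two years after the incentive is received — are purely illustrative and are not estimates from this paper; only the annual gap comes from the Carnevale, Cheah, and Strohl (2012) figures cited above:

```python
def pv_annuity(payment, rate, years, delay=0):
    """Present value of a constant annual payment stream, optionally delayed."""
    return sum(payment / (1 + rate) ** t
               for t in range(1 + delay, years + 1 + delay))

GAP = 66750 - 55333   # annual earnings differential (Carnevale, Cheah, and Strohl 2012)
GRANT = 8000          # total SMART Grant over the junior and senior years

for r in (0.03, 0.05, 0.10, 0.25):
    pv = pv_annuity(GAP, r, years=40, delay=2)  # earnings begin ~2 years out
    print(f"r = {r:>4.0%}: PV of differential = ${pv:>9,.0f} ({pv / GRANT:.0f}x the grant)")
```

Only at extraordinarily high discount rates does the grant begin to rival the differential, consistent with the argument above that the grant's size alone is unlikely to be decisive for a student weighing lifetime earnings.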

## 8. Conclusion

This paper uses regression discontinuity on student-level longitudinal administrative data to estimate the treatment effect of a $4,000 financial incentive in the junior and senior years of college on STEM major choice. The treatment effect estimates are close to zero and are robust to functional form for parametric estimation techniques and to bandwidth selection for local linear regression. The estimates rule out positive effects of the program larger than about 2.5 percentage points. Not only does the SMART Grant program not encourage students to choose a STEM field at junior year enrollment, but the financial aid does not appear to encourage students who initially select a STEM field to persist in that major at higher rates than ineligible students (although the sample size for this estimate is small, resulting in imprecise measurements). The U.S. federal government spends $60 billion annually on science research, and this investment is justified largely in terms of the economic benefits basic and applied research generates to keep our economy competitive in the global marketplace (Saha and Weinberg 2010). The success of this investment inherently rests on the supply of human capital capable of conducting and consuming scientific research. If we are to increase the number of domestically produced STEM majors entering the national labor market, we must find more effective methods of encouraging STEM major selection and persistence in our institutions of higher education. Better targeted financial incentives may induce students to prepare for STEM fields, but investments in improving academic preparation in math and science at the secondary level may lead to even better outcomes.
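For readers unfamiliar with the estimation approach summarized above, a minimal sketch of a local linear regression discontinuity estimator follows. It uses simulated data with no true jump at a hypothetical cutoff; it illustrates the core method only, not this paper's specification or data:

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (simple regression)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

def rd_estimate(running, outcome, cutoff, bandwidth):
    """Local linear RD: fit a line on each side of the cutoff within the
    bandwidth; the treatment effect is the jump in fitted values at the cutoff."""
    left = [(r - cutoff, y) for r, y in zip(running, outcome)
            if -bandwidth <= r - cutoff < 0]
    right = [(r - cutoff, y) for r, y in zip(running, outcome)
             if 0 <= r - cutoff <= bandwidth]
    a_left, _ = fit_line([r for r, _ in left], [y for _, y in left])
    a_right, _ = fit_line([r for r, _ in right], [y for _, y in right])
    return a_right - a_left

# Simulated data: EFC-like running variable, binary STEM-major outcome, no true jump
random.seed(1)
efc = [random.uniform(0, 8000) for _ in range(5000)]
stem = [1 if random.random() < 0.15 + 0.00001 * e else 0 for e in efc]
print(rd_estimate(efc, stem, cutoff=4000, bandwidth=1000))
```

The paper's actual analysis additionally involves covariates, data-driven bandwidth selection, and inference; the sketch conveys only the basic idea of comparing fitted intercepts at the threshold.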

## Notes

1.

There is some debate over the exact definition of a STEM major or field. For the purpose of this paper, I take all of the majors eligible under the federal SMART Grant program as STEM majors. Choy et al. (2011) list the complete set of eligible majors.

2.

Using GPA as a forcing variable is suspect because of the potential for manipulation. Unlike the EFC eligibility criterion, whose exact value was unknown when completing the FAFSA, the 3.0 GPA requirement was publicly announced. Students could have manipulated their class schedules and petitioned instructors for higher grades to just exceed the GPA eligibility threshold. Evidence of this manipulation exists in the density around the threshold, as shown through the McCrary test (figure 2).

3.

See Reardon and Robinson (2012), Imbens and Zajonc (2011), and Papay, Willett, and Murnane (2011).

4.

It is likely that some of the students eligible for the SMART Grant are also eligible to receive the Academic Competitiveness Grant (ACG), which was established concurrently with the SMART Grant. The ACG gives eligible students $750 in their first year and $1,300 in their second year of postsecondary study, but it does not provide any funds during the third and fourth years, when students are eligible to receive SMART Grant funds. Hence, the ACG does not bias the current results.

5.

An alternative explanation to manipulation is that GPA is lumpy, with large spikes at GPAs such as 3.0 or 3.3 (B+). A histogram of GPAs from 2.5 to 3.5 demonstrates that the bunching at 3.0 is anomalous, confirming the evidence of manipulation.

6.

I use a range of 5 percent of the data on either side of the discontinuity to estimate the optimal bandwidth, although using 10 percent or 25 percent of the data provides similar results.

7.

Because of the number of coefficients estimated and conditioning on initial STEM major selection and persistence to the junior year, the standard errors are already very high. Further limiting the bandwidth around both thresholds results in extremely imprecise estimates.

8.

A more formal regression discontinuity analysis using freshman year EFC as a forcing variable and junior year eligibility as the dependent variable confirms there is no discontinuity in junior eligibility around the eligibility threshold in the freshman year at bandwidths close to the threshold. Results available from the author upon request.

## Acknowledgments

I am grateful to Eric Bettinger, Caroline Hoxby, and two anonymous referees for their excellent and insightful comments. I also received helpful comments from seminar participants at the Center for Education Policy Analysis at Stanford University and the Association for Education Finance and Policy annual meeting. I wish to thank Andy Lechler at the Ohio Board of Regents for assistance with the data and Chris Marsicano for his assistance with the NPSAS analysis. Funding from the Institute of Education Sciences (IES) grant #R305B090016 made this research possible, but the findings and conclusions are my own and not approved by IES or the U.S. Department of Education.

## REFERENCES

Angrist, Joshua D., Daniel Lang, and Philip Oreopoulos. 2009. Incentives and services for college achievement: Evidence from a randomized trial. American Economic Journal: Applied Economics 1(1): 136–163. doi:10.1257/app.1.1.136.

Angrist, Joshua D., and Jörn-Steffen Pischke. 2009. Mostly harmless econometrics: An empiricist's companion. Princeton, NJ: Princeton University Press.

Arcidiacono, Peter. 2004. Ability sorting and returns to college major. Journal of Econometrics 121(1–2): 343–375. doi:10.1016/j.jeconom.2003.10.010.

Arcidiacono, Peter, V. Joseph Hotz, and Songman Kang. 2012. Modeling college major choices using elicited measures of expectations and counterfactuals. Journal of Econometrics 166(1): 3–16. doi:10.1016/j.jeconom.2011.06.002.

Beffy, Magali, Denis Fougère, and Arnaud Maurel. 2012. Choosing the field of study in postsecondary education: Do expected earnings matter? Review of Economics and Statistics 94(1): 334–347. doi:10.1162/REST_a_00212.

Bettinger, Eric P. 2004. How financial aid affects persistence. In College choices: The economics of where to go, when to go, and how to pay for it, edited by Caroline M. Hoxby, pp. 207–237. Chicago: University of Chicago Press. doi:10.7208/chicago/9780226355375.003.0006.

Bettinger, Eric P. 2010. To be or not to be: Major choice in budding scientists. In American universities in a global market, edited by Charles T. Clotfelter, pp. 69–98. Chicago: University of Chicago Press. doi:10.7208/chicago/9780226110455.003.0003.

Bettinger, Eric P. 2015. Need-based aid and college persistence: The effects of the Ohio College Opportunity Grant. Educational Evaluation and Policy Analysis 37(1S): 102S–119S. doi:10.3102/0162373715576072.

Bettinger, Eric P., Bridget T. Long, Philip Oreopoulos, and Lisa Sanbonmatsu. 2012. The role of application assistance and information in college decisions: Results from the H&R Block FAFSA experiment. Quarterly Journal of Economics 127(3): 1205–1242. doi:10.1093/qje/qjs017.

Bloom, Howard. 2012. Modern regression discontinuity design. Journal of Research on Educational Effectiveness 5(1): 43–82. doi:10.1080/19345747.2011.578707.

Carnevale, Anthony P., Ban Cheah, and Jeff Strohl. 2012. Hard times: College majors, unemployment and earnings. Washington, DC: Georgetown University Center on Education and the Workforce.

Castleman, Benjamin L., and Bridget T. Long. 2013. Looking beyond enrollment: The causal effect of need-based grants on college access, persistence, and graduation. NBER Working Paper No. 19306.

Choy, Susan P., Lutz Berkner, John Lee, and Amy Topper. 2011. Academic competitiveness and national SMART grant programs: 2006–07 through 2008–09. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development.

Choy, Susan P., John Lee, and Amy Topper. 2012. Academic competitiveness and national SMART grant programs: Lessons learned, 2006–07 through 2009–10. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development.

Crosby, Michael P., and Jean M. Pomeroy. 2004. What will it take for the United States to maintain global leadership in discovery and innovation? In The U.S. scientific and technical workforce: Improving data for decisionmaking, edited by Terrence K. Kelly, William P. Butz, Stephen Carroll, David M., and Gabrielle Bloom, pp. 21–27. Santa Monica, CA: RAND Corporation.

Deming, David, and Susan Dynarski. 2010. Into college, out of poverty? Policies to increase the postsecondary attainment of the poor. In Targeting investments in children: Fighting poverty when resources are limited, edited by Philip Levine and David Zimmerman, pp. 283–302. Chicago: University of Chicago Press. doi:10.7208/chicago/9780226475837.003.0011.

DesJardins, Stephen L., and Brian P. McCall. 2014. The impact of the Gates Millennium Scholars Program on college and post-college related choices of high ability, low-income minority students. Economics of Education Review 38: 124–138. doi:10.1016/j.econedurev.2013.11.004.

Freeman, Richard B., Tanwin Chang, and Hanley Chiang. 2009. Supporting "the best and brightest" in science and engineering: NSF graduate research fellowships. In Science and engineering careers in the United States: An analysis of markets and employment, edited by Richard B. Freeman and Daniel L. Goroff, pp. 19–57. Chicago: University of Chicago Press. doi:10.7208/chicago/9780226261904.003.0002.

Fryer, Roland G. 2011. Financial incentives and student achievement: Evidence from randomized trials. Quarterly Journal of Economics 126(4): 1755–1798. doi:10.1093/qje/qjr045.

Goldrick-Rab, Sara, Douglas N. Harris, James Benson, and Robert Kelchen. 2011. Conditional cash transfers and college persistence: Evidence from a randomized need-based grant program. Discussion Paper No. 1393-11, Institute for Research on Poverty, Madison, WI.

Imbens, Guido W., and Thomas Lemieux. 2008. Regression discontinuity designs: A guide to practice. Journal of Econometrics 142(2): 615–635. doi:10.1016/j.jeconom.2007.05.001.

Imbens, Guido, and Tristan Zajonc. 2011. Regression discontinuity design with multiple forcing variables. Unpublished paper, Harvard University.

Kane, Thomas J. 1995. Rising public college tuition and college entry: How well do public subsidies promote access to college? NBER Working Paper No. 5164.

Langdon, David, George McKittrick, David Beede, Beethika Khan, and Mark Doms. 2011. STEM: Good jobs now and for the future. ESA Issue Brief No. 03-11. Washington, DC: U.S. Department of Commerce Economics and Statistics Administration.

Lee, David S., and Thomas Lemieux. 2010. Regression discontinuity designs in economics. Journal of Economic Literature 48(2): 281–355. doi:10.1257/jel.48.2.281.

Ludwig, Jens, and Douglas L. Miller. 2007. Does Head Start improve children's life chances? Evidence from a regression discontinuity design. Quarterly Journal of Economics 122(1): 159–208. doi:10.1162/qjec.122.1.159.

McCrary, Justin. 2008. Manipulation of the running variable in the regression discontinuity design: A density test. Journal of Econometrics 142(2): 698–714. doi:10.1016/j.jeconom.2007.05.005.

Montmarquette, Claude, Kathy Cannings, and Sophie Mahseredjian. 2002. How do young people choose college majors? Economics of Education Review 21(6): 543–556. doi:10.1016/S0272-7757(01)00054-1.

National Survey of Student Engagement (NSSE). 2011. Fostering student engagement campuswide – annual results 2011. Bloomington: Indiana University Center for Postsecondary Research.

Papay, John P., John B. Willett, and Richard J. Murnane. 2011. Extending the regression-discontinuity approach to multiple assignment variables. Journal of Econometrics 161(2): 203–207. doi:10.1016/j.jeconom.2010.12.008.

Reardon, Sean F., and Joseph P. Robinson. 2012. Regression discontinuity designs with multiple rating-score variables. Journal of Research on Educational Effectiveness 5(1): 83–104. doi:10.1080/19345747.2011.609583.

Richburg-Hayes, Lashawn, Thomas Brock, Allen LeBlanc, Christina H. Paxson, Cecilia E. Rouse, and Lisa Barrow. 2009. Rewarding persistence: Effects of a performance-based scholarship program for low-income parents. New York: MDRC.

Rothstein, Jesse, and Cecilia E. Rouse. 2011. Constrained after college: Student loans and early-career occupational choices. Journal of Public Economics 95(1–2): 149–163. doi:10.1016/j.jpubeco.2010.09.015.

Ryoo, Jaewoo, and Sherwin Rosen. 2004. The engineering labor market. Journal of Political Economy 112(1): S110–S140. doi:10.1086/379946.

Saha, Subhra S., and Bruce A. Weinberg. 2010. Estimating the indirect economic benefits from science. Paper presented at the Workshop on the Science of Science Measurement, Washington, DC, December.

Sjoquist, David L., and John V. Winters. 2015. The effect of Georgia's HOPE scholarship on college major: A focus on STEM. IZA Journal of Labor Economics 4(15): 1–29.

Stange, Kevin. 2015. Differential pricing in undergraduate education: Effect on degree production by field. Journal of Policy Analysis and Management 34(1): 107–135. doi:10.1002/pam.21803.

Stinebrickner, Todd R., and Ralph Stinebrickner. 2011. Math or science? Using longitudinal expectations data to examine the process of choosing a college major. NBER Working Paper No. 16869.

Turner, Leslie J. 2014. The road to Pell is paved with good intentions: The economic incidence of federal student grant aid. Unpublished paper, University of Missouri.

Zafar, Basit. 2011. How do college students form expectations? Journal of Labor Economics 29(2): 301–348. doi:10.1086/658091.