Do students respond to sticker prices or actual prices when applying to college? These costs differ for students eligible for financial aid, and students who do not understand this may not apply to some colleges because of the perceived high cost. We test for this form of “sticker shock” using College Board data on SAT scores sent to leading public institutions, as a proxy for applications, by students entering college in 2006–13. Some of these institutions guarantee that financial aid will meet full financial need. Sticker price increases at those schools do not affect the actual cost after factoring in financial aid and should not affect the decisions of students eligible for aid. We exploit the large and variable increases in sticker prices during the Great Recession of 2008, controlling for local labor market conditions to abstract from the recession's direct impact on individual educational decisions. We find evidence of sticker shock—students unaffected by virtue of institutional aid policies still apply less often. Using data from the National Student Clearinghouse, we also find that price increases at public flagship institutions reduce enrollment of high-achieving students, regardless of financial aid status, who often choose private colleges instead.

Students from low- and moderate-income families are substantially underrepresented at leading public universities (Hoxby and Avery 2013). For instance, only 27 percent of students at public flagship institutions come from the bottom 60 percent of the parental income distribution.1 A student at a public flagship is nine times more likely to come from the top 20 percent of the parental income distribution than the bottom 20 percent. Notably, the underrepresentation of these students at leading public institutions has been increasing over time. Three-fourths of public flagships enroll a lower percentage of these students today than they did in the late 1990s.2 These patterns are troubling given that these institutions have the potential to be engines of upward mobility (Chetty et al. 2020).

A lack of understanding of college pricing and the financial aid system may contribute to this pattern. Prior work, which has focused almost exclusively on low-income, high-achieving students, documents that it is at the college application stage—rather than at the point of admissions or matriculation—where the behavior of low-income students differs most from that of their higher-income peers (Hoxby and Avery 2013; Hoxby and Turner 2015). Perceived costs are one of the most important factors influencing where students apply. Sixty-seven percent of families report factoring in the price of college when finalizing college application lists, and a majority of high school seniors report ruling out colleges based on sticker prices alone, without considering their likely financial aid awards (The College Board and Arts & Sciences Group 2012; Sallie Mae 2016). In another survey, 44 percent of students destined for public colleges and universities reported rejecting colleges at the application stage based on published sticker prices alone (Longmire and Company 2013).

“Sticker shock” occurs when students are discouraged from applying to schools based on the sticker price, ignoring the potential availability of financial aid. Although sticker prices for in-state students at leading public institutions are considerably lower than those at highly selective private colleges, they are still generally higher than alternative public options, like community colleges or regional, less-selective public four-year colleges. Inadequate knowledge of financial aid and the true cost of attending a top public institution may contribute to the observed enrollment patterns of low- and moderate-income students.

The 2008–09 financial crisis provides us with an opportunity to examine the existence and extent of sticker shock. Many states sharply increased tuition and fees at public postsecondary institutions at that time because of dramatic declines in state appropriations (Long 2015). Because these tuition increases were heavily covered by the news media, they were likely well known to students of college-going age and their parents, and some students may not have applied to these public institutions as a result.3

At some public institutions, though, financial aid policies were in place that would have protected aid-eligible students from experiencing increases in costs of attendance. A handful of leading public universities (including the University of Michigan, the University of North Carolina at Chapel Hill, the University of Virginia, and several institutions in the University of California system) have a policy of “meeting full demonstrated financial need,” at least for state residents. That is, they provide enough financial aid through grants, work study, and loans to fill the gap between the sticker price and the Expected Family Contribution (EFC), which is the amount a student and her family can “afford” to pay as determined by the financial aid system. For aid-eligible students at an institution that meets full financial need, an increase in the sticker cost of attendance (COA) would not increase their EFCs, which are determined only by their finances.4 Moreover, because loan and work-study expectations in financial aid packages did not increase following the price increases, net prices were unchanged for aid-eligible students at public institutions that meet full need.

This institutional framework sets up a quasi-experiment that we use to examine the impact of the sticker price on college-going behavior. In this paper, we focus on students' decisions to apply to leading public universities, taking advantage of changes in tuition sticker prices and the fact that low- and moderate-income students in states with public institutions that meet full need were not subject to those price increases. We highlight the impact on applications to public flagships, but we extend our analysis to a broader group of public institutions that includes others designated R1 (very high research activity), and to “elite” subsets of these two categories that typically admit students with higher test scores. We instrument sticker prices with the “state budget shocks” approach of Deming and Walters (2017) to further focus our analysis on the impact of recession-related budget cuts. We also control for local labor market conditions, which could directly affect students' resources and college-going.

When tuition prices increased after the financial crisis hit, colleges became more expensive for financial aid recipients at public institutions that do not meet full need, but not at those that do meet full need. If we observe a reduction in applications among students likely to receive financial aid in public institutions that meet full need in the years immediately following the financial crisis, this would support the notion that sticker shock exists.

Our results indicate that sticker shock indeed affects the application decisions of potential college students who would be eligible for financial aid. Overall, students are sensitive to changes in sticker prices at the application stage of the process. A 10 percent increase in sticker prices at leading public institutions generates a 1 to 2 percentage point reduction in applications, as proxied by sending SAT scores to that institution.5 Importantly, we find little difference in that impact between students likely to be eligible for financial aid at schools that meet full need and those that do not. Because aid-eligible students should not respond to changes in sticker prices at meet-full-need schools, we interpret these results as evidence of sticker shock. These findings hold regardless of how we define “leading public institutions.”

We also investigate a number of possible threats to internal validity. First, our results are robust to focusing just on those states where the SAT is the dominant exam relative to the ACT. This is important because we proxy applications by students sending SAT scores to an institution. Second, we obtain similar results when we omit California from our sample, which is important given its size and the dramatic protests that occurred there following large price increases at its public institutions. Third, we attempt to rule out the possibility that students are reacting to potentially higher loan amounts that could have accompanied higher sticker prices. To that end, we limit states with meet-full-need policies to Virginia and North Carolina, where the flagship institutions do not include loans in financial aid packages for low-income students during the time period of our study. We obtain similar but less-precise results. Finally, we find that minor modifications to the way we measure aid eligibility do not alter our results.

Another result of our analysis is that these price increases at public flagship institutions do not affect overall enrollment, but they do affect who enrolls in them. We observe a decline in the average SAT score of enrolled students and a commensurate increase in enrollments among high achievers at private not-for-profit colleges. Because applications greatly exceed enrollment slots, all seats can still be filled even if applications fall. It is possible that these institutions maintain their enrollment by lowering their admissions standards, enrolling fewer high-achieving students. Alternatively, yields of high-achieving students could be declining relative to lower-achieving students in response to sticker price increases.

Past Research

Some prior work has investigated whether making net prices more salient affects college-going. The net price represents the sum of all resources (cash, loans, and work-study funds) that a family must provide to cover the cost of attendance. It is equivalently calculated as the COA minus the direct grant aid offered to the student from any source (federal, state, institutional, or other).

The evidence on whether students respond to net prices is mixed, depending on whether the information provided is aggregate in nature or individualized. Providing students with the average price of attendance after factoring in financial aid does not have a large impact on college-going intentions (Bleemer and Zafar 2018). Similarly, making average net price information more easily available through the College Scorecard has had no discernible effect on college application as proxied by SAT score sending or online college search behavior (Huntington-Klein 2016; Hurwitz and Smith 2018).

Levine (2014) argues that what students want are estimates that are specific to them, not averages. This is consistent with Oreopoulos and Dunn (2013), who find that providing students with estimates of their own cost of college through the use of a net price calculator can help change those perceptions. It is also consistent with the findings of Bettinger et al. (2012), which show that providing individualized cost information along with assistance completing financial aid forms had a large impact on college attendance. Similarly, Hoxby and Turner (2013) find that providing semi-customized information on college net costs along with other information regarding academic “fit” causes high-achieving, low-income students to apply and be admitted to more colleges.6 Dynarski et al. (2021) find that a marketing campaign touting free tuition for those with incomes under $60,000, which was not a change from previous pricing policy, had a large impact on applications and enrollments. Although unrelated to costs, Mulhern (2021) finds that providing individualized information regarding institutions where students are likely to be accepted changed their application lists.

Other studies that examine the impact of tuition changes at public institutions on enrollment include Hemelt and Marcotte (2011, 2016) and Deming and Walters (2017). Hemelt and Marcotte (2016) use data on individual students in twelfth grade in 1992 and 2004; the other analyses rely on institution-level Integrated Postsecondary Education Data System (IPEDS) data. Deming and Walters (2017) directly tackle the potential endogeneity of tuition levels and estimate their models using state budget shares as an instrument, which we describe in more detail below. They use data from 1990 through 2013 in their analysis. Neither of these papers, though, focuses on sticker shock.

Because we have student-level data with a rich set of covariates, we can distinguish students by their likely eligibility for financial aid and institutions by meet-full-need status, which allows us to study sticker price shock, something these prior papers were unable to do. We are also able to estimate application and enrollment elasticities for high-achieving versus low-achieving students, which generates important insights, as described in section 5.

Finally, we focus on a segment of the market—leading public institutions—where application rather than aggregate enrollment is the outcome of interest. At these institutions, applications routinely exceed the number of slots available for freshman students. Mechanically, we would expect no effect of a tuition increase on aggregate enrollment because schools can adjust admissions standards to maintain a desired enrollment level. However, tuition increases can cause important changes in the composition of enrolled students, fueling continued underrepresentation of students from low- and middle-income backgrounds.

Financial Aid Policy and the Impact of Price Increases

The interaction between a meet-full-need financial aid policy and price increases is critical to our empirical strategy; we describe it here. A central component of a financial aid system is a determination of what the family can afford to pay. Setting aside the obvious difficulty of determining that amount, this is the purpose of completing the Free Application for Federal Student Aid (FAFSA). The EFC is the calculated ability to pay, constructed from a family's financial attributes as reported on the FAFSA.

At meet-full-need institutions, students with an EFC below COA receive enough financial aid (through grants, loans, or work-study) so that the net price they pay combined with their financial aid equals the COA.7 This is not true at institutions that do not meet full need. Students who are eligible for financial aid at those institutions are required to pay more than their EFC.8

When COA rises, families with sufficient financial resources to afford the COA at the original and new levels pay full price either way. They face the entire price increase. The impact of the price increase on financial aid recipients depends on the school's meet-full-need status. At a school that meets full need, financial aid adjusts to fill in the new, larger gap between COA and the student's EFC. Assuming loan and work-study expectations do not change, an increase in the COA has no bearing on their net price. If the school does not meet full need, then the student will be expected to pay more as tuition increases.

In theory, colleges could continue to meet full need and still increase their revenue by raising the loan and work-study expectations in their financial aid packages. In practice, most colleges and universities set their loan expectations at the maximum allowed by federal Stafford Loans (currently $3,500 in subsidized loans, or $5,500 in total per year including an additional $2,000 in unsubsidized loans, for first-year students). Loan burdens increased at many institutions in 2007–08 in response to a federal increase in the maximum Stafford Loan; year fixed effects in our regression specifications capture this change. Beyond that, we are unable to identify any school-specific changes in loan expectations.9 We have no definitive information regarding changes in work-study policies during this period, but our informal conversations with financial aid directors indicate that no systematic changes took place at that time.

One other complication in our discussion of the effect of COA increases on net prices concerns the group of students who are just on the cusp of financial aid eligibility before the COA increase at an institution that meets full need. That COA increase will qualify those students for aid, protecting them from some portion of the higher cost. In practice, the part of the income distribution where students are on that financial aid margin is thin enough that the number of students affected in this way is small. We do not address that issue directly in our main analysis, as we cannot observe aid eligibility perfectly, but we do conduct robustness checks with different definitions of aid eligibility.

Overall, this institutional detail sets up the natural experiment we exploit in our empirical analysis. At institutions that do not meet full need, an increase in the sticker price affects all students. At institutions that meet full need, an increase in the sticker price only affects those students who can afford to pay that higher price. Therefore, the spike in sticker prices following the financial crisis should not have affected low- and moderate-income students at meet-full-need schools. It would have affected higher income students at those schools and all students at schools that do not meet full need.

Financial Aid Policy in Practice

To demonstrate the variability in financial aid systems across leading public institutions, we restrict our attention to public flagships and use each school's net price calculator to estimate financial aid awards for a dependent student with household income of $50,000 and no assets.10 According to the FAFSA4caster (now called the Federal Student Aid Estimator), an online tool that estimates eligibility for federal financial aid, this family is estimated to have an EFC of $2,600 (with rounding) in 2019.11 Loans and work-study would be expected in addition to the EFC. First-year undergraduate students can borrow up to $5,500 through the federal student loan system (including subsidized and unsubsidized loans), and it is not uncommon for schools to offer students up to $3,500 in work-study funds. Accordingly, an institution that charged this student $11,600 after grant aid would be meeting her full need. If a school charged this student more than $11,600 after grant aid, the school would not have met full need. The gap is labeled “unmet need.”
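The arithmetic above can be sketched in a few lines. The dollar figures are the illustrative 2019 values from the text; the function name and structure are our own simplification:

```python
# Illustrative unmet-need calculation for the hypothetical student in the text.
# EFC, loan, and work-study figures are the 2019 values cited above.

EFC = 2_600          # Expected Family Contribution (FAFSA4caster estimate)
MAX_LOAN = 5_500     # federal loan limit for first-year students
WORK_STUDY = 3_500   # typical work-study offer

def unmet_need(coa: int, grant_aid: int) -> int:
    """Amount the student must cover beyond EFC plus self-help (loans + work-study)."""
    affordable = EFC + MAX_LOAN + WORK_STUDY   # $11,600 in this example
    net_price = coa - grant_aid                # price after grant aid
    return max(net_price - affordable, 0)

# A school charging $11,600 after grants meets this student's full need:
print(unmet_need(coa=30_000, grant_aid=18_400))  # 0
# A school charging $15,000 after grants leaves $3,400 unmet:
print(unmet_need(coa=30_000, grant_aid=15_000))  # 3400
```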

Because of variation in whether an institution meets full financial need and other policies, students at schools with very similar full costs of attendance can face vastly different financial burdens after factoring in financial aid. Figure 1 documents the amount of unmet financial need across states for a hypothetical dependent, in-state student living on campus, with household income of $50,000, no assets, two married parents, and one sibling who is not in college. All calculations are based on information provided by net price calculators posted on university Web sites in fall 2019. In some states (California, Michigan, North Carolina, Virginia, Delaware, Illinois, Washington, Wisconsin, and Wyoming), students with that financial profile would have their entire financial need met. All of these states, other than Wyoming, have policies dictating that they meet full need, at least for those with modest incomes.12 This student would have unmet financial need in the remainder of the states, facing a price higher than she could “afford.” The amount of this unmet need ranges from $2,000 in Florida to $16,000 in Alabama.13
Figure 1.
Estimated Unmet Financial Need at Public Flagships for Families with Household Income of $50,000 and No Assets, Fall 2019

Notes: Authors’ calculations using output from net price calculators posted on public flagship Web sites in Fall 2019. Self-help (loans and work-study) is assumed to be $8,500, except in California, which meets full need by packaging $9,500 in self-help. Calculations assume no assets, a family of four with two married parents, each of whom makes $25,000, and one child in college. For schools offering merit scholarships, unmet need is calculated for students at the 25th and 75th percentiles of the GPA and SAT distribution of enrolled students as reported in the school's Common Data Set.


For the sample period we use in our empirical analysis, we categorize the flagship institutions in California (Berkeley), Michigan (UM), North Carolina (UNC), and Virginia (UVA) as meeting full need, at least for residents of each state. Illinois and Wisconsin adopted their policies after our sample period ended (and only do so now for students with lower incomes), so for our analysis they are coded as not meeting full need. Delaware and Washington implemented their policies partway through our 2006–13 sample period, requiring us to drop them from our analysis altogether. The other public R1 institutions beyond flagships that meet full need throughout our sample period are all in the state of California (Davis, Irvine, Los Angeles, Riverside, San Diego, Santa Barbara, and Santa Cruz; Washington State University adopted such a policy at the same time as the University of Washington).

Also, we are forced to drop public institutions in Pennsylvania and Wyoming from our analysis. In Pennsylvania, SAT score sends cannot be reliably matched to specific public institutions. The University of Wyoming does not have an official policy to meet full need. However, calculations from its net price calculator suggest that low-income students have no unmet need. How an increase in COA translates to a student's net price is unclear in such a system, so we have chosen to drop it from our analysis. This leaves us with forty-six states used in our primary analysis of flagship institutions (the District of Columbia has no flagship institution), four of which meet full need and forty-two that do not. There are eighty-eight institutions among our broader sample that includes other public R1 institutions, eleven of which meet full need (note that all of the additional institutions in this category are in California).14

Changes in cost of attendance, which are largely driven by changes in tuition and fees (and, typically, smaller changes in room and board), have very different impacts on actual net prices depending on whether the school meets full financial need or not. If a meet-full-need institution raises its tuition and fees by $1,000, the price paid by the student with household income of $50,000 will not change. Her net price is determined solely by the self-help expectation and EFC, and her grant aid would increase by exactly $1,000 to offset the price increase. The same student, however, at a school that does not meet full need would see her net price increase by the full amount of the tuition increase.

This is the intuition that underlies our empirical strategy. Aid-eligible students in states where the leading public institutions meet full financial need should not respond to sticker price increases because the actual price they pay does not change. Aid-ineligible students, and aid-eligible students in states where the leading public institutions do not meet full need, may respond by decreasing applications and enrollment.
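This pass-through logic can be sketched numerically. The sketch below is ours, under the text's assumption that the EFC and self-help (loan and work-study) expectations are fixed; the function and dollar figures are illustrative, not drawn from the paper's data:

```python
# Net price response to a sticker price (COA) increase, assuming EFC and
# self-help (loan/work-study) expectations are fixed, as in the text.

def net_price(coa, efc, self_help, meets_full_need, grant_aid=0):
    if meets_full_need:
        # Grants expand to fill the gap: the student pays EFC + self-help
        # regardless of COA (as long as COA exceeds that amount).
        return min(coa, efc + self_help)
    # Otherwise grants are fixed, so any COA increase passes through.
    return coa - grant_aid

base, hike = 28_000, 29_000   # a $1,000 sticker price increase
efc, self_help = 2_600, 8_500

# Meet-full-need school: net price is unchanged.
print(net_price(hike, efc, self_help, True) - net_price(base, efc, self_help, True))  # 0
# Non-meet-full-need school (fixed grants): the full $1,000 passes through.
print(net_price(hike, efc, self_help, False, grant_aid=10_000)
      - net_price(base, efc, self_help, False, grant_aid=10_000))                     # 1000
```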

Trends in College Pricing

Despite the importance of the full cost of attendance in determining the actual cost to students after factoring in financial aid, as illustrated above, in the remainder of this analysis we use tuition (and fees) to capture the “sticker price.” The difference between the two measures reflects room and board (most incoming students at leading public institutions live on campus), along with estimated costs of books, travel, and other miscellaneous expenses. We focus on tuition because changes in tuition prices are what the media typically report (cf. Duke 2009; Asimov 2009; Gordon and Khan 2009), and thus what students are likely to know and respond to. We explore the sensitivity of our results to this decision below.

Another advantage of using tuition rather than cost of attendance is that the former is the price that is often set by the state government or the state board of higher education.15 Expenses beyond tuition and fees generally are not set at the state level and are largely designed to cover costs. Our instrumental variable (IV) strategy is based on the relationship between state economic conditions, budget issues, and public universities’ need for increased revenues. Tuition is the component of college costs that captures this.

The Great Recession that started in 2008 led to a large jump in sticker tuition at public universities. This analysis again focuses specifically on public flagship institutions. Figure 2 uses data from IPEDS to show yearly increases in real tuition and fees from 2006 to 2013 (measured in August 2013 dollars using the Consumer Price Index for All Urban Consumers [CPI-U] for adjustment) for schools distinguished by the severity of the recession in the state (whether the unemployment rate increased by more or less than 5 percentage points).16 On average, public flagships increased their tuition and fees by 9 percent in real terms at the height of the recession in 2009. In California, for example, the University of California Board of Regents instituted a 32 percent (nominal) tuition hike because of the state's large budget deficit, and students protested at several campuses (Duke 2009). Tuition increases were much larger in states that were more heavily affected by the recession. In states where the recession was more severe, tuition increased by an average of 12.8 percent compared with the average 7.7 percent increase in states where the recession was less severe.17 Increases in each of the next two years followed the same pattern, although at a lower level.
Figure 2.
Annual Increase in Tuition and Fees at State Flagship Institutions, by Severity of the Recession

Notes: Data come from IPEDS and reflect increases in Consumer Price Index–adjusted tuition and fees at 50 public flagships (UC Berkeley serves as the flagship for California and UT-Austin for Texas).


Low- and moderate-income students living in states where the public flagship meets full financial need were insulated from the increases in tuition and fees. Any increase in tuition would be met with additional grant aid for financial aid recipients, wiping out any revenue gain from those students. Any increase in revenue at those institutions would have been realized from students not receiving financial aid. As a result, larger tuition increases are required at meet-full-need flagships to generate the same revenue as that raised from flagships that do not meet full need. Indeed, that is what we see in the data. In 2010–11, institutions that meet full need increased their tuition at almost twice the rate of schools that do not (11.5 percent versus 5.9 percent).18

This finding sets up an interesting potential paradox. Schools adopt meet-full-need financial aid policies to benefit aid-eligible students. Yet when financial circumstances require them to raise tuition, they must do so by a larger amount because only a subset of students pays the increase. But if students respond to the sticker price rather than their true cost, aid-eligible students (and others) may respond by becoming less likely to apply. Whether there is empirical support for this response is the focus of the remainder of our analysis.

Data Description

To investigate the application response to prices, we use data on SAT score sends from the College Board.19 These data cover all students who graduated from high school in 2006 through 2013 who took the SAT and sent at least one score report to a college or university. Following prior work by Pallais (2015) and Hurwitz and Smith (2018), we use the sending of a test score to proxy for applying to a particular college. Smith (2018) shows that SAT score sends are a particularly good proxy for applications to colleges with lower tuition and higher graduation rates that are relatively near a student's home. Public flagships and other R1-level public institutions fit all of these criteria. Nevertheless, we later investigate whether our focus on SAT score sends is problematic. Note that we are not able to reestimate our results using application counts from IPEDS because IPEDS reports only aggregate application numbers that include out-of-state students, whereas our analysis is concerned only with state residents. Moreover, testing the sticker shock hypothesis requires further disaggregating applications by aid eligibility, which is not possible with IPEDS data.

We also have information on each student's SAT scores, demographics (race, gender, and parental education) and zip code. Our primary strategy is to use each student's zip code to determine whether the student is “likely aid-eligible.” We assume that the student is aid-eligible if the median family income in her zip code is $75,000 or less based on our analysis of data from the 2013 five-year sample of the American Community Survey. Eighty-four percent of families in these zip codes have incomes less than $100,000 and 95 percent have incomes less than $150,000. At a public institution with a cost of attendance around $30,000, the income cutoff for aid eligibility for families with typical asset values is in the vicinity of $125,000. Perhaps 90 percent of families we define as “likely aid-eligible” are actually eligible for financial aid. We report below sensitivity analyses designed to test alternative proxies for financial aid eligibility, including self-reported income.20

Although we have data on the universe of SAT takers who sent at least one score, we do not have data corresponding to ACT score send reports. The SAT is the dominant test on the coasts, but not in the Midwest, as shown in online appendix figure A.1. It is unlikely that students' preferences for sending an SAT versus an ACT score would be affected by the tuition of public institutions in their state, so we do not believe that our use of SAT score sending alone is an issue for our empirical strategy. Nevertheless, in our regressions, we control for the percent of high school graduates who take the SAT and send at least one score.21 We also include year fixed effects to control for the secular decline in SAT score sending relative to ACT score sending over our study period.22

Among the 26.7 million graduating high school seniors in the 2006 to 2013 cohorts, 11.6 million (43 percent) had taken the SAT and 8.5 million (74 percent of SAT takers) sent scores to at least one college. Around 2.6 and 7.3 million SAT takers had sent scores to state flagships and the broader group of flagships and R1 institutions, respectively, during this period. Among elite institutions, those where the average combined SAT score of enrolled students in the 2013–14 school year is 1200 or over, the analogous statistics on score sending are 1.9 million and 3.6 million, respectively.

We examine enrollment behavior among applicants using merged College Board and National Student Clearinghouse (NSC) data. The NSC data cover about 98 percent of all undergraduate enrollment in the United States and record up to four postsecondary institutions in which a student enrolls. For our analysis, we consider a student enrolled in a flagship if either of her first two postsecondary institutions, entered within 180 days of high school graduation, is a flagship. Using this definition, 9 percent of total SAT senders (or 23 percent of flagship SAT senders) in our analysis enrolled in a flagship.
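The enrollment rule can be made concrete with a short sketch. The data shapes and institution names below are illustrative, not the NSC's actual schema; we assume the enrollment list is ordered by start date.

```python
from datetime import date

def enrolled_in_flagship(enrollments, hs_grad_date, flagship_ids):
    """Sketch of the rule in the text: a student counts as flagship-enrolled
    if either of her first two postsecondary institutions, entered within
    180 days of high school graduation, is a flagship.
    enrollments: list of (institution_id, start_date), ordered by start_date."""
    within_180 = [(inst, start) for inst, start in enrollments
                  if (start - hs_grad_date).days <= 180]
    return any(inst in flagship_ids for inst, _ in within_180[:2])
```

For example, a student who starts at her state flagship in the September after a June graduation is counted; one who transfers to the flagship a year later is not.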

Information on sticker prices comes from IPEDS. As described earlier, we use in-state tuition and fees as our primary sticker price measure and CPI-adjust all prices to 2013 dollars using the CPI-U index. Our state budget shock instrument is constructed using data on total state appropriations per student per year and the share of each flagship institution's total revenue that comes from state appropriations in a base year. The institution-specific data come from IPEDS, the state appropriations data come from the State Higher Education Executive Officers Association (2018), and the number of high school students in the state comes from the Western Interstate Commission for Higher Education (2016).
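The deflation step is a simple ratio of index values. A minimal sketch follows; the function name is ours and the index numbers in the example are illustrative, not official BLS figures.

```python
def to_2013_dollars(nominal_price: float, cpi_year: float, cpi_2013: float) -> float:
    """Express a nominal sticker price in 2013 dollars using CPI-U index
    values supplied by the caller."""
    return nominal_price * cpi_2013 / cpi_year
```

For instance, with an illustrative index of 200.0 in the price year and 230.0 in 2013, a $10,000 nominal price becomes $11,500 in 2013 dollars.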

Preliminary Analysis

Before providing a more formal assessment of the effect of sticker prices on score sending and enrollments, we begin with a descriptive analysis of the raw data, again focusing specifically on public flagship institutions. We documented above the differential price changes by severity of the recession and meet-full-need status, respectively. In figure 3, we examine whether trends in SAT score sending to students' home state's flagship are correlated with those differential price increases.
Figure 3.
Trends in SAT Score-Sending Rates to Flagship Institutions (a) by Severity of the Recession and (b) by Meet Full Need Status

Notes: Percent of students sending scores to flagship institution is defined as the percent of students appearing in the College Board SAT score send sample who send a score report to their home state's flagship.


The upper plot of figure 3 shows that between 2006 and 2013, the percent of students sending an SAT score to the public flagship declined in all states regardless of the severity of the recession. The decline is more than twice as large, though, in states where the state unemployment rate increased by more than 5 percentage points relative to other states (a 12 percentage-point drop rather than a 5 percentage-point drop). These data are consistent with a significant score-sending response to the larger increase in tuition and fees in states where the recession was more severe. Of course, these descriptive patterns are also consistent with the direct impact of the recession on family income and the ability to afford a college education regardless of public flagship pricing decisions. We address this complication subsequently in our full econometric analysis.

The lower plot shows that declines in scores sent to flagships were also larger for states that meet full financial need. In combination, these data provide preliminary evidence that students respond to changes in tuition by reducing the likelihood of “applying” (i.e., sending SAT scores) to public flagship institutions. We pursue this further in a more fully specified econometric model below.

We have repeated this analysis for enrollment rates, describing patterns in enrollments by meet-full-need status and by the severity of the recession, as we did in figure 3. No obvious patterns exist in these data (see online appendix figure A.2). We provide a more complete discussion of enrollment outcomes when we present the results from our full econometric analysis.

Empirical Specification

We start by estimating the score send response to tuition among all students with the following model:

S_{icst} = β ln T_{s,t−1} + γ U_{c,t−1} + X_i δ + α_t + α_s + ε_{icst}  (1)

where S_{icst} is an indicator for whether an individual student i who lives in county c submitted test scores to a leading public in-state institution (school) s in year t. The key explanatory variable is ln T_{s,t−1}, which represents the log of real tuition and fees at institution s in academic year t − 1.23 Other covariates include U_{c,t−1}, which is the unemployment rate in student i's county in calendar year t − 1, and X_i, which is a matrix of student covariates (verbal and math SAT scores, parental education indicators, race indicators, female indicator, percent of the state's high school graduates taking the SAT and sending a score report, median family income in her zip code, and an indicator for residing in a low-income zip code), α_t are cohort fixed effects, and α_s are institution fixed effects. We cluster standard errors at the institution level. The unit of observation in this analysis is the student/institution pair.
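To make the specification concrete, the following is a minimal simulation of a linear probability model of this form, with cohort and institution fixed effects entered as dummies. All data here are simulated with a true tuition coefficient of −0.18; nothing in this sketch uses the authors' actual data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
log_tuition = rng.normal(9.2, 0.2, n)   # ln(tuition and fees), lagged
unemp = rng.normal(6.0, 2.0, n)         # county unemployment rate
cohort = rng.integers(0, 8, n)          # 8 cohorts, 2006-2013
school = rng.integers(0, 10, n)         # 10 hypothetical institutions

# Simulated score-send indicator with a true tuition coefficient of -0.18
p_send = 0.34 - 0.18 * (log_tuition - 9.2) - 0.003 * unemp
send = (rng.random(n) < p_send).astype(float)

# Design matrix: intercept, ln(tuition), unemployment, fixed-effect dummies
X = np.column_stack([
    np.ones(n),
    log_tuition,
    unemp,
    np.eye(8)[cohort][:, 1:],    # cohort fixed effects (one omitted)
    np.eye(10)[school][:, 1:],   # institution fixed effects (one omitted)
])
beta = np.linalg.lstsq(X, send, rcond=None)[0]
print(f"estimated tuition coefficient: {beta[1]:.3f}")
```

With this design, the least-squares estimate of the tuition coefficient recovers a value close to the simulated −0.18.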
In addition to estimating equation (1) by ordinary least squares (OLS), we also estimate an IV regression to address the potential endogeneity of tuition and fees. We follow the lead of Deming and Walters (2017) and use state budget shocks as our instrument. Technically, state budget shocks are defined as

Shock_{s,t} = Δ(StateApprop_{j,t} / Students_{j,t}) × (Approp_{s,2005} / TotalRevenue_{s,2005})

where j denotes the state of institution s.
The first term captures annual changes in state spending per student at each institution.24 The second term reflects how important state appropriations were to a school's total revenue, with both values measured in 2005.25 That baseline period occurred prior to the sample used in our study and is chosen as an exogenous measure of state support to each flagship institution. If a state experienced a more severe recession and faced greater spending cuts to higher education, that would matter more for schools that receive a larger share of their spending from the state.
In combination, it measures the budget shock a school experiences if state appropriations are cut, which they were during the recession. We measure the state budget shock in year t − 2, the academic year prior to when the academic year t − 1 tuition rate is set as the instrument (see online appendix A for a full discussion of timing issues and lag lengths in our analysis). Importantly, we also control for the county unemployment rate to capture direct local effects of the recession that could impact family finances and college-going. The first stage in this IV specification takes the form
ln T_{s,t−1} = π Shock_{s,t−2} + γ U_{c,t−1} + X_i δ + α_t + α_s + ν_{icst}  (2)
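The construction of the instrument can be sketched as follows. The function signature and variable names are our illustration of the description above, not the authors' code, and the exact scaling in their implementation may differ.

```python
def budget_shock(state_approp_t, state_approp_prev,
                 hs_students_t, hs_students_prev, approp_share_2005):
    """Deming-Walters-style shock: the annual change in state appropriations
    per high school student, scaled by the share of the institution's 2005
    revenue that came from state appropriations."""
    change_per_student = (state_approp_t / hs_students_t
                          - state_approp_prev / hs_students_prev)
    return change_per_student * approp_share_2005
```

Under this sketch, a school that drew 30 percent of its 2005 revenue from appropriations experiences a shock of −300 dollars per student when per-student appropriations fall by 1,000 dollars.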

Although we use a Deming and Walters–style instrument, we note an important distinction between our work and theirs. We focus only on the 2006 through 2013 period.26 The impact of the financial crisis on school budgets, which began in the 2009–10 academic year, was by far the largest factor influencing state budgets during this period. State variation in the severity of the recession provides an important source of exogenous variation on its own, reducing the potential influence of endogenous pricing decisions. For this reason, our results may be relatively insensitive to the choice between IV and OLS.

Our focus on leading public institutions also alleviates one other concern that the Deming and Walters research raises regarding our analysis. These authors find that state budget shocks also affect spending at higher education institutions, not just tuition. This would pose a problem in our analysis because we may be conflating sticker shock, which should only affect some groups of students, with “spending shock,” which would affect all students. Bound et al. (2019) also examine this issue and find support for a relationship between business cycle conditions and spending at higher education institutions, in general. They also find, though, that research-intensive universities, like those we consider, do not adjust their spending in response to state budget shocks. Our sensitivity analysis supports this conclusion. The second column of online appendix table A.2 shows that our budget shock instrument is not correlated with total spending per student in year t − 1 for the sets of leading public institutions we examine.

The aggregate analysis represented by equation 1 does not inform the question of whether sticker shock occurs. Our test of sticker shock is based on the differences in estimated elasticities between “aid-eligible” and “aid-ineligible” students in states that do and do not meet full financial need. To estimate these elasticities, we estimate a triple-difference specification of the form
S_{icst} = β1 ln T_{s,t−1} + β2 (ln T_{s,t−1} × E_i) + β3 (ln T_{s,t−1} × MFN_s) + β4 (ln T_{s,t−1} × E_i × MFN_s) + γ U_{c,t−1} + X_i δ + α_t + α_s + ε_{icst}  (3)
where Ei is an indicator that equals 1 if student i resides in a lower income zip code (a proxy for financial aid eligibility), and MFNs is an indicator for whether the school meets full need (MFN). Note that the Ei main effect is included in Xi. The triple difference reflects changes in the likelihood of sending scores between high sticker shock and low sticker shock states (first difference), students in meet-full-need and not-meet-full-need states (second difference), and aid-eligibility status (third difference). Also note that county-level data is only relevant in holding constant labor market conditions at the student's local level. Income data to distinguish likely aid-eligibility is based on more granular zip-code level data.

Adding the appropriate coefficients provides the overall effect of a tuition increase for four different groups of students: “aid-ineligible” students in not-MFN states (β1), “aid-eligible” students in not-MFN states (β1 + β2), “aid-ineligible” students in MFN states (β1 + β3), and “aid-eligible” students in MFN states (β1 + β2 + β3 + β4). These estimates can be converted into elasticities by dividing by the mean rate of score sending within each group.
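The arithmetic of combining coefficients into group-level impacts and elasticities can be shown concretely, here using the OLS coefficients from table 1 and the group means from table 2 (the function and dictionary names are ours):

```python
def group_effects(b1, b2, b3, b4, means):
    """Combine triple-difference coefficients into the absolute impact and
    elasticity for each of the four groups, as described in the text.
    means maps group name -> mean score-sending rate."""
    impacts = {
        "not_mfn_ineligible": b1,
        "not_mfn_eligible":   b1 + b2,
        "mfn_ineligible":     b1 + b3,
        "mfn_eligible":       b1 + b2 + b3 + b4,
    }
    elasticities = {g: imp / means[g] for g, imp in impacts.items()}
    return impacts, elasticities

means = {"not_mfn_ineligible": 0.381, "not_mfn_eligible": 0.351,
         "mfn_ineligible": 0.291, "mfn_eligible": 0.220}
impacts, elasticities = group_effects(-0.180, 0.033, -0.047, 0.015, means)
# aid-ineligible students in MFN states: impact -0.227, elasticity about -0.78
```

The computed values match table 2 up to rounding: for example, aid-ineligible students in MFN states have an absolute impact of −0.180 + (−0.047) = −0.227 and an elasticity of −0.227/0.291 ≈ −0.78.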

If “aid-eligible” students applying to institutions that meet full need are aware of net prices and respond accordingly, we should find that their elasticity is 0. In other words, whatever impact higher sticker prices have on score sending for the other groups of students (i.e., β1 + β2 + β3) should be completely counteracted by the fact that “aid-eligible” students in MFN states are the only group that face no net price increase when sticker tuition rises (i.e., β1 + β2 + β3 = −β4). If this is not true and if the elasticity for this group is negative and significant, it would provide evidence of sticker shock. In fact, if β4 = 0, then students in this group would fully incorporate the sticker price change the same way that other groups do despite the fact it does not apply to them.27 If β4 > 0 but β1 + β2 + β3 + β4 < 0, aid-eligible students in MFN states still exhibit sticker price shock, but they are relatively less responsive to a price increase than other groups.

Score-Sending Results for Public Flagship Institutions

Our initial results focus on an analysis of public flagship institutions. Table 1 reports the estimated coefficients obtained from the models described in equations 1 through 3. Table 2 uses the coefficients from table 1 to estimate the absolute impact of a tuition increase on the probability of sending SAT scores to students’ own states’ public flagship institution. It also converts these estimates to elasticities.

Table 1.

Estimated Impact of Increases in Tuition and Fees on Likelihood of Students Sending SAT Scores to Public Flagship Institutions

                                                     OLS         IV
Aggregate
  ln(in-state tuition and fees, lagged one year)   −0.183      −0.119
                                                   (0.051)     (0.061)
  First-stage F-statistic                             —          20.8
Triple Difference
  ln(tuition)                                      −0.180      −0.023
                                                   (0.041)     (0.094)
  ln(tuition * “aid eligible”)                      0.033      −0.116
                                                   (0.038)     (0.088)
  ln(tuition * meet-full-need)                     −0.047      −0.139
                                                   (0.036)     (0.073)
  ln(tuition * meet-full-need * “aid eligible”)     0.015       0.129
                                                   (0.042)     (0.094)
Sample size                                      7,589,048   7,589,048
Kleibergen and Paap Wald rank F-test                  —          7.9

Notes: Estimates are obtained from a model of the form of equations 1 through 3 and based on the authors’ analysis of College Board data on scores sent to students’ in-state flagship university among those who took the SAT and sent at least one score report to any college. Additional explanatory variables include: the students’ SAT math and verbal scores, maternal and paternal education indicators, race/ethnicity, the one-year lagged county unemployment rate, and the fraction of high school graduates in a student's state of residence who sent SAT scores to at least one school. A student is “aid-eligible” if she resides in a zip code with median family income of $75,000 or less per year. Standard errors are clustered at the state level. OLS = ordinary least squares; IV = instrumental variable.

Table 2.

Estimated SAT Score Send Elasticities Associated with an Increase in Tuition and Fees at Public Flagship Institutions

                           Doesn't Meet Full Need          Meets Full Need
                    All    Aid Ineligible  Aid Eligible  Aid Ineligible  Aid Eligible
Mean score-
  sending rate     0.337       0.381          0.351          0.291          0.220
OLS
  Absolute impact −0.183      −0.180         −0.146         −0.227         −0.179
                  (0.051)     (0.041)        (0.052)        (0.057)        (0.055)
  Elasticity      −0.543      −0.472         −0.416         −0.780         −0.814
IV
  Absolute impact −0.119      −0.023         −0.139         −0.162         −0.149
                  (0.061)     (0.094)        (0.068)        (0.060)        (0.070)
  Elasticity      −0.353      −0.060         −0.396         −0.557         −0.677

Notes: The absolute impact is the effect of a change in tuition on the absolute rate of sending scores to flagship institutions. The elasticity adjusts for the mean rate of score sending to flagship institutions within each group. A student is considered aid-eligible if she resides in a zip code with median family income $75,000 or less per year. OLS = ordinary least squares; IV = instrumental variable.

The first row of table 1 corresponds to the estimates using scores sent from all students. The OLS estimate indicates that a 10 percent increase in tuition and fees at the public flagship decreases the probability that an SAT taker from that state sends a score to the public flagship by 1.8 percentage points (a 5.4 percent decrease relative to the mean score-sending rate of 33.7 percent). This corresponds to a price elasticity of −0.54, reported in table 2, because a 10 percent increase in price decreases score sends by 5.4 percent. The IV estimate is similar: A 10 percent increase in tuition and fees leads to a 1.2 percentage point reduction in score sending. This corresponds to a price elasticity of score sends of −0.35.28 These results suggest that the demand curve for college applications, as measured by SAT score sends, is downward sloping, but inelastic.

The remainder of table 1 reports the estimated coefficients from the triple difference specification represented by equation 3. Table 2 reports the absolute impacts and associated elasticities by group obtained by adding the appropriate coefficients from table 1. For example, to obtain the absolute impact of an increase in price for aid-ineligible students in MFN states, we add the coefficients on ln(tuition) and ln(tuition * meet-full-need): −0.180 + (−0.047) = −0.227. The elasticity is −0.780 because the implied 2.27 percentage-point drop in score sending from a 10 percent tuition increase is a 7.80 percent decrease relative to the mean score-send rate of 0.291 for aid-ineligible students in MFN states.

All interacted coefficients are small and statistically insignificant, as shown in table 1, including the triple interaction. In OLS, the estimated impact is similar across all groups. The IV estimates are mainly consistent with OLS but less precise in this specification.29 The estimated impact for aid-ineligible students in states that do not meet full need is smaller in an absolute sense, but it is not estimated precisely enough to statistically distinguish it from the other groups. In both OLS and IV, we cannot reject the null hypothesis that absolute impacts are equal. Elasticities are somewhat bigger (although not significantly different) for students in MFN states, which is attributable to similar point estimates, but lower mean rates of score sending.

Importantly, the estimated absolute impact and elasticity for “aid-eligible” students in MFN states is negative and statistically significant (the p-value on the absolute impact is very small in OLS and equal to 0.033 in IV). The OLS point estimate indicates that a 10 percent increase in tuition and fees decreases score sends by 1.79 percentage points for this group. The 95 percent confidence interval for the elasticity is [−1.30, −0.32]. As discussed earlier, the actual net price paid by these students would not change with an increase in tuition, so the fact that “aid-eligible” students in MFN states are less likely to send scores to public flagships when tuition increases is consistent with sticker shock.

Sensitivity of Results to Included Institutions

Examining only flagship institutions may be too restrictive or too inclusive. On the one hand, many highly regarded public universities are not flagships (UCLA, Texas A&M, Virginia Tech) and some of them meet full need. We may gain additional power by including these other institutions. It is relevant to note, though, that all seven of the additional R1 institutions that meet full need are in California, potentially placing too much weight on the idiosyncrasies of one state. On the other hand, one might be concerned that our sample of institutions is too inclusive, combining schools whose level of selectivity differs significantly. All of the public flagships and some of the other R1 institutions that meet full need are highly selective. Perhaps the set of other schools that do not meet full need are not an adequate control group for these institutions.

To examine whether these selection issues affect our findings, we consider four categories of institutions. First, we augment our sample of flagship institutions with other R1-level public institutions. Second, we create an indicator of “elite” public institutions, distinguishing those with an average combined SAT score of 1200 or over.30 The interaction of those two groups creates our four subsamples—all flagships, all flagships plus other R1 public institutions, elite flagships, and elite flagships plus other elite R1 public institutions.

The results of this analysis are reported in table 3. They indicate that the specific sample chosen does not meaningfully alter the results. The overall responsiveness of score sending to tuition increases is similar across the four categories of schools. In each case, a 10 percent increase in tuition reduces scores sent by 1 to 2 percentage points; IV estimates are slightly smaller than OLS estimates. Differences in impacts on score-sending behavior among students that differ by aid-eligibility and the meet-full-need status of the school are largely unsystematic (recognizing the greater imprecision in the IV estimates).

Table 3.

Absolute Impact of Increases in Tuition and Fees on SAT Score Sending at Different Categories of Institutions

                                                 Doesn't Meet Full Need    Meets Full Need
Category of Institutions              Method   All     Aid Inel.  Aid Elig.  Aid Inel.  Aid Elig.
All flagships                         OLS    −0.183    −0.180     −0.146     −0.227     −0.179
  (46 schools, N = 7,589,048)                (0.051)   (0.041)    (0.052)    (0.057)    (0.055)
                                      IV     −0.119    −0.023     −0.139     −0.162     −0.149
                                             (0.061)   (0.094)    (0.068)    (0.060)    (0.070)
Flagships and other R1 institutions   OLS    −0.188    −0.125     −0.262     −0.104     −0.168
  (88 schools, N = 29,816,254)               (0.032)   (0.036)    (0.036)    (0.031)    (0.032)
                                      IV     −0.054    −0.053     −0.166     −0.070     −0.170
                                             (0.069)   (0.080)    (0.097)    (0.056)    (0.068)
Elite flagships                       OLS    −0.136    −0.167     −0.100     −0.173     −0.122
  (17 schools, N = 5,565,617)                (0.050)   (0.042)    (0.059)    (0.041)    (0.052)
                                             [0.035]   [0.026]    [0.181]    [0.123]    [0.151]
                                      IV     −0.097    −0.085     −0.093     −0.130     −0.117
                                             (0.054)   (0.096)    (0.065)    (0.053)    (0.075)
                                             [0.269]   [0.558]    [0.268]    [0.149]    [0.365]
Elite flagships and other R1          OLS    −0.199    −0.153     −0.267     −0.099     −0.177
  institutions (27 schools,                  (0.051)   (0.050)    (0.052)    (0.056)    (0.051)
  N = 13,239,478)                            [0.009]   [0.014]    [0.002]    [0.157]    [0.016]
                                      IV     −0.144    −0.179     −0.256     −0.102     −0.179
                                             (0.076)   (0.098)    (0.084)    (0.074)    (0.086)
                                             [0.090]   [0.139]    [0.013]    [0.235]    [0.069]

Notes: Standard errors clustered on states are reported in parentheses and wild bootstrap p-values are reported in brackets (only for specifications focused on elite schools because of the smaller number of clusters). The mean score sending rate is 33.7%, 33.8%, 24.3%, and 27.6% for each group of institutions in the order listed in the table. OLS = ordinary least squares; IV = instrumental variable.

Our interpretation of these results is that our main findings described above regarding public flagships are generally robust to modifications in the types of schools included in the analysis. For the remainder of the analysis, we return to our focus on public flagship institutions.

Heterogeneity in Responsiveness to Tuition Increases

We also consider whether any heterogeneity exists in the results across population subgroups that may be differentially affected by sticker shock in their application decisions to public flagship institutions. Differences in behavioral responsiveness to price changes across groups are one possibility, but a simpler explanation would be differences in the underlying propensity to apply to one of these schools in the first place. A student who is less interested in applying to a school to begin with is less likely to respond to a price increase by withholding an application.

In our analysis, we separate students into population subgroups with different underlying propensities to apply to a public flagship. We measure these propensities by the average SAT score send rate for the 2006 through 2008 cohorts who graduated from high school before the tuition price spikes brought about by the financial crisis. We distinguish two groups that are more likely to apply to a public flagship—students whose combined math and verbal SAT scores are above the state-specific flagship “median” among enrolled students31 and students who have a parent who graduated from college. In our data, 45.6 percent and 41.0 percent of these students from the 2006 through 2008 cohorts who took the SAT sent their scores to a public flagship, respectively. We also distinguish students by race/ethnicity, separating them into categories of white, underrepresented minorities (black or Hispanic), or Asian. Among these three groups, our baseline SAT score send rate to public flagships is highest for Asian students (51.6 percent) and lowest for underrepresented minorities (32.2 percent).

The results of our analysis are reported in table 4. We report absolute impacts on the probability of sending SAT scores to a public flagship in response to tuition increases, using OLS and IV as before. Across all these groups, we see results similar to our earlier findings that the responsiveness of score sending to tuition increases is generally comparable regardless of aid-eligibility and the MFN status of the public flagship in the student's state of residence. Sticker shock appears to be a problem for all population subgroups.

Table 4.

Absolute Impact of Increases in Tuition and Fees at Public Flagship Institutions on SAT Score Sending among Different Population Subgroups

                                                 Doesn't Meet Full Need    Meets Full Need
Population Subgroup                   Method   All     Aid Inel.  Aid Elig.  Aid Inel.  Aid Elig.
SAT total above flagship median       OLS    −0.299    −0.191     −0.198     −0.516     −0.532
  (N = 1,487,920)                            (0.100)   (0.041)    (0.045)    (0.068)    (0.057)
                                      IV     −0.176    −0.011     −0.170     −0.454     −0.507
                                             (0.122)   (0.083)    (0.092)    (0.068)    (0.077)
At least one parent with college      OLS    −0.188    −0.165     −0.147     −0.224     −0.212
  degree (N = 4,073,558)                     (0.049)   (0.035)    (0.043)    (0.051)    (0.046)
                                      IV     −0.109    −0.028     −0.135     −0.166     −0.153
                                             (0.055)   (0.069)    (0.065)    (0.057)    (0.057)
White students only                   OLS    −0.144    −0.146     −0.156     −0.142     −0.127
  (N = 4,184,386)                            (0.040)   (0.034)    (0.044)    (0.044)    (0.049)
                                      IV     −0.163    −0.136     −0.139     −0.236     −0.172
                                             (0.084)   (0.140)    (0.079)    (0.096)    (0.111)
Underrepresented minorities           OLS    −0.081    −0.116     −0.074     −0.105     −0.072
  (N = 2,026,872)                            (0.064)   (0.058)    (0.060)    (0.075)    (0.068)
                                      IV     −0.005     0.035     −0.012      0.025     −0.032
                                             (0.074)   (0.084)    (0.081)    (0.097)    (0.079)
Asian American                        OLS    −0.321    −0.223     −0.190     −0.404     −0.323
  (N = 786,149)                              (0.084)   (0.060)    (0.067)    (0.072)    (0.071)
                                      IV     −0.198     0.123     −0.035     −0.255     −0.215
                                             (0.148)   (0.228)    (0.206)    (0.115)    (0.125)

Notes: The absolute impact is the effect of a change in tuition on the absolute rate of sending scores to flagship institutions. OLS = ordinary least squares; IV = instrumental variable.

Yet the absolute impact of the score send response is different across groups. Our analysis yields point estimates that are largest for students with high SAT scores and for Asian students. These groups have the highest baseline propensity to apply to a public flagship.

Additional Robustness Checks

Another potential issue in interpreting our results is that we restrict our sample to those students who sent any SAT scores, and those students may reflect a selected sample. It is possible that students are less likely to send any SAT scores if sticker prices are high. One might anticipate, though, that test-taking itself among the types of students who would attend leading public institutions is considerably less price-elastic. We have no direct evidence on this point. Other studies that focus on state ACT mandates (Goodman 2016; Hyman 2017), SAT mandates (Hurwitz et al. 2015), or policies making it easier to take the SAT (Bulman 2015) provide mixed evidence regarding the responsiveness of high-scoring students to these interventions. The two studies focusing on the SAT, though, find that SAT-taking among high-achieving students is not affected, supporting our point.

Another specification check examines the sensitivity of our results to alternative measures of aid-eligibility status. Our measure, median family income in the student's zip code, is imperfect in that some higher-income families reside in zip codes with relatively low median incomes, and vice versa. We constructed several alternative measures of likely aid-eligibility to test the sensitivity of our results. Instead of zip codes with median family income below $75,000, we tried an analogous measure with median family income less than $50,000. We also experimented with student-reported family income and parental education to distinguish those who are likely to be eligible for financial aid. The results of these analyses are reported in online appendix table A.3. They all yield similar estimates of the impact of tuition increases on scores sent to public flagship institutions.32

We also consider the ACT as a substitute for the SAT as an additional specification check. An “application” in our data is sending an SAT score to a school, but in some parts of the country, that is unlikely simply because the ACT is much more common. A student in one of those areas who takes and sends SAT scores may be selected in some nonrandom way that could have an impact on our results. To test this hypothesis, we restricted our sample to students residing in states where SAT score-sending rates are the highest—a third or more of a high school graduation cohort take the SAT. This occurs in about half the states. The results of this analysis are also reported in online appendix table A.3; the responsiveness of SAT score sends to changes in tuition is no different in this sample of students.

We also report in online appendix table A.3 an additional specification check that examines the influence that California plays in driving the results in our analysis of scores sent to flagship institutions. California is the most populous state, and its announced tuition increases provoked widely publicized protests, so perhaps the aggregate price sensitivity is driven primarily by UC Berkeley. We explore this possibility by simply dropping Berkeley from the sample. Again, we find our results to be robust to this sample restriction.33

The final section of online appendix table A.3 reports results for dropping both California and Michigan from the sample; this leaves only North Carolina and Virginia as MFN states. These two states did not include loans in their financial aid packages for low-income students during the sample period, so there is no scope for students to be reacting to increased loan amounts that could have theoretically accompanied sticker price increases. The results from excluding California and Michigan are less precise than in the specification with all four MFN states, but the results are qualitatively similar, suggesting that sticker price shock is indeed the most plausible explanation for our results.

The final set of specification checks we report addresses how score sending responds to changes in the cost of attendance (COA) rather than tuition and fees. The COA represents tuition and fees plus other components of cost (room and board, transportation and personal expenses, and books and supplies). If, as we argue in the section “Trends in College Pricing,” students respond to changes in tuition because that is what is publicized, then incorporating these other elements of cost introduces a random component into our key explanatory variable. In terms of our IV strategy, these additional elements of cost (which are typically set by the campuses, not the state) are also less affected by state budget conditions, weakening our identification strategy.

The results of our analysis using COA rather than tuition and fees, shown in online appendix table A.4, support these propositions. OLS estimates for all students are similar between the COA and tuition and fees specifications, but the results by MFN status and aid eligibility are considerably less precisely estimated. In the IV models, the state budget shock instrument is a weaker predictor of COA compared to tuition and fees; the F-statistic from the first stage fell from 20.8 to 4.87. Consequently, our results for the impact of changes in COA on score sending are less precise than the results for the impact of changes in tuition and fees in the aggregate and in models broken down by MFN status and aid eligibility. Overall, we believe this analysis supports our preference to focus on tuition and fees in this analysis.

If students responded to changes in college pricing with full information about their actual financial impact, then a tuition increase would have the potential to increase socioeconomic diversity at MFN colleges. At these schools, low- and moderate-income students are protected from tuition increases and should not reduce their likelihood of applying, while students at other schools and higher income students all face the higher cost and could reduce theirs. As we have just seen, however, aid-eligible students responded to an increase in the sticker price in the same way as everyone else, even though the increase did not affect them. Applications of all students declined in response to tuition increases, regardless of aid eligibility or their state flagship's MFN status.

This raises the question of where students eventually enrolled. If all students, including aid-eligible students in states whose public flagship meets full need, are less likely to apply to those flagships, what impact does that have on enrollments at those institutions and at the institutions with which they compete? We can use methods similar to those described earlier to address this question, simply replacing the dependent variable with enrollment rather than application. For this exercise, we use the NSC data for the same students included in our application analysis, focusing on enrollment at public flagship institutions.
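The swap of dependent variables can be sketched as follows. This is a minimal illustration with synthetic data and hypothetical variable names (enrolled, log_tuition, state, cohort), not the paper's actual data or specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the student-level sample; every variable name and
# magnitude here is hypothetical.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "state": rng.integers(0, 10, n),
    "cohort": rng.integers(2006, 2014, n),
})
df["log_tuition"] = (9.0 + 0.02 * (df["cohort"] - 2006)
                     + 0.05 * rng.standard_normal(n))
# Baseline enrollment probability near 0.17, declining in log tuition
p_true = 0.17 - 0.07 * (df["log_tuition"] - 9.0)
df["enrolled"] = (rng.random(n) < p_true).astype(int)

# Same form as the application models, with the dependent variable swapped
# to an enrollment indicator; standard errors clustered at the state level.
res = smf.ols("enrolled ~ log_tuition + C(state) + C(cohort)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]})
print(res.params["log_tuition"], res.bse["log_tuition"])
```

The only change relative to the application models is the left-hand-side variable; all fixed effects and the clustering structure carry over unchanged.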

Before reporting the results of this analysis, we note that the likely estimated impact on enrollments, if any, will be smaller than that on applications, since only a relatively small percentage of applicants to public flagships are accepted and enroll. Even if some students did not apply because of the price increase, they might not have been accepted, or they may have enrolled elsewhere anyway, reducing the potential impact on enrollment. The smaller estimated impact also creates a power issue, particularly in our IV specifications. With smaller anticipated effects and the larger standard errors associated with them (based on the properties of the binomial distribution), it becomes much more difficult to generate results precise enough to reject the null hypothesis of no effect.
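The power point can be illustrated with simple binomial arithmetic. The baseline rates and sample size below are purely illustrative, not the paper's: for an outcome with mean p, the standard error of an estimated proportion is sqrt(p(1-p)/n), so a proportional effect of size c*p shrinks faster than its standard error as p falls, leaving less power for a rarer outcome such as enrollment:

```python
import math

def t_ratio(p, n, c=0.10):
    """t-statistic for detecting a c*100 percent proportional decline in a
    binomial outcome with baseline rate p and sample size n (illustrative)."""
    se = math.sqrt(p * (1 - p) / n)
    return (c * p) / se

n = 100_000                      # hypothetical sample size
t_apply = t_ratio(0.30, n)       # common outcome, e.g., sending a score
t_enroll = t_ratio(0.05, n)      # rare outcome, e.g., enrolling at a flagship
print(round(t_apply, 1), round(t_enroll, 1))
```

The same proportional effect produces a much smaller t-ratio for the rarer outcome, which is why the enrollment IV estimates are harder to distinguish from zero.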

Indeed, our attempts to estimate IV models of the same form as earlier generated mainly insignificant coefficients. As a result, we report only OLS estimates in the remainder of this discussion. Despite this limitation, it is important to keep in mind that OLS estimates were similar to, if perhaps slightly larger than, IV estimates in our analysis of applications. Endogeneity in tuition setting does not appear to be a major problem during our sample window, though it may have a minor influence on our results.

The first issue we address is whether enrollments change at all at public flagships. As the most competitive public institutions in their states, these schools receive considerably more applications than they have seats to fill. A reduction in applications therefore does not necessarily generate a reduction in enrollment; flagships can always adjust their admissions standards to maintain enrollment.

In fact, this is what we see. The first row of table 5 reports the estimated impact on overall enrollment at public flagships in response to changes in tuition. We find no statistically significant changes in flagship enrollment in the aggregate and by MFN status or aid eligibility. This result is consistent with Barrow and Davis (2012), Long (2015), and Charles, Hurst, and Notowidigdo (2018), who show strong evidence of cyclicality in college enrollments at two-year public institutions and for part-time enrollments at four-year public institutions, but the impact on full-time enrollments at four-year public institutions is clearly smaller or perhaps non-existent. Hemelt and Marcotte (2011) and Deming and Walters (2017) also find small or zero estimated aggregate enrollment elasticities in their samples of four-year colleges and universities.

Table 5.

Estimated Impact of Tuition Increases on SAT Scores at Public Flagships and Enrollment at Different Types of Institutions

                                                      Doesn't Meet Full Need     Meets Full Need
                                              All     Aid Ineligible  Aid Eligible  Aid Ineligible  Aid Eligible

All Students (N = 7,589,048)
Public flagship enrollment                  −0.015    −0.020     −0.011     −0.031      0.005
                                            (0.009)   (0.010)    (0.010)    (0.011)    (0.015)

All SAT Takers Enrolled in Public Flagships (N = 671,830)
Public flagship average combined SAT score  −22.4     −18.7      −27.0      −16.0      −41.8
                                            (7.8)     (9.6)      (7.0)      (18.7)     (21.1)

High-Achieving Students (N = 1,487,920)
Public flagship enrollment                  −0.074    −0.059     −0.052     −0.114     −0.117
                                            (0.023)   (0.022)    (0.021)    (0.015)    (0.022)
Public 4-year non-flagship enrollment       −0.048    −0.016     −0.049     −0.082     −0.086
                                            (0.023)   (0.020)    (0.024)    (0.017)    (0.021)
Public out-of-state flagship enrollment      0.013     0.002      0.005      0.033      0.045
                                            (0.011)   (0.011)    (0.009)    (0.008)    (0.015)
Private 4-year enrollment                    0.075     0.033      0.080      0.116      0.115
                                            (0.024)   (0.019)    (0.021)    (0.016)    (0.015)
Private 4-year MFN enrollment                0.009    −0.009      0.022      0.015      0.026
                                            (0.009)   (0.009)    (0.011)    (0.009)    (0.015)

Notes: All estimates are obtained from ordinary least squares models of the same form as those reported in table 1 using National Student Clearinghouse data for all SAT score senders. “High achievers” are defined as those whose combined SAT scores are above the “median” of students enrolled in the student's public flagship institution. The “median” is defined to be the midpoint between the 25th and 75th percentile of combined SAT scores, as reported in IPEDS. MFN = meets full need.
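The “median” used to flag high achievers in the bottom panel is a simple midpoint. A sketch with hypothetical IPEDS percentile values (the scores below are illustrative, not actual IPEDS data):

```python
# Hypothetical 25th/75th percentile combined SAT scores for one flagship,
# as would be reported in IPEDS; the "median" proxy is their midpoint.
p25, p75 = 1150, 1350
midpoint = (p25 + p75) / 2          # 1250.0
student_sat = 1310
is_high_achiever = student_sat > midpoint
print(midpoint, is_high_achiever)
```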

To determine the extent to which this result is attributable to changes in selectivity among public flagships in our sample, we examine the impact of changes in sticker prices on the combined math and verbal SAT scores of enrolled students who sent their SAT scores to the flagship. The second row of table 5 reports the results of estimating models that are identical in specification except that the dependent variable is now the combined SAT score of enrolled SAT score-sending students at public flagships. The results indicate that a tuition increase reduces those scores. A 10 percent increase in tuition leads to a 2.2-point drop in combined SAT scores in the aggregate. The differences in effects by MFN and aid-eligibility status are small and not statistically significant.

We extend this analysis by focusing solely on “high-achieving” students, whose SAT scores are above the “median” (the midpoint of the 25th and 75th percentiles) of enrolled students at each institution. For these students, our OLS specifications show a statistically significant reduction in enrollment; overall, a 10 percent increase in tuition leads to a 0.7 percentage point decline in enrollment among this group.
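As a check on these magnitudes, the table 5 coefficients are semi-elasticities on log tuition, so the effect of a 10 percent increase is roughly the coefficient times 0.1 log points. A sketch using the table's point estimates:

```python
# Table 5 point estimates (effects of a one-log-point tuition change)
coef_enroll_high = -0.074   # high-achiever flagship enrollment (probability)
coef_avg_sat = -22.4        # average combined SAT of flagship enrollees
d_log = 0.10                # a 10% increase is roughly 0.1 log points

print(round(coef_enroll_high * d_log * 100, 1))  # ~ -0.7 percentage points
print(round(coef_avg_sat * d_log, 1))            # ~ -2.2 SAT points
```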

This raises the question of where high-achieving students who otherwise would have enrolled in their home state's public flagship end up enrolling. One possibility is that they reduce their college costs by living closer to home and attending a non-flagship four-year state university.34 Identifying the impact of flagship tuition on public non-flagship enrollment, however, is complicated because flagship and non-flagship tuition are both set within the same state's budgetary environment and are often determined through a similar process. Indeed, the correlation in tuition between flagships and non-flagships across states and years in our data is 0.91. Moreover, non-flagships do not necessarily have the same excess supply of applicants with which to maintain enrollment if a price increase reduces demand. If increases in flagship tuition are matched by tuition increases at non-flagship institutions, then we would expect to see a reduction in enrollments at those institutions, not an increase associated with substitution. In other words, the results of this analysis would have a negative bias.

Restricting our sample to high-achieving students, though, should mitigate this problem. In our data, high achievers are 2.5 times more likely to attend a public flagship than a public non-flagship institution (17.0 percent versus 6.8 percent), so limiting the sample to this group reduces the bias associated with correlated prices and allows us to better identify substitution. The results of this analysis are reported in the fourth row of table 5. We find no statistically significant change in the enrollment of high achievers at public non-flagships in response to a tuition increase.

The remainder of the table considers the impact on other enrollment options among high-achieving students. We consider the following alternatives: out-of-state flagship institutions, four-year private not-for-profit institutions, and the subset of those institutions that have “meet full need” financial aid policies. The final category likely offers the lowest cost of attendance for lower income students, but such institutions are not common and are typically highly selective. We note that the large jump in tuition at public institutions during the financial crisis was not matched by tuition increases at private institutions (Ma et al. 2019), suggesting that identifying the cross-price effect is easier here than in the case of public non-flagships. Own-price effects are adequately controlled for with year fixed effects, since private nonprofit institutions tend to have national (or at least regional) markets, and competition among those institutions restricts local differences in price variation over time. Similarly, the out-of-state flagship market is a national one, so year fixed effects will capture the average effect of price changes at those institutions.

The results indicate that high achievers are more likely to attend a private not-for-profit four-year institution when public flagship prices rise. A 10 percent increase in tuition at public flagships increased enrollment of these students at these private colleges by 0.8 percentage points. This roughly matches the decline in enrollment of high achievers at public flagships. We also see modest evidence of increases in enrollments at out-of-state flagships and MFN private institutions, but those effects are spotty and considerably smaller. These results are consistent with those found by Hemelt and Marcotte (2016).

Our student-level enrollment results may also help explain the small estimated aggregate own-price enrollment elasticities of Hemelt and Marcotte (2011) and Deming and Walters (2017). Hemelt and Marcotte (2011) estimate an average tuition and fee elasticity of total headcount of −0.1. Deming and Walters (2017) cannot reject the null of a zero price elasticity, but their confidence intervals contain the Hemelt and Marcotte (2011) average elasticity. Our results suggest that focusing on aggregate enrollment elasticities may obscure important distributional effects at selective institutions. Individual students, especially high-achieving students from low- and middle-income backgrounds, respond much more strongly to prices than aggregate elasticities imply.

Price discrimination in the form of high sticker prices coupled with generous financial aid for lower income students is a common pricing strategy at selective universities across the United States. In theory, this strategy can maximize tuition revenue while maintaining access for lower income students because generous financial aid awards keep actual prices affordable. However, if students are unaware that they will qualify for financial aid and are discouraged from applying by high sticker prices, the high sticker price–high aid strategy may instead discourage low- and middle-income students from applying, defeating its purpose.

In this paper, we examine the existence of sticker shock using exogenous variation in sticker prices at leading public higher education institutions during and after the 2008 financial crisis. Using data on SAT score sends corresponding to students in the high school classes of 2006 to 2013, we investigate whether low- and moderate-income students responded to price increases by reducing their application rates at institutions that meet full financial need. Notably, at those institutions, aid-eligible students would have been fully insulated from sticker price increases. Yet we find that they apply less often, just as higher income students do and just as students do at institutions that do not meet full need. These results suggest the existence of important informational frictions.

Interestingly, had low- and moderate-income students not exhibited sticker price shock, tuition price increases during the 2008 financial crisis would have led to significantly greater socioeconomic diversity at these public institutions that meet full need. As higher income students experienced actual price increases and reduced applications to them, unaffected low- and moderate-income students would not have adjusted their application behavior. Consequently, they would have constituted a larger share of the applicant pool and proportion of admitted students. However, our results indicate this did not occur.

Our results have important implications for public policy. One common proposal to address the complexity of the financial aid system is to simplify the FAFSA, which is used to determine a student's eligibility for Pell Grants and other forms of financial aid (cf. Dynarski and Scott-Clayton 2007).35 Though simplifying the FAFSA might increase participation in the financial aid system and college enrollment more generally, it may not go far enough to eliminate sticker price shock. To get students to apply in the first place, they need the ability to better forecast what their college costs will be at an early stage of the admissions process to overcome sticker shock.

Our results directly document the existence of sticker shock: students eligible for financial aid respond to the sticker price, not the actual price. The work of Dynarski et al. (2021), among others, documents that providing better pricing information changes student behavior. These analyses are complementary. In the end, both highlight the need to better communicate pricing information and available financial aid so that students can overcome misinformation about what college will really cost them and make informed educational decisions.

We are grateful to seminar participants at the National Bureau of Economic Research Fall Education Program Meeting, Dartmouth College, Harvard University Graduate School of Education, Lafayette College, Swarthmore College, Wellesley College, and the DC Economics of Education seminar for their comments and suggestions. This paper uses proprietary data from the College Board, which has reviewed this manuscript and granted permission to release it. The views and opinions expressed in this paper are those of the authors alone.

1. Authors’ analysis of data from Chetty et al. (2017). These statistics are based on those flagship institutions that can be separately identified in the Mobility Report Cards and reflect the most recent data year available (children born in 1991).

2. Authors’ analysis of data from Chetty et al. (2017).

3. For example, see Duke (2009), Asimov (2009), and Gordon and Khan (2009). Because the news media focused primarily on increases in tuition and fees rather than costs of attendance, we use tuition and fees as our primary price measure. We also explore the sensitivity of our results to the use of cost of attendance.

4. Most public institutions, including all those that do not meet full need, use the federal methodology (FM) to determine a family's EFC. This methodology uses only information reported on the Free Application for Federal Student Aid (FAFSA), including family size, the number of children enrolled in higher education, the age of the older parent, and the income and assets of the parent and student. Unlike the institutional methodology (IM), used by a few public institutions and hundreds of private institutions, the FM does not consider the net value of a family's primary residence or small businesses owned by the family.

5. To interpret the relative magnitude of this estimate, one quarter to one third of students who send SAT scores to any college or university sent them to the different categories of leading public institutions we examine.

6. It is important to note that Gurantz et al. (2019) report that they did not find a significant enrollment effect from a series of large randomized control trials that focused on reducing information barriers for low- and middle-income high-achieving or on-track students.

7. There is a distinction between schools that meet full need and those that are “need-blind” in admission, meaning that acceptance decisions are independent of financial need. All schools that are need-blind also meet full need, but not vice versa. In our analysis, all state flagships that meet full need are at least need-blind for students residing in that state, which is the group our analysis focuses on.

8. The extent to which students and parents are aware of the meet-full-need status of their flagship is unclear. Students could use a net price calculator to discover whether a school meets full need, but these policies are rarely stated clearly on university Web sites.

9. We are able to document that overall borrowing at UC Berkeley and the University of Michigan did not change noticeably in 2009–10. We are also able to document that the University of Virginia and the University of North Carolina had policies in place through this period that eliminated loans from the financial aid packages of lower income students.

10. We also assumed the student was eighteen years old, unmarried, with no dependents, was applying to college for the first time, and had one younger sibling. We further assumed that the student's parents were forty-seven and forty-eight years old, married, and paid no income tax. Table A.1, available in a separate online appendix that can be accessed on Education Finance and Policy's Web site at https://doi.org/10.1162/edfp_a_00372, provides a list of these flagship institutions. The state flagship institutions in California and Texas are Berkeley and Austin, respectively.

11. The exact EFC depends on the state and ranges from $2,111 to $3,036. We use $2,600 as an average for illustrative purposes in this discussion.

12. The following Web pages identify each state's policy of meeting full demonstrated financial need: https://finaid.umich.edu/how-aid-is-awarded/ (University of Michigan—Ann Arbor), https://sfs.virginia.edu/need (University of Virginia), http://admission.universityofcalifornia.edu/paying-for-uc/how-aid-works/index.html (University of California System), https://admissions.unc.edu/files/2013/09/Financial-Aid-Fact-Sheet.pdf (University of North Carolina—Chapel Hill), https://www.udel.edu/apply/undergraduate-admissions/financing-your-degree/ (University of Delaware; effective fall 2009), and https://www.washington.edu/huskypromise/ (University of Washington). The University of Wisconsin—Madison began the Bucky's Tuition Promise program, which guarantees free tuition and no fees for incoming freshmen from families earning less than $56,000 per year, in the 2018–19 school year (https://financialaid.wisc.edu/uw-madison-free-tuition-for-families-making-less-than-56k/). The University of Illinois began the Illinois Commitment program, which provides free tuition for families earning less than $61,000 per year, in the 2019–20 school year (https://osfa.illinois.edu/illinois-commitment/).

13. These amounts are for a student at the 25th percentile of the school's GPA and SAT distribution who would likely not qualify for merit aid.

14. The only other public institution of which we are aware that meets full need for state residents is the College of William and Mary in Virginia. It is classified as an R2 (high research activity) in the Carnegie system. In preliminary analyses, we experimented with including it, and doing so had no obvious impact on the results.

15. See Zinth and Smith (2012) for a categorization of tuition-setting authority by state.

16. The statistics reported here are calculated using one observation per school. We have also estimated analogous statistics weighted by the number of SAT score senders in each state. The patterns in the data are similar, but with even larger spikes in tuition and fees during the recession.

17. Our econometric models rely on state appropriations to distinguish economic activity, but within-state changes in this measure and the unemployment rate are very highly correlated. A regression of the state budget shock on the unemployment rate in a model including state fixed effects has an R2 of 0.97. We use the change in the unemployment rate here for ease of interpretation.

18. We sought to investigate the hypothesis that the marginal revenue generated from a tuition increase is lower at meet-full-need schools. Unfortunately, we were unable to locate data on revenue from tuition restricted to undergraduates.

19. These data are derived from data provided by the College Board (Copyright © 2006–13 The College Board www.collegeboard.org).

20. Self-reported income would be our preferred measure if it were well-measured, but it is often reported by the student and may not reflect an accurate assessment. Additionally, self-reported income is missing for about a third of the students.

21. Data on the number of high school graduates are from Western Interstate Commission for Higher Education (2016).

22. See Adams (2017) for a discussion of how the popularity of the SAT and ACT has changed over time.

23. We obtain almost identical results when we use nominal tuition levels, but in models that control for cohort fixed effects, the distinction between nominal and real values is obscured.

24. Our measure of student population is the number of high school seniors graduating in year t.

25. As of 2005, states varied considerably in their reliance on state appropriations. The University of Virginia, the least reliant institution in our sample, derived 6 percent of its total revenue from appropriations, while the State University of New York at Buffalo, the most reliant, derived 42 percent.

26. The considerably shorter sample period that we use also prevents us from extending our analysis to Deming and Walters’ alternative instrument, caps and freezes on state tuition. We experimented with this approach but found too little power in our first stage to implement it.

27. Another potential weakness in our identification strategy is that institutions facing budget constraints may also increase their recruiting budgets to attract higher income students. Such efforts designed to attract more international and out-of-state students would not affect our analysis of state residents. For state residents, there would be no reason to spend more on recruiting higher income students, since they are likely already in the applicant pool at these leading public institutions.

28. Note that the first-stage F-statistic is strong, with a value of 20.8. Table A.2 in the online appendix displays the first-stage results for our analysis of public flagships and for other types of institutions, as described below. Because tuition and fees are the key lever institutions can use to compensate for changes in state appropriations, we would expect our budget shock instrument to have a strong negative effect on tuition and fees. Interpreting the exact value is not intuitive.

29. There are currently no tests for weak instruments in the case of multiple endogenous regressors with non-independent and identically distributed (i.i.d.) errors (Baum, Schaffer, and Stillman 2007). However, we follow the advice of Baum, Schaffer, and Stillman (2007) and nevertheless report the Kleibergen-Paap Wald rank F-test statistic in table 1 for our triple-difference specification (Kleibergen and Paap 2006). This statistic cannot be compared to the usual Stock-Yogo weak identification critical values because the Stock-Yogo values assume i.i.d. errors. Critical values have not yet been tabulated for the Kleibergen-Paap rk statistic because the thresholds depend on the type of violation of the i.i.d. assumption, which differs across applications (Bazzi and Clemens 2013). Angrist and Pischke (2009) show using Monte Carlo simulations that two-stage least squares is approximately median-unbiased in the just-identified case even with weak instruments, so we believe weak instruments bias is unlikely to be a problem in our setting.

30. The flagship institutions that satisfy these conditions include those in CA, CT, FL, GA, IA, IL, KS, MA, MD, MI, MN, NC, NJ, OH, TX, VA, and WI. The other R1 institutions that satisfy them are Clemson, Georgia Tech, NC State, Binghamton University, Stony Brook University, UT Dallas, UCLA, UC Santa Barbara, UC San Diego, and Virginia Tech. Given the relatively small number of institutions included in our analyses of these more selective institutions, we report wild bootstrap p-values along with more traditional clustered standard errors.

31. The “median” is defined as the midpoint between the 25th and 75th percentiles, based on data available from IPEDS.

32. We have also estimated models using a continuous measure of financial-aid eligibility based on the probability that a student's family income is below $75,000, using this zip code–level measure of median family income. The results are qualitatively similar to those reported. We chose to report the discrete version because it is more easily interpretable in a quasi-experimental framework.

33. We do note, however, that IV estimates become very unstable when we drop California from the sample because there appears to be too little variation in labor market conditions across the other meet-full-need states to provide sufficient identification.

34. High-achieving students are very unlikely to attend community college; just 2.1 percent do so in our sample. Perhaps because of this, we find no evidence of substitution to these institutions.

35. This is the system used for calculating financial aid at most, but not all, public flagship institutions. Many private colleges also require the CSS Profile; simplifying the FAFSA would have no impact on schools that rely on that method for determining ability to pay.

Adams
,
Caralee J.
2017
.
In race for test-takers, ACT outscores SAT-for now
.
Education Week
,
24 May
.
Angrist
,
Joshua
, and
Jörn-Steffen
Pischke
.
2009
.
A note on bias in just identified IV with weak instruments
.
Available
http://econ.lse.ac.uk/staff/spischke/mhe/josh/solon_justid_April14.pdf.
Asimov
,
Nanette.
2009
.
UC president recommends huge tuition increases
.
San Francisco Chronicle
,
11 September. Available
https://www.sfgate.com/education/article/UC-president-recommends-huge-tuition-increases-3218630.php.
Barrow
,
Lisa
, and
Jonathan M. V.
Davis
.
2012
.
The upside of down: Postsecondary enrollment in the Great Recession
.
Economic Perspectives
36
(
4
):
117
129
.
Baum
,
Christopher F.
,
Mark E.
Schaffer
, and
Steven
Stillman
.
2007
.
Enhanced routines for instrumental variables/generalized methods of moments estimation and testing
Stata Journal 7 (4): 465–506.
Bazzi, Samuel, and Michael A. Clemens. 2013. Blunt instruments: Avoiding common pitfalls in identifying the causes of economic growth. American Economic Journal: Macroeconomics 5 (2): 152–186.
Bettinger, Eric P., Bridget Terry Long, Philip Oreopoulos, and Lisa Sanbonmatsu. 2012. The role of application assistance and information in college decisions: Results from the H&R Block FAFSA experiment. Quarterly Journal of Economics 127 (3): 1205–1242.
Bleemer, Zachary, and Basit Zafar. 2018. Intended college attendance: Evidence from an experiment on college returns and costs. Journal of Public Economics 157: 184–211.
Bound, John, Breno Braga, Gaurav Khanna, and Sarah Turner. 2019. Public universities: The supply side of building a skilled workforce. RSF: The Russell Sage Foundation Journal of the Social Sciences 5 (5): 43–66.
Bulman, George. 2015. The effect of access to college assessments on enrollment and attainment. American Economic Journal: Applied Economics 7 (4): 1–36.
Charles, Kerwin Kofi, Erik Hurst, and Matthew J. Notowidigdo. 2018. Housing booms and busts, labor market opportunities, and college attendance. American Economic Review 108 (10): 2947–2994.
Chetty, Raj, John N. Friedman, Emmanuel Saez, Nicholas Turner, and Danny Yagan. 2017. Mobility report cards: The role of colleges in intergenerational mobility. Mobility statistics and student outcomes by college and birth cohort. Available http://www.equality-of-opportunity.org/data/.
Chetty, Raj, John Friedman, Emmanuel Saez, Nicholas Turner, and Danny Yagan. 2020. Income segregation and intergenerational mobility across colleges in the United States. Quarterly Journal of Economics 135 (3): 1567–1633.
College Board and Arts & Sciences Group. 2012. A majority of students rule out colleges based on sticker price. studentPOLL 9 (1). Available https://www.artsci.com/insights/studentpoll/volume-9-issue-1.
Deming, David J., and Christopher R. Walters. 2017. The impact of price caps and spending cuts on U.S. postsecondary attainment. NBER Working Paper No. 23736.
Duke, Alan. 2009. University of California students protest 32 percent tuition increase. CNN, 19 November. Available http://www.cnn.com/2009/US/11/19/california.tuition.protests/index.html.
Dynarski, Susan, C. J. Libassi, Katherine Michelmore, and Stephanie Owen. 2021. Closing the gap: The effect of a targeted, tuition-free promise on college choices of high-achieving, low-income students. American Economic Review 111 (6): 1721–1756.
Dynarski, Susan M., and Judith Scott-Clayton. 2007. College grants on a postcard: A proposal for simple and predictable federal student aid. Hamilton Project Discussion Paper No. 2007-01.
Goodman, Sarena. 2016. Learning from the test: Raising selective college enrollment by providing information. Review of Economics and Statistics 98 (4): 671–684.
Gordon, Larry, and Amina Khan. 2009. UC regents approve fee hike amid loud student protests. Los Angeles Times, 19 November. Available https://latimesblogs.latimes.com/lanow/2009/11/uc-regents-approve-fee-hike-amid-loud-student-protests.html.
Gurantz, Oded, Jessica Howell, Michael Hurwitz, Cassandra Larson, Matea Pender, and Brooke White. 2019. Realizing your college potential? Impacts of College Board's RYCP campaign on postsecondary enrollment. EdWorkingPaper No. 19-40. Annenberg Institute at Brown University. Available https://edworkingpapers.com/ai19-40.
Hemelt, Stephen W., and Dave E. Marcotte. 2011. The impact of tuition increases on enrollment at public colleges and universities. Educational Evaluation and Policy Analysis 33 (4): 435–457.
Hemelt, Stephen W., and Dave E. Marcotte. 2016. The changing landscape of tuition and enrollment in American public higher education. RSF: The Russell Sage Foundation Journal of the Social Sciences 2 (1): 42–68.
Hoxby, Caroline M., and Christopher Avery. 2013. The missing “one-offs”: The hidden supply of high-achieving, low-income students. Brookings Papers on Economic Activity 2013 (1): 1–65.
Hoxby, Caroline, and Sarah Turner. 2013. Expanding college opportunities for high-achieving, low income students. SIEPR Discussion Paper No. 12-014. Available https://siepr.stanford.edu/sites/default/files/publications/12-014paper_6.pdf.
Hoxby, Caroline M., and Sarah Turner. 2015. What high-achieving low-income students know about college. American Economic Review: Papers & Proceedings 105 (5): 514–517.
Huntington-Klein, Nick. 2016. The search: The effect of the college scorecard on interest in colleges. Unpublished Working Paper. Available https://www.aeaweb.org/conference/2017/preliminary/paper/hf7A8bfB. Accessed 3 February 2020.
Hurwitz, Michael, and Jonathan Smith. 2018. Student responsiveness to earnings data in the college scorecard. Economic Inquiry 56 (2): 1220–1243.
Hurwitz, Michael, Jonathan Smith, Sunny Niu, and Jessica Howell. 2015. The Maine question: How is 4-year college enrollment affected by mandatory college entrance exams? Educational Evaluation and Policy Analysis 37 (1): 138–159.
Hyman, Joshua. 2017. ACT for all: The effect of mandatory college entrance exams on postsecondary attainment and choice. Education Finance and Policy 12 (3): 281–311.
Kleibergen, Frank, and Richard Paap. 2006. Generalized reduced rank tests using the singular value decomposition. Journal of Econometrics 133: 97–126.
Levine, Phillip B. 2014. Transparency in college costs. Brookings Institution Working Paper. Available http://www.brookings.edu/research/papers/2014/11/12-transparency-in-college-costs-levine.
Long, Bridget Terry. 2015. The financial crisis and college enrollment: How have students and their families responded? In How the financial crisis and the Great Recession affected higher education, edited by J. R. Brown and C. M. Hoxby, pp. 209–233. Chicago: University of Chicago Press.
Longmire & Company. 2013. Your value proposition: How prospective students and parents perceive value and select colleges. Available https://www.longmire-co.com/documents/studies/Value_Proposition_Study_Report.pdf.
Ma, Jennifer, Sandy Baum, Matea Pender, and C. J. Libassi. 2019. Trends in college pricing 2019. New York: The College Board.
Mulhern, Christine. 2021. Changing college choices with personalized information at scale: Evidence on Naviance. Journal of Labor Economics 39 (1): 219–262.
Oreopoulos, Philip, and Ryan Dunn. 2013. Information and college access: Evidence from a randomized field experiment. Scandinavian Journal of Economics 115 (1): 3–26.
Pallais, Amanda. 2015. Small differences that matter: Mistakes in applying to college. Journal of Labor Economics 33 (2): 493–520.
Sallie Mae. 2016. How America pays for college 2016. Sallie Mae's national study of college students and parents. Available https://news.salliemae.com/files/doc_library/file/HowAmericaPaysforCollege2016FNL.pdf.
State Higher Education Executive Officers Association. 2018. SHEF: FY 2018 State Higher Education Finance. Available https://sheeomain.wpengine.com/wp-content/uploads/2019/04/SHEEO_SHEF_FY18_Report.pdf. Accessed 7 January 2020.
Smith, Jonathan. 2018. The sequential college application process. Education Finance and Policy 13 (4): 545–575.
Western Interstate Commission for Higher Education. 2016. Knocking at the college door: Projections of high school graduates through 2032. Available https://knocking.wiche.edu/. Accessed 7 January 2020.
Zinth, Kyle, and Matthew Smith. 2012. Tuition-setting authority for public colleges and universities. Education Commission of the States. Available https://www.ecs.org/clearinghouse/01/04/71/10471.pdf.

Supplementary data