Abstract
Recent policy and research efforts have focused on simplifying the college-going process, improving transparency around college costs, and helping students make informed decisions. In 2012, the Obama administration released the “shopping sheet,” a standardized financial aid offer that is intended to provide students with simplified information about costs, loan options, and college outcomes. This paper examines the impact of the shopping sheet (adopted by more than 400 four-year colleges in two years) using: (1) administrative data from a field experiment among admitted and already-enrolled students at a public university, and (2) college-level data from a quasi-experiment among four-year colleges. Findings provide some evidence that information in the shopping sheet relating a college's graduation rate to other colleges led to decreased borrowing at colleges with poor graduation outcomes. Additionally, the shopping sheet decreased borrowing at colleges that enroll high shares of students receiving federal student aid and underrepresented minority students. These findings indicate the shopping sheet may be particularly salient to students who traditionally face higher informational barriers during the college-going process.
1. Introduction
During the college-going process, students make complex decisions about whether to enroll, where to enroll, and how to pay for college. Informational or behavioral barriers along the way—missteps or mistakes such as missing admission or aid deadlines, failing to complete a form, or not understanding loan options—can deter students (Lavecchia, Liu, and Oreopoulos 2016). For instance, as many as 20 to 30 percent of college-intending high school graduates do not enroll in college in part because of tasks they must complete after high school (Castleman and Page 2014). Additionally, more than two million college students who might be eligible for federal student aid do not complete the Free Application for Federal Student Aid (FAFSA) (White House 2015), foregoing an estimated $24 billion in grants and loans each year (Kofoed 2016). Many college students also underestimate how much they have borrowed or are unaware they have borrowed (Akers and Chingos 2014).
In recent years, many efforts within the federal government have centered on providing simple and transparent information to help students evaluate college options (Meyer and Rosinger 2019). As part of these efforts, in 2012 the Obama administration announced the release of the “shopping sheet,” which was recently renamed the “college financing plan” but is still commonly known as the shopping sheet. The shopping sheet is a financial aid offer developed by the U.S. Department of Education (ED) and the Consumer Financial Protection Bureau (CFPB) aimed at simplifying and standardizing information about college costs, loan options, and student outcomes. Financial aid offers are how colleges communicate information to students about the cost of attendance and financial aid. Information varies across colleges and is often incomplete or unclear (Kantrowitz 2010; Whitsett and O'Sullivan 2012; TICAS 2013). As a result, these offers have proven difficult to interpret and, for students who receive offers from more than one college, to compare.
More than 200 four-year colleges and nearly 500 other postsecondary institutions volunteered to use the shopping sheet during the 2012–13 academic year, a number that doubled the following year and reached more than 3,000 by 2017.1 Around three-quarters of undergraduates attend a college that uses the shopping sheet (USDE 2016). A bill recently introduced in the U.S. Congress would require colleges to use a standardized format like the shopping sheet (Senate Bill 888 2019). The shopping sheet is intended to help students make informed decisions by simplifying information about costs and loans and making it easier to compare offers. The shopping sheet also provides reference points comparing a college's outcomes to other colleges on two metrics: graduation rate (categorized as low, medium, or high) and loan default rate (compared with the national average). Reference points place information in context and increase its salience by signaling whether a value falls above or below a benchmark (Kahneman and Tversky 1979; Tversky and Simonson 1993). The shopping sheet may therefore make colleges that perform better (or worse) than the average college more (or less) attractive. As a result, students may be more likely to enroll at colleges with better outcomes or to adjust borrowing based on the likelihood of graduating or repaying loans.
The shopping sheet may be particularly salient to student populations that traditionally face greater barriers to college information. Low-income and underrepresented minority students, for instance, tend to overestimate costs and lack information about aid (Horn, Chen, and Chapman 2003; Rosa 2006; Grodsky and Jones 2007) and often have lower levels of financial literacy (Lusardi and Mitchelli 2007). Disparities in access to college advising, advanced course offerings, and other resources also contribute to difficulty navigating college decisions (McDonough 1997; Avery, Howell, and Page 2014). As a result, simplification efforts are likely to be particularly important for low-income and underrepresented minority students.
In this study, I evaluate the shopping sheet's impact on enrollment and borrowing using experimental and quasi-experimental methods. I investigate heterogeneity in treatment effects at colleges with better (and worse) outcomes and among student populations that face greater informational barriers or colleges that serve large shares of students from these populations. I first use data from a field experiment among admitted and already-enrolled students at a public university. I then use college-level data from a quasi-experiment to examine a broader range of colleges during the first two years of adoption. Findings provide some evidence that reference points relating to college outcomes led to decreased shares of students borrowing at colleges with poor graduation outcomes. Findings also indicate the shopping sheet may be particularly salient to students who traditionally face higher informational barriers: colleges serving larger shares of students who receive federal student aid and underrepresented minority students experienced a $200 decrease in the average amount borrowed. Among underrepresented minority students, the shopping sheet led to a $1,300 decrease in the amount borrowed at the college where the experiment was conducted, though the result was significant only for admitted and not already-enrolled students.
In the following section, I draw on insights from the behavioral sciences to understand complexity in financial aid offers and how efforts to provide simple and transparent information are likely to influence enrollment and borrowing. I then describe (1) the field experiment examining the shopping sheet's impact at a public college, and (2) the quasi-experiment examining its influence at a broader range of four-year colleges. I conclude with implications for policy and practice.
2. Conceptual Framework
Scholars frequently examine education decisions from an economic perspective in which students weigh the expected costs and benefits of college options and choose the option expected to maximize the net present value of their investment. However, low graduation rates and rising loan default rates indicate that students do not always make optimal decisions from a cost–benefit perspective: Just over half of students who started college in 2009 completed a degree six years later (Shapiro et al. 2015), while the share of students defaulting on loans has risen to 14 percent (College Board 2016). This suggests that many students enroll at colleges at which they are unlikely to succeed, fail to complete a degree, or borrow too much relative to future earnings. Insights from the behavioral sciences (a field influenced by economics, psychology, and other social sciences) demonstrate that cognitive biases, as well as the context in which people make decisions, can lead to suboptimal outcomes from a cost–benefit perspective, particularly when decisions involve complex or unfamiliar information (Kahneman and Tversky 1979; Bertrand et al. 2005).
Financial aid offers are complex for a number of reasons. First, the sheer amount of information students receive about costs, aid, and loans makes it difficult to know what is most important. Unless information stands out or seems salient, students are likely to ignore it (Loewenstein, Sunstein, and Golman 2014). Offers do not usually contain information on student outcomes—for example, the share of students who graduate or who make loan payments. Because humans have limited attention and are biased toward available information (Kahneman 2011), students are unlikely to seek out this information despite its relevance to the benefits they can expect to receive from attending a particular college. Students also are likely biased toward the present—for example, borrowing more now rather than focusing on the long-term impact of debt (Barr, Bird, and Castleman 2019).
Image 1 (available in a separate online appendix that can be accessed on Education Finance and Policy's website at www.mitpressjournals.org/doi/suppl/10.1162/edfp_a_00260) provides an example of an aid offer commonly used by colleges. The offer lists grants, work study, and loan options but does not clearly distinguish between grants and loans. There is no information about a student's likelihood of graduating or repaying loans, important outcomes to consider when making college decisions. Students typically view offers online, and information is spread across several pages or tabs, making it difficult to gather necessary information in one place.
Some students receive offers from more than one college, making it difficult to interpret and compare across colleges that differ in the information and language used to describe aid. Colleges themselves also differ on multiple dimensions (e.g., cost, quality, and location), attributes that involve tradeoffs. For instance, a student may travel farther to attend a higher-quality college than one nearby. Decisions involving tradeoffs require a great deal of cognitive effort, especially when information is unclear or not readily available (Gourville and Soman 2005).
Complexity is compounded by students' limited experience: they have little practice making college decisions and few opportunities to get these decisions right. Feedback is delayed, and students do not know until after college whether they attended an institution from which they would graduate or whether they borrowed an amount they can repay. As a result, students are not able to learn from and correct previous mistakes when making enrollment and borrowing decisions each year. For low-income students, college decisions are likely to be especially complex and the outcomes particularly costly. Research on decision making under scarcity indicates that the constant tradeoffs low-income households make with every financial decision are distracting and fatiguing, and they consume cognitive bandwidth that could otherwise be devoted to planning ahead or problem solving (Mullainathan and Shafir 2013). This increases the likelihood of errors, and the consequences of errors are costly for families with limited resources. Additionally, low-income and underrepresented minority students are less likely to have access to quality college counseling and other resources to assist in college decisions (McDonough 1997; Horn, Chen, and Chapman 2003; Grodsky and Jones 2007; Avery, Howell, and Page 2014).
Research indicates that reducing informational and behavioral barriers can improve outcomes during the college-going process (see Page and Scott-Clayton 2016 for a review). For instance, sending information about costs, admission procedures, and application fee waivers led high-achieving, low-income students to apply to and enroll at more selective colleges (Hoxby and Turner 2013). For college-intending, low-income students, counseling and reminders about pre-matriculation tasks increased enrollment (Castleman and Page 2015). Similarly, providing FAFSA assistance (Bettinger et al. 2012) and sending text messages reminding students to file FAFSA (Castleman and Page 2016) can improve enrollment and persistence.
While many studies focus on college-going, financial aid offers (a later stage of the college process during which students evaluate costs and loan options) are less frequently considered. A few recent studies demonstrate that interventions delivered to students as they evaluate offers can impact borrowing decisions. For instance, information about cumulative debt and future monthly payments (Stoddard, Urban, and Schmeiser 2017), changing default borrowing options (Marx and Turner 2017), and sending text messages about loans and counseling assistance (Barr, Bird, and Castleman 2019) influenced borrowing. In a recent study, however, information about previous borrowing had no impact on subsequent borrowing (Darolia and Harper 2018).
The shopping sheet (Image 2, available in the online appendix) is intended to provide students with simple, timely, and salient information as they evaluate college costs and loan options. Colleges that use the shopping sheet typically include the one-page document as a supplement to information they already provide about financial aid. The shopping sheet, however, has several features distinguishing it from other offers. First, it lists the cost of attendance adjusted for grant aid at the top of the page, a feature missing from many traditional offers. Loans are listed separately from grants so that students can distinguish aid they must repay from aid they do not.
Second, the shopping sheet differs from traditional offers by providing reference points comparing a college's graduation and loan default rates to the average college. Graduation rate is ranked as low, medium, or high relative to other colleges, and default rate is compared with the national average. Reference points have been influential in motivating behavior in other contexts: For instance, giving consumers three tip options can lead to increased tipping by nudging people toward the middle option (Grynbaum 2009), and providing information about energy consumption relative to neighbors lowered energy use in high-consumption households (Allcott 2011). Information about college outcomes is located at the top of the page, a position that generally signals importance, and is formatted as a graphic to distinguish it from other information in the sheet (ideas42 2016).
While reference points highlight particularly risky investments, efforts to simplify information are likely especially important for students who face greater informational barriers, including low-income and underrepresented minority students, prompting the following questions:
1. How does the shopping sheet impact enrollment and borrowing?
2. How do reference points relating to college outcomes in the shopping sheet impact enrollment and borrowing?
3. How does the shopping sheet's impact on enrollment and borrowing differ among student populations that traditionally face greater informational barriers and at colleges that serve higher shares of students from these populations?
Rather than influencing decisions in a particular direction, the shopping sheet was intended to help students make informed decisions. Although the ED and CFPB did not design it as a tool to decrease (or increase) student borrowing per se, I anticipate that, by clearly distinguishing loans from grants, the shopping sheet will make loans particularly salient where before they may not have been clearly listed or identified as loans. As a result, I hypothesize that the shopping sheet will influence decisions about whether to borrow and how much to borrow and, for students who receive offers from more than one college, influence enrollment decisions by shifting students away from colleges where they would have to borrow more.
Reference points in particular are likely to shape decisions. Humans are loss-averse, preferring to avoid losses rather than to acquire equivalent gains (Kahneman and Tversky 1979), suggesting students are likely to be responsive to information about a risky investment. I hypothesize that students will be less likely to enroll at colleges with worse outcomes—that is, low graduation or high default rates. Likewise, borrowing is likely to change depending on outcomes because students are more easily able to assess the likelihood of graduating and paying back loans. I also hypothesize that students will be less likely to borrow, and that those who do borrow will borrow less, at colleges with worse outcomes, given that the investment is riskier if students are less likely to graduate and/or repay debt. Finally, I hypothesize that the shopping sheet will have a larger impact among students who traditionally face higher barriers to college information, for instance, low-income and underrepresented minority students, and at colleges serving higher shares of these students.
3. Experimental Evidence
I collaborated with administrators at a public university during the spring of 2013, the initial year colleges used the shopping sheet, to evaluate its impact on enrollment and borrowing. The partner university is a regional comprehensive college that provides broad access, admitting more than 80 percent of applicants and enrolling primarily in-state students. Net cost was $15,000, below the average of $16,500 for primarily bachelor's degree–granting colleges (College Scorecard 2016). Students at the university graduate at about the same rate as their peers at similar colleges: the six-year graduation rate was in the medium range (45 percent). The loan default rate (14 percent) was near the national average of 13.4 percent, and median borrowing was around $14,500, below the average of $17,800. The "averageness" of the university provides an appropriate context in which to evaluate the shopping sheet.
Although the partner university resembles regional comprehensive colleges across the country—that is, institutions that enroll around 70 percent of public sector students (Klor de Alva 2019)—the state in which it is located offers a broad-based merit aid program, which greatly reduces the cost of attendance for in-state students, especially relative to private or out-of-state colleges. Students receiving the scholarship not only face lower costs in state but would also forgo a large amount of aid by attending college outside the state. As a result, findings might best be generalized to colleges in states that offer some type of merit aid program or in states with relatively low tuition.
Sample and Data
The field experiment focused on two groups: (1) students admitted to the partner university and (2) students already enrolled in their first year at the university. Among admitted students, I examine enrollment and borrowing decisions made for the first time; for already-enrolled students, who make borrowing decisions anew each year, I examine the borrowing decisions they make while in college. Enrolled students were not notified about how much they had previously borrowed, so decisions are likely shaped by the readily available information in the shopping sheet rather than information they would have had to seek out, though I control for previous borrowing to account for its influence on subsequent borrowing.
To obtain the number of students in treatment agreed upon with the partner college at the outset of the study, I randomly assigned students to treatment and control conditions from among the first waves of students whose aid was packaged. Once the agreed-upon number of students had been assigned to treatment, subsequent students were excluded from the sample. Students in the study represent a subpopulation of students at the partner university: Just over 60 percent of admitted and already-enrolled students are included. The subpopulation may differ in some ways from the full population of students. For instance, nearly half of students at the university take out loans: A similar share of sampled already-enrolled students borrowed, but just under one quarter of admitted students borrowed. Admitted students in the study were nearly all in-state students (92 percent), higher than the 86 percent among the full population, which may explain differences in borrowing. The share of sampled students who received Pell grants, who were female, and who were nonwhite was similar to the full population. Nonetheless, results can best be generalized to in-state students rather than the full population of students at the partner university. Students included in the study may also have differed in unobservable ways from their peers—for instance, students whose aid was ready to be packaged at the time of randomization submitted their paperwork early, indicating that they may have been more motivated, more knowledgeable about the process, or more likely to attend the college. These characteristics may also be associated with a smaller effect of additional information; that is, these students may already have information or be motivated to find it elsewhere and thus be less likely to respond to the shopping sheet.
I draw on administrative data from admissions, financial aid, and enrollment records at the partner university. The first outcome variable is a dichotomous variable indicating whether a student enrolled at the university. I examine enrollment decisions among admitted students only; virtually all of the already-enrolled students (96 percent) reenrolled at the university in the 2013–14 academic year. The other two outcomes relate to borrowing. The first is a dichotomous variable indicating whether a student borrowed federal loans (subsidized and unsubsidized Stafford and Perkins loans); the second is a continuous variable indicating the amount borrowed among borrowers. Borrowing data were not available for students who did not enroll at the partner university, so I estimate models for borrowing outcomes conditional upon enrollment, which could bias estimates on borrowing outcomes if the students who enrolled were less (or more) likely to borrow. However, results indicate that the shopping sheet did not impact enrollment, so borrowing is unlikely to be influenced by selection.2
The independent variable of interest is a dichotomous variable indicating whether a student was assigned to receive the shopping sheet. Administrative data include baseline academic (high school grade point average, ACT score, whether a student took more than 30 credit hours), socioeconomic and financial (amount of grant aid received, Pell grant eligibility, parent income, whether a student had a parent with a college degree, amount previously borrowed for already-enrolled students, whether a student was in state), and demographic (race, gender) covariates.
Intervention Design
Students received financial aid offers in Spring/Summer 2013, and I observed enrollment and borrowing in 2013–14. Students assigned to treatment received the shopping sheet (Image 2 in the online appendix) in addition to the offer traditionally used by the partner university; students assigned to the control group received the traditional offer only. Colleges that use the shopping sheet, including the partner university, generally include it as a supplement to information they already provide about aid, so this paper evaluates the impact of the policy as implemented.
Students received notifications through their online financial aid account. The university's traditional offer includes several screens of information that students click through to find information about and accept financial aid. Because aid offers vary across colleges, results from the field experiment may be most generalizable to institutions whose offers resemble the partner university's. However, the partner university uses an online format similar to many colleges. Additionally, the shopping sheet differs from many traditional offers, including the one at the partner university, by providing: (1) loans listed separately from grants and after net cost, and (2) information about a college's outcomes relative to other colleges.
Estimates should be interpreted as intent-to-treat effects (the effect of being assigned to treatment) on enrollment and borrowing rather than the effect of treatment on the treated. Technological limitations prevented tracking students' access to their financial aid accounts, making it difficult to know which students saw the shopping sheet page or how long they viewed it. To increase the likelihood that students and/or parents viewed the shopping sheet, the partner university also mailed the sheet to treatment students. Students in the control group did not receive an additional mailing, so estimates could reflect an impact of increased contact from the college. However, students receive information regularly about enrollment steps during this time, so one additional mailing may not have seemed out of the ordinary.
Sample Size and Baseline Equivalence
Sample sizes were chosen to detect a relatively small effect, given mixed evidence from previous work on whether and how much information affects college decisions (e.g., Hoxby and Turner 2013; Oreopoulos and Dunn 2013).3 The sample of 2,655 admitted students (N = 1,100 in treatment) provided statistical power to detect an effect size of 0.11 on enrollment, or about one tenth of a standard deviation difference in treatment and control means, at 80 percent power, 0.05 significance, and 10 percent of outcome variance explained by covariates. The sample of 821 already-enrolled students (N = 437 in treatment) provided power to detect a 0.19 effect size using the same criteria.4
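These minimum detectable effect sizes can be approximated with the standard formula MDES ≈ (z_{α/2} + z_{power}) · sqrt((1 − R²) / (P(1 − P)N)), where P is the share assigned to treatment. The sketch below is an illustrative back-of-the-envelope check rather than the study's own power calculation, and it reproduces figures close to those reported.

```python
from scipy.stats import norm

def mdes(n, p_treat, r2=0.10, alpha=0.05, power=0.80):
    """Approximate minimum detectable effect size (in standard deviation
    units) for a two-arm trial in which covariates explain r2 of the
    outcome variance: (z_{alpha/2} + z_{power}) * sqrt((1 - r2) / (p(1-p)n))."""
    multiplier = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return multiplier * ((1 - r2) / (p_treat * (1 - p_treat) * n)) ** 0.5

# Admitted students: N = 2,655 with 1,100 assigned to treatment
print(round(mdes(2655, 1100 / 2655), 3))  # ~0.105, close to the reported 0.11
# Already-enrolled students: N = 821 with 437 assigned to treatment
print(round(mdes(821, 437 / 821), 3))     # ~0.186, close to the reported 0.19
```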
Table 1 lists treatment and control group means and differences in means for baseline covariates. There were no significant differences in means among admitted students (columns 2–4). To determine whether covariates jointly predicted assignment to treatment, I conducted an F-test by regressing treatment status on pre-treatment covariates, using all observations by including dummy indicators for missing values. The test failed to reject the null hypothesis (p = 0.65), indicating that observable covariates did not jointly predict assignment to treatment.
Table 1. Treatment and Control Group Means for Baseline Covariates

| | Admitted Students | | | | Already-Enrolled Students | | | |
|---|---|---|---|---|---|---|---|---|
| | N (1) | Treatment Group (2) | Control Group (3) | Difference in Means (4) | N (5) | Treatment Group (6) | Control Group (7) | Difference in Means (8) |
| High school grade point average | 2,637 | 3.42 | 3.40 | 0.02 | 821 | 3.56 | 3.58 | −0.02 |
| ACT score | 2,655 | 22.0 | 21.9 | 0.1 | 821 | 23.6 | 23.4 | 0.2 |
| Previously taken >30 credit hours | | | | | 820 | 25.9% | 22.7% | 3.3% |
| Amount previously borrowed | | | | | 821 | $2,280 | $2,036 | $244 |
| In-state resident | 2,655 | 92.5% | 91.9% | 0.6% | 821 | 94.7% | 95.8% | −1.1% |
| Pell grant eligible | 2,655 | 46.3% | 45.5% | 0.7% | 821 | 40.0% | 42.7% | −2.7% |
| Parent income | 2,536 | $74,511 | $78,504 | −$3,993 | 803 | $80,808 | $79,602 | $1,206 |
| Parent with college degree or higher | 2,617 | 65.6% | 63.7% | 1.9% | 815 | 75.4% | 68.4% | 7.0%* |
| Grant aid | 2,655 | $3,396 | $3,513 | −$117 | 821 | $6,495 | $7,199 | −$405 |
| Female | 2,654 | 58.6% | 60.8% | −2.2% | 821 | 61.8% | 66.7% | −4.9% |
| Black | 2,288 | 9.1% | 10.4% | −1.3% | 745 | 5.2% | 5.2% | 0.0% |
| Latino | 2,288 | 2.2% | 2.5% | −0.2% | 745 | 2.5% | 1.5% | 1.0% |
| White | 2,288 | 86.6% | 85.4% | 1.2% | 745 | 90.0% | 91.8% | −1.8% |
| Other race/ethnicity | 2,288 | 2.1% | 1.8% | 0.3% | 745 | 2.2% | 1.5% | 0.8% |
| Race/ethnicity missing or unreported | 2,655 | 14.0% | 13.7% | 0.3% | 821 | 8.0% | 10.7% | −2.7% |
*p < 0.10.
Among already-enrolled students (columns 6–8), those in the treatment group were statistically more likely to have a parent with a college degree than those in the control group: 75 percent of the treatment group had at least one parent with a college degree compared with 68 percent of the control group. Differences in other baseline covariates were not statistically significant, and covariates did not jointly predict assignment to treatment (p = 0.13).
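The omnibus balance test described above can be sketched as a regression of the treatment indicator on pre-treatment covariates, followed by the regression F-test that the covariate coefficients are jointly zero. The example below uses synthetic data and hypothetical variable names; it illustrates the procedure rather than reproducing the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the administrative data (illustration only).
rng = np.random.default_rng(0)
n = 2655
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),        # 0/1 assignment indicator
    "hs_gpa": rng.normal(3.4, 0.4, n),
    "act": rng.normal(22, 3, n),
    "pell_eligible": rng.integers(0, 2, n),
    "parent_income": rng.lognormal(11, 0.6, n),
    "female": rng.integers(0, 2, n),
})

# Regress treatment status on pre-treatment covariates; the regression
# F-test asks whether the covariates jointly predict assignment.
balance = smf.ols(
    "treatment ~ hs_gpa + act + pell_eligible + np.log(parent_income) + female",
    data=df,
).fit()
print(balance.f_pvalue)  # large p-values are consistent with randomization
```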
Analytic Strategy
To examine whether students who traditionally face greater informational barriers during the college-going process were particularly responsive to attempts to simplify information, I estimated the full model among subgroups of low-income (measured by Pell grant eligibility and by bottom income quartile), first generation, and underrepresented minority students (black, Latino, and Native American). I also estimated models to test for heterogeneous effects of the shopping sheet among other subpopulations of students for whom information may be particularly salient or who traditionally rely more on loans to finance their education. These subgroups included female students; students in the top income quartile or the top ACT quartile, who may be particularly attuned to college information and have greater support interpreting it; and already-enrolled students who had previously borrowed.
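The full model itself is not reproduced in this excerpt; the sketch below shows the general form of an intent-to-treat linear probability model with baseline covariates and robust standard errors, estimated on the full sample and re-estimated within a subgroup. The data and variable names are synthetic and hypothetical, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data (illustration only; names are hypothetical).
rng = np.random.default_rng(1)
n = 1162
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "borrowed": rng.integers(0, 2, n),          # 0/1 outcome
    "hs_gpa": rng.normal(3.4, 0.4, n),
    "pell_eligible": rng.integers(0, 2, n),
    "underrep_minority": rng.integers(0, 2, n),
})

# Intent-to-treat linear probability model with robust (HC1) standard errors.
full = smf.ols("borrowed ~ treatment + hs_gpa + pell_eligible",
               data=df).fit(cov_type="HC1")

# Heterogeneity: re-estimate the same model within a subgroup of interest.
subgroup = smf.ols("borrowed ~ treatment + hs_gpa + pell_eligible",
                   data=df[df["underrep_minority"] == 1]).fit(cov_type="HC1")

print(full.params["treatment"], subgroup.params["treatment"])
```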
Results
Table 2 presents results for admitted (columns 1–3) and already-enrolled (columns 4–5) students, overall and among subgroups of Pell grant–eligible students, students from the bottom (and top) income quartiles, first generation students, underrepresented minority students, female students, students in the bottom (and top) ACT quartile, and students who had previously borrowed (among already-enrolled students). Estimates for whether a student borrowed were conditional upon enrollment; estimates for the amount borrowed were conditional upon borrowing. Results indicate that, in the full sample, assignment to receive the shopping sheet did not have a statistically significant effect on whether a student enrolled, whether a student borrowed, or how much a student borrowed.
Table 2. Estimates of the Shopping Sheet's Impact on Enrollment and Borrowing

| | Admitted Students | | | Already-Enrolled Students | |
|---|---|---|---|---|---|
| | Whether a Student Enrolled (1) | Whether a Student Borrowed (2) | Amount Borrowed (3) | Whether a Student Borrowed (4) | Amount Borrowed (5) |
| Full sample | −0.02 | 0.04 | −66.87 | 0.00 | 227.24 |
| | (0.02) | (0.03) | (178.90) | (0.02) | (211.34) |
| N | 2,471 | 1,162 | 542 | 791 | 348 |
| Pell eligible | −0.04 | 0.06 | −172.28 | 0.02 | 404.31 |
| | (0.03) | (0.04) | (273.28) | (0.04) | (330.15) |
| N | 1,057 | 506 | 288 | 313 | 169 |
| Bottom income quartile | −0.04 | 0.01 | 182.44 | −0.01 | 603.70 |
| | (0.04) | (0.06) | (384.98) | (0.05) | (445.19) |
| N | 617 | 283 | 149 | 197 | 104 |
| Top income quartile | 0.01 | 0.03 | −494.85 | −0.01 | 65.97 |
| | (0.04) | (0.05) | (448.54) | (0.03) | (409.88) |
| N | 629 | 281 | 70 | 200 | 56 |
| First generation status | −0.03 | 0.00 | −262.15 | −0.03 | 292.44 |
| | (0.03) | (0.05) | (310.68) | (0.05) | (357.34) |
| N | 844 | 400 | 221 | 215 | 118 |
| Underrepresented minority | −0.08 | 0.07 | −1334.71* | −0.12 | 1657.46 |
| | (0.06) | (0.09) | (709.30) | (0.10) | (1236.85) |
| N | 282 | 126 | 72 | 50 | 32 |
| Female | −0.00 | 0.07* | 290.25 | 0.02 | 151.41 |
| | (0.03) | (0.04) | (213.40) | (0.03) | (257.98) |
| N | 1,484 | 706 | 332 | 504 | 236 |
| Bottom ACT quartile | −0.04 | 0.07 | 70.34 | −0.01 | 674.17* |
| | (0.04) | (0.06) | (376.89) | (0.04) | (355.62) |
| N | 685 | 291 | 180 | 277 | 154 |
| Top ACT quartile | 0.01 | 0.05 | −70.01 | −0.07 | −703.93 |
| | (0.04) | (0.05) | (423.03) | (0.05) | (626.13) |
| N | 508 | 274 | 71 | 138 | 27 |
| Previously borrowed | | | | 0.05* | 259.58 |
| | | | | (0.03) | (241.13) |
| N | | | | 318 | 293 |
Notes: Robust standard errors in parentheses. Estimates for whether a student enrolled and whether a student borrowed (conditional on enrollment) come from linear probability models. Estimates for amount borrowed (conditional on borrowing) come from linear models. All models include baseline covariates: indicators for gender, ethnicity, in-state residency, Pell grant eligibility, one parent with college degree or higher, previously taken >30 college credit hours (already-enrolled students), and continuous measures of high school grade point average, ACT score, natural log of parent income, grant aid, and amount previously borrowed (already-enrolled students). Pell grant eligibility, income, parent education, race, gender, and ACT scores are excluded as covariates in models for the corresponding subgroups.
*p < 0.10.
Among admitted underrepresented minority students, those assigned to receive the shopping sheet borrowed around $1,300 less than their counterparts in the control group, a marginally significant effect at the 0.10 level; the corresponding estimate was positive and insignificant among already-enrolled students. Importantly, the reduction for admitted students was concentrated in unsubsidized Stafford loans, with no changes in other loans (see table A.1 in the online appendix for results by subsidized and unsubsidized loans). Thus, among minority students making borrowing decisions for the first time, the shopping sheet may represent one strategy to reduce debt that accrues interest while students are in college. I found little evidence of heterogeneous effects by income, first generation status, gender, or academic achievement. Small sample sizes reduced statistical power for subgroup analyses, but no clear patterns emerged to indicate the intervention had a particularly strong effect on subgroups other than underrepresented minority students.
4. Quasi-Experimental Evidence
I next use a difference-in-differences (DID) estimation strategy with inverse propensity score weighting to evaluate the impact of the shopping sheet during its first two years of use, the most recent years for which data were available. I first consider the overall impact of the shopping sheet and then examine its impact at colleges with better (or worse) outcomes and at colleges serving larger shares of students receiving federal grant aid and underrepresented minority students.
Data and Methods
Data come from the National Center for Education Statistics’ Integrated Postsecondary Education Data System and the College Scorecard (see https://collegescorecard.ed.gov/data/). The sample consists of 1,731 primarily bachelor's degree-granting (four-year) colleges observed annually from the 2007–08 to the 2014–15 academic years. Colleges first used the shopping sheet in Spring/Summer 2013 to award aid for 2013–14, providing six years of pre-policy and two years of post-policy observations at colleges that initially adopted, and seven pre-policy and one post-policy observation at colleges adopting in the second year.
Outcome variables are the percent of admitted students who enrolled (yield rate), the percent of full-time, first-time students who borrowed (subsidized and unsubsidized Stafford and Perkins loans), and the average amount that full-time, first-time students borrowed (among borrowers). Outcomes are measured at the college level but correspond to student-level outcomes in the field experiment. Similar to the experiment, enrollment and borrowing outcomes are related: If enrollment shifts, the share of students borrowing may change as a result of the composition of students changing. For instance, students who may have had to borrow more may choose to attend a less expensive college after receiving information about net cost and loans, leaving enrollees less likely to borrow. Similarly, if the share of students borrowing decreases, students who borrow may be more needy and borrow more as a result. Disentangling the shopping sheet's impact on outcomes, net of its impact on other outcomes, is difficult. I controlled for several time-varying covariates that capture the composition of students: admission rate, percent sending FAFSA to two or more colleges, enrollment, percent Pell grant recipient enrollment, percent underrepresented minority student enrollment (black, Latino, and Native American), and percent female enrollment, reported annually for each college. Additionally, I controlled for graduation rate, default rate, tuition and fees, and average grant aid per student, which likely shape enrollment and borrowing. Financial figures are adjusted for inflation and scaled. Table A.2 in the online appendix lists variables, definitions, and sources.
The treatment variable is a dummy variable indicating whether a college used the shopping sheet in a given year. A list of shopping sheet colleges is maintained by the ED and is continually updated. To capture shopping sheet adoption in the first two years, I used the Internet Archive: Wayback Machine (https://archive.org/web/) to access the list of adopters as of 19 April 2013, which I coded as shopping sheet colleges in the first year, and as of 15 April 2014, which I coded as shopping sheet colleges in the second year. I selected mid-April because students typically receive offers in the spring, so the lists should be comprehensive of adopters at the time colleges send aid offers to students. Around 12 percent of four-year colleges (N = 211) adopted the shopping sheet in the first year, and nearly one-quarter of four-year colleges (N = 409) in the second year.
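The estimating equations referenced below as equations 2–4 are not reproduced in this excerpt. A generic two-way fixed-effects specification consistent with the description in this section would take roughly the following form (the notation here is mine, not the paper's):

$$
Y_{ct} = \beta \left( \text{ShoppingSheet}_{c} \times \text{Post}_{ct} \right) + \mathbf{X}_{ct}'\boldsymbol{\gamma} + \alpha_c + \tau_t + \varepsilon_{ct},
$$

where $Y_{ct}$ is the yield rate, percent of students borrowing, or average amount borrowed at college $c$ in year $t$; the interaction equals 1 in years a college used the shopping sheet; $\mathbf{X}_{ct}$ is the vector of time-varying covariates; and $\alpha_c$ and $\tau_t$ are college and year fixed effects. Models with college-specific linear trends would add a term such as $\theta_c t$.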
Equations 2–4 provide an overall estimate of the shopping sheet's impact on enrollment and borrowing. However, the shopping sheet's prominently featured reference points relating to graduation and loan default outcomes may have shaped enrollment and borrowing in different ways at colleges with better (or worse) outcomes relative to others. To test for heterogeneous effects, I next estimated models restricting the sample to colleges by graduation category: high (≥56.6 percent in 2014 and ≥57.7 percent in 2015, N = 713), medium (between 37.2 and 56.6 percent in 2014 and between 39.3 and 57.7 percent in 2015, N = 733), or low (<37.2 percent in 2014 and <39.3 percent in 2015, N = 578); and default category: above or below average (13.4 percent in 2014 and 14.7 percent in 2015, N = 311 above and N = 1,385 below). I then restricted the sample to colleges with both low graduation and above average default rates to examine outcomes at particularly risky colleges (N = 235). While graduation and default rates are correlated, not all colleges with low graduation rates had high default rates and vice versa: 75 percent of colleges with above average default rates had low graduation rates, and 42 percent of colleges with low graduation rates had above average default rates. Colleges were categorized as low graduation rate if they were in the low category in either 2014 or 2015, and similarly for each of the other categories.
To examine whether enrollment and borrowing responses differed based on the population of students served, I restricted the sample to colleges with high shares of students receiving federal grant aid through the Pell program (>38.4 percent of students receiving Pell grants, the sample mean) and underrepresented minority students (>20.3 percent underrepresented minorities, the sample mean). Finally, I restricted the sample to selective colleges, those admitting fewer than 65.6 percent of applicants (the sample mean), to examine whether enrollment and borrowing differed at more selective institutions, those that disproportionately enroll students from more privileged backgrounds who tend to have greater access to college counseling and other resources to help evaluate information.
Causal inference in DID designs rests on the assumption that shopping sheet and comparison colleges would have experienced similar trends in outcomes in years after policy adoption absent treatment. If violated, DID estimates will capture not only the impact of the shopping sheet's introduction but other differences in enrollment or borrowing trends between treatment and comparison colleges. For instance, if colleges adopted the shopping sheet in response to relatively high or rapidly rising borrowing rates, a model that does not account for this would yield biased estimates that capture different trends between groups.
To address differences between shopping sheet and comparison colleges, I estimated a second set of models that include inverse propensity score weights as a balancing statistic, accounting for observable differences that influence the likelihood that a college adopted the shopping sheet. To do this, I first estimated a college's propensity score, or the probability of a college adopting the shopping sheet, conditional on a range of baseline covariates measured the year prior to adoption (2013) using logistic regression. Baseline covariates used to estimate propensity scores were selected to simulate the selection process, or the factors that shaped a college's decision to adopt. The shopping sheet was hailed as an effort to help students understand college options, and an initial group of ten prominent institutions, including Arizona State University, the University of North Carolina at Chapel Hill, Vassar College, and the state systems of New York, Texas, Maryland, and Massachusetts, was quickly joined by nearly 200 other institutions in 2012–13 and an additional 200 the next year. Discussion focused on transparency around costs and loan options (Nelson 2012) rather than on showcasing high graduation or low default rates. However, college outcomes, as well as factors such as pricing and borrowing, likely shaped decisions to adopt. For instance, a college with relatively high borrowing may adopt the shopping sheet in an effort to reduce borrowing by giving students more information.
In modeling the selection process for shopping sheet adoption, baseline covariates included: graduation and loan default rates (metrics highlighted in the shopping sheet); tuition and fees, grant aid, and median borrowing, as measures of a college's net price and students’ reliance on loans; the share of students who sent FAFSA to at least two colleges (a measure of the extent to which students at a college were considering other colleges), student characteristics such as percent Pell grant recipients, percent underrepresented minority students, and percent female; admission rate (a measure of college selectivity), whether a college is public (the majority of adopting colleges were public), and dummy variables for state (several state systems adopted as a whole).
After estimating a college's propensity of adopting the shopping sheet in 2014 or 2015 conditional on covariates, I calculated inverse propensity score weights for comparison colleges: (propensity score/[1 – propensity score]). Shopping sheet colleges maintained a weight of 1 to estimate the average treatment effect on the treated, that is, the average impact of the shopping sheet at shopping sheet colleges. Weights were included in the second set of models.
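A minimal sketch of this weighting step appears below. It uses synthetic baseline data and hypothetical variable names; the actual model includes the fuller covariate set described above, plus state dummies.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic baseline (pre-adoption) college-level data; names are hypothetical.
rng = np.random.default_rng(2)
n = 1731
base = pd.DataFrame({
    "adopted": rng.integers(0, 2, n),           # 1 = adopted the shopping sheet
    "grad_rate": rng.uniform(0.2, 0.9, n),
    "default_rate": rng.uniform(0.0, 0.2, n),
    "tuition": rng.normal(20000, 8000, n),
    "public": rng.integers(0, 2, n),
})

# 1. Estimate each college's propensity to adopt with a logistic regression.
ps_model = smf.logit(
    "adopted ~ grad_rate + default_rate + tuition + public", data=base
).fit(disp=0)
base["pscore"] = ps_model.predict(base)

# 2. ATT weights: adopters keep a weight of 1; comparison colleges are
#    weighted by pscore / (1 - pscore).
base["ipw"] = np.where(base["adopted"] == 1, 1.0,
                       base["pscore"] / (1 - base["pscore"]))
```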
Table 3 provides baseline covariate means at shopping sheet and comparison colleges in 2013 and 2014, when adoption occurred.5 Shopping sheet colleges, on average, charged around $6,000 less in tuition and offered around $4,000 less in grant aid than comparison colleges. They had slightly, though not significantly, higher graduation rates, were nearly twice the size of comparison colleges, and were more likely to be public (more than 60 percent of adopters). Figures A.1 and A.2 in the online appendix provide kernel distributions of baseline covariates for the unweighted and weighted samples. After weighting, the covariate distributions of comparison colleges more closely mirror those of shopping sheet colleges, making similar trends in outcomes more plausible.
Table 3. Baseline Covariate Means at Shopping Sheet and Comparison Colleges

| | 2013 | | | 2014 | | |
|---|---|---|---|---|---|---|
| | Shopping Sheet Colleges (1) | Comparison Colleges (2) | Difference in Means (3) | Shopping Sheet Colleges (4) | Comparison Colleges (5) | Difference in Means (6) |
| Yield rate | 36.7% | 37.3% | −0.7% | 35.6% | 36.2% | −0.6% |
| Percent of students borrowing | 60.3% | 62.1% | −1.8% | 58.5% | 61.5% | −3.1%** |
| Average amount borrowed | $5,945 | $6,056 | −$111 | $5,819 | $5,993 | −$173*** |
| Graduation rate | 53.8% | 51.3% | 2.6% | 51.7% | 52.0% | 0.2% |
| Loan default rate | 7.3% | 7.6% | −0.4% | 6.9% | 7.6% | −0.7%** |
| Tuition and fees | $14,359 | $20,434 | −$6,075*** | $14,636 | $21,666 | −$7,030*** |
| Grant aid | $7,873 | $11,123 | −$3,249*** | $8,303 | $12,087 | −$3,784*** |
| Acceptance rate | 65.1% | 64.3% | 0.8% | 66.8% | 64.7% | 2.2%* |
| Share sending FAFSA to 2+ colleges | 57.9% | 61.4% | −3.5%*** | 58.8% | 62.6% | −3.8%*** |
| Undergraduate enrollment | 10,406 | 4,950 | 5,456*** | 10,018 | 4,272 | 5,746*** |
| Percent Pell enrollment | 37.4% | 41.2% | −3.8%*** | 38.3% | 41.4% | −3.1%*** |
| Percent underrepresented minority | 19.4% | 21.4% | −2.0% | 19.9% | 22.1% | −2.2%* |
| Percent women | 55.8% | 55.5% | 0.3% | 55.2% | 55.5% | −0.3% |
| Percent public colleges | 66.8% | 27.6% | 39.3%*** | 63.3% | 22.8% | 40.6%*** |
| N (number of colleges) | 211 | 1,520 | | 409 | 1,322 | |
FAFSA = Free Application for Federal Student Aid.
*p < 0.10; **p < 0.05; ***p < 0.01.
Figures 1–3 plot trends in yield rate, percent of students borrowing, and average amount borrowed for shopping sheet and weighted comparison colleges from 2008 to 2015. Plots in each figure show trends for the full sample and subgroups: high, medium, and low graduation rate; below average default rate; above average default rate; low graduation and high default rates; colleges with higher shares of Pell grant recipients; colleges with higher shares of underrepresented minority students; and selective colleges.
Yield rates across the plots and across shopping sheet and comparison colleges follow a downward trend over the time period with little clear change after the introduction of the shopping sheet. In subgroups of colleges with relatively worse outcomes—particularly colleges with both low graduation rates and high default rates—the downward trend is sharper after the introduction of the shopping sheet but later attenuated. The percent of students borrowing at shopping sheet and comparison colleges increased from 2008 to 2012, at which point the share leveled off and eventually declined. A similar trend in the average amount borrowed appears at colleges with both low graduation rates and high default rates after the introduction of the shopping sheet: the average amount decreases the first year after implementation at shopping sheet colleges but is attenuated the following year. This downward dip in the first year after adoption followed by a bounce back in year 2, however, does not appear as a significant difference in borrowing from pre-policy years in analytic models. Relative to weighted comparison colleges, the average amount borrowed appears to decline after the introduction of the shopping sheet at colleges with low graduation rates, below average default rates, and colleges with high shares of Pell grant recipients and underrepresented minorities. No clear shifts in the share of students borrowing occurred after the introduction of the shopping sheet.
To further test whether results can be attributed to the shopping sheet rather than to other trends or policy changes occurring at the same time, I conducted event study analyses, assigning a series of leads and lags for shopping sheet use to examine whether there was an anticipatory effect prior to adoption of the shopping sheet or a delayed response a year after adoption. Coefficients from the event study analyses are plotted in figures A.3–A.5 in the online appendix. These placebo tests were largely null, indicating no significant effects in years prior to actual adoption. The few significant results generally occur in early years and may simply reflect chance given the number of tests.
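The event-study specification is not written out in this excerpt; consistent with the description above, it would take roughly the following form (the notation is mine, not the paper's):

$$
Y_{ct} = \sum_{k \neq -1} \delta_k \, \mathbf{1}\left[ t - t^{*}_{c} = k \right] + \mathbf{X}_{ct}'\boldsymbol{\gamma} + \alpha_c + \tau_t + \varepsilon_{ct},
$$

where $t^{*}_{c}$ is the year college $c$ adopted the shopping sheet and the year immediately before adoption ($k = -1$) is the omitted reference period. Coefficients on the leads ($k < -1$) serve as the placebo tests for anticipatory effects, while coefficients on the lags ($k \geq 0$) trace out responses in the adoption year and the year after.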
Results
Tables 4–6 present DID estimates of the shopping sheet's impact on yield rate, percent of students borrowing, and average amount borrowed from unweighted (columns 1–3) and weighted (columns 4–6) models. The first column for each outcome provides estimates from models that include the treatment indicator (shopping sheet × post) and college and year fixed effects, the second adds college-level time-varying covariates, and the third adds college-specific linear trends. Results are provided for the full sample of colleges and subsamples: high graduation rate, medium graduation rate, low graduation rate, below average default rate, above average default rate, low graduation and above average default, high Pell grant enrollment, high underrepresented minority student enrollment, and selective colleges.
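As an illustration of the weighted two-way fixed-effects models behind these tables, the sketch below runs a DID regression with college and year fixed effects on synthetic panel data, applying inverse propensity score weights and clustering standard errors by college. The variable names and data are hypothetical; the paper's covariates and college-specific trends are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic college-by-year panel (illustration only; names are hypothetical).
rng = np.random.default_rng(3)
years = range(2008, 2016)
panel = pd.DataFrame([(c, t) for c in range(60) for t in years],
                     columns=["college", "year"])
panel["adopter"] = (panel["college"] < 15).astype(int)      # adopting colleges
panel["post"] = (panel["year"] >= 2014).astype(int)         # post-adoption years
panel["treat_post"] = panel["adopter"] * panel["post"]      # shopping sheet x post
panel["pct_borrowing"] = rng.normal(60, 5, len(panel))      # outcome
panel["ipw"] = np.where(panel["adopter"] == 1, 1.0,
                        rng.uniform(0.2, 2.0, len(panel)))  # ATT-style weights

# Weighted DID with college and year fixed effects (entered as dummies);
# standard errors are clustered at the college level.
did = smf.wls("pct_borrowing ~ treat_post + C(college) + C(year)",
              data=panel, weights=panel["ipw"]).fit(
    cov_type="cluster", cov_kwds={"groups": panel["college"]})
print(did.params["treat_post"])
```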
Table 4. Difference-in-Differences Estimates of the Shopping Sheet's Impact on Yield Rate

| | Unweighted Sample | | | Weighted Sample | | |
|---|---|---|---|---|---|---|
| | (1) | (2) | (3) | (4) | (5) | (6) |
| Full sample (N = 1,731) | 0.22 | 0.57 | 0.14 | 0.44 | 0.54 | 0.75 |
| | (0.40) | (0.40) | (0.46) | (0.42) | (0.42) | (0.48) |
| High graduation rate colleges (N = 713) | 0.07 | 0.37 | 0.10 | 0.34 | 0.35 | 0.55 |
| | (0.41) | (0.43) | (0.57) | (0.44) | (0.49) | (0.62) |
| Medium graduation rate colleges (N = 733) | 0.55 | 0.91 | −0.01 | 0.40 | 0.61 | 0.63 |
| | (0.59) | (0.57) | (0.67) | (0.59) | (0.56) | (0.64) |
| Low graduation rate colleges (N = 578) | 0.83 | 0.87 | 1.43 | 1.81 | 1.67 | 2.58* |
| | (1.16) | (1.15) | (1.35) | (1.30) | (1.26) | (1.45) |
| Low default rate colleges (N = 1,385) | −0.01 | 0.16 | 0.63 | 0.60 | 0.65 | 1.86 |
| | (1.23) | (1.37) | (1.47) | (1.39) | (1.50) | (1.47) |
| High default rate colleges (N = 311) | 0.19 | 0.55 | 0.04 | 0.38 | 0.47 | 0.55 |
| | (0.42) | (0.41) | (0.47) | (0.43) | (0.43) | (0.50) |
| Low graduation & high default rate colleges (N = 235) | 0.07 | −0.10 | 0.73 | 0.40 | 0.40 | 1.48 |
| | (1.69) | (1.86) | (1.88) | (1.95) | (2.05) | (2.02) |
| High Pell colleges (N = 755) | 0.45 | 0.44 | −0.12 | 0.76 | 0.89 | 0.89 |
| | (0.83) | (0.81) | (0.90) | (0.87) | (0.86) | (0.95) |
| High minority colleges (N = 519) | 1.62* | 1.67* | −0.59 | 1.09 | 1.48* | 0.19 |
| | (0.86) | (0.86) | (0.85) | (0.90) | (0.89) | (0.87) |
| High selectivity (N = 711) | 0.55 | 1.36** | 0.52 | 0.79 | 1.36** | 1.01 |
| | (0.56) | (0.57) | (0.70) | (0.56) | (0.57) | (0.72) |
| College covariates | No | Yes | Yes | No | Yes | Yes |
| College-specific linear trends | No | No | Yes | No | No | Yes |
Notes: Covariates include graduation rate, default rate, tuition and fees, average grant aid per student, admission rate, percent of students sending Free Application for Federal Student Aid to 2+ colleges, total enrollment, percent Pell recipient enrollment, percent underrepresented minority student enrollment (black, Latino, and Native American), and percent female enrollment. Columns 4–6 come from models that include inverse propensity score weights. High Pell colleges defined as those at which >38.4 percent receive Pell grants (sample mean), high minority colleges defined as those at which >20.3 percent are underrepresented minorities (sample mean), and high selectivity colleges defined as those admitting <65.6 percent of applicants (sample mean).
*p < 0.10; **p < 0.05; ***p < 0.01.
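The tables in this section report coefficients from six specifications but not the estimating equation itself. One specification consistent with the covariate, trend, and weighting rows, offered here only as an illustrative sketch (the notation is mine rather than the paper's), is a generalized difference-in-differences model of the form

\[
Y_{it} = \beta\,\text{ShoppingSheet}_{it} + \mathbf{X}_{it}'\gamma + \alpha_i + \tau_t + \lambda_i t + \varepsilon_{it},
\]

where \(Y_{it}\) is the outcome for college \(i\) in year \(t\); \(\text{ShoppingSheet}_{it}\) equals 1 in years after college \(i\) adopts the shopping sheet; \(\mathbf{X}_{it}\) contains the covariates listed in the table notes (included in columns 2, 3, 5, and 6); \(\alpha_i\) and \(\tau_t\) are college and year fixed effects; and \(\lambda_i t\) captures the college-specific linear trends added in columns 3 and 6.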
| Sample | (1) Unweighted | (2) Unweighted | (3) Unweighted | (4) Weighted | (5) Weighted | (6) Weighted |
|---|---|---|---|---|---|---|
| Full sample (N = 1,731) | −0.94** (0.44) | −1.22*** (0.38) | −0.24 (0.45) | −1.64** (0.64) | −1.33*** (0.50) | −0.01 (0.52) |
| High graduation rate colleges (N = 713) | −0.55 (0.56) | −0.80 (0.49) | −0.28 (0.61) | −0.87 (0.57) | −0.81 (0.50) | 0.14 (0.58) |
| Medium graduation rate colleges (N = 733) | −1.02* (0.61) | −0.88 (0.54) | −0.26 (0.63) | −1.98* (1.13) | −0.87 (0.76) | −0.26 (0.88) |
| Low graduation rate colleges (N = 578) | −2.43** (1.18) | −2.98*** (1.11) | 0.18 (1.35) | −2.23* (1.16) | −2.29** (1.14) | 0.25 (1.30) |
| Low default rate colleges (N = 1,385) | −0.01 (1.23) | 0.16 (1.37) | 0.63 (1.47) | −2.22 (1.52) | −0.90 (1.56) | 0.50 (1.70) |
| High default rate colleges (N = 311) | −0.58 (0.44) | −1.14*** (0.37) | −0.39 (0.43) | −1.52** (0.70) | −1.30** (0.50) | −0.18 (0.53) |
| Low graduation & high default rate colleges (N = 235) | −3.76* (2.15) | −2.67 (2.23) | 0.38 (2.57) | −3.64* (2.15) | −2.43 (2.34) | −0.51 (2.42) |
| High Pell enrollment colleges (N = 755) | −2.07** (0.82) | −1.91** (0.76) | 0.05 (0.91) | −3.43** (1.42) | −2.51** (1.04) | −0.34 (1.19) |
| High minority student enrollment colleges (N = 519) | −0.35 (1.00) | −0.77 (0.86) | 0.37 (0.95) | −2.24 (1.74) | −1.71 (1.30) | −0.47 (1.39) |
| High selectivity (N = 711) | −0.35 (0.68) | −1.05* (0.62) | −0.70 (0.70) | −1.76 (1.22) | −1.45* (0.84) | −0.54 (0.90) |
| College covariates | No | Yes | Yes | No | Yes | Yes |
| College-specific linear trends | No | No | Yes | No | No | Yes |
Notes: Covariates include graduation rate, default rate, tuition and fees, average grant aid per student, admission rate, percent of students sending Free Application for Federal Student Aid to 2+ colleges, total enrollment, percent Pell recipient enrollment, percent underrepresented minority student enrollment (black, Latino, and Native American), and percent female enrollment. Columns 4-6 come from models that include inverse propensity score weights. High Pell colleges defined as those at which >38.4 percent receive Pell grants (sample mean), high minority colleges defined as those at which >20.3 percent are underrepresented minorities (sample mean), and high selectivity colleges defined as those admitting <65.6 percent of applicants (sample mean).
*p < 0.10; **p < 0.05; ***p < 0.01.
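The notes state only that columns 4 through 6 apply inverse propensity score weights; the exact construction is not shown there. One common approach, sketched here purely as an illustration (the paper may use a different estimand or functional form), estimates each college's probability of adopting the shopping sheet from observed characteristics and reweights non-adopting colleges so they resemble adopters:

\[
\hat{p}_i = \widehat{\Pr}(\text{Adopt}_i = 1 \mid \mathbf{X}_i), \qquad
w_i =
\begin{cases}
1 & \text{if college } i \text{ adopted the shopping sheet,} \\[4pt]
\dfrac{\hat{p}_i}{1-\hat{p}_i} & \text{otherwise.}
\end{cases}
\]

Weights of this form target the effect of adoption on adopting colleges; other schemes, such as weights of \(1/\hat{p}_i\) and \(1/(1-\hat{p}_i)\) for an average effect across all colleges, are also used in practice.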
| Sample | (1) Unweighted | (2) Unweighted | (3) Unweighted | (4) Weighted | (5) Weighted | (6) Weighted |
|---|---|---|---|---|---|---|
| Full sample (N = 1,731) | 50.57 (37.71) | 39.80 (36.58) | −84.34 (52.39) | 17.58 (42.71) | 19.18 (40.08) | −74.72 (53.44) |
| High graduation rate colleges (N = 713) | 73.98 (50.54) | 48.70 (50.47) | −22.26 (82.54) | 116.71* (62.35) | 92.79 (56.69) | 6.73 (82.30) |
| Medium graduation rate colleges (N = 733) | 149.10*** (53.93) | 142.49*** (53.82) | −44.06 (69.46) | 20.16 (59.17) | 35.60 (56.66) | −98.34 (73.59) |
| Low graduation rate colleges (N = 578) | −87.12 (89.87) | −92.09 (84.68) | −198.41 (124.37) | −73.92 (86.07) | −56.55 (83.16) | −94.65 (106.73) |
| Low default rate colleges (N = 1,385) | −92.83 (107.99) | −82.36 (119.52) | 9.67 (166.14) | −22.12 (105.45) | −12.29 (109.04) | 7.23 (136.42) |
| High default rate colleges (N = 311) | 79.21** (39.37) | 61.83* (37.29) | −93.22* (54.18) | 23.94 (46.68) | 22.94 (42.46) | −90.70 (57.42) |
| Low graduation & high default rate colleges (N = 235) | −173.29 (121.83) | −230.28 (141.06) | −98.38 (217.41) | −143.09 (114.63) | −125.07 (125.30) | −112.08 (179.69) |
| High Pell enrollment colleges (N = 755) | 4.50 (72.26) | 8.95 (71.46) | −227.79** (93.92) | −64.57 (81.71) | −34.75 (76.58) | −174.69* (93.08) |
| High minority student enrollment colleges (N = 519) | 15.80 (80.57) | 4.50 (82.15) | −217.42** (100.36) | −81.20 (87.33) | −80.78 (89.13) | −236.73** (107.37) |
| High selectivity (N = 711) | 70.76 (62.30) | 35.59 (60.96) | −177.80* (93.81) | 35.25 (77.86) | 8.23 (70.27) | −121.78 (104.08) |
| College covariates | No | Yes | Yes | No | Yes | Yes |
| College-specific linear trends | No | No | Yes | No | No | Yes |
Notes: Covariates include graduation rate, default rate, tuition and fees, average grant aid per student, admission rate, percent of students sending Free Application for Federal Student Aid to 2+ colleges, total enrollment, percent Pell recipient enrollment, percent underrepresented minority student enrollment (black, Latino, and Native American), and percent female enrollment. Columns 4–6 come from models that include inverse propensity score weights. High Pell colleges defined as those at which >38.4 percent receive Pell grants (sample mean), high minority colleges defined as those at which >20.3 percent are underrepresented minorities (sample mean), and high selectivity colleges defined as those admitting <65.6 percent of applicants (sample mean).
*p < 0.10; **p < 0.05; ***p < 0.01.
Results for yield rate indicate that the shopping sheet did not significantly affect the share of admitted students who enrolled. Coefficients are generally positive, even at colleges with low graduation rates and, to a lesser extent, high default rates. Colleges with low graduation rates in particular experienced a 1 to 2 percentage point increase in yield rate after shopping sheet adoption (from a sample mean of 49.4 percent in 2013), but the estimates were not consistently significant across models. The positive coefficient is counterintuitive, given that reference points introduce information about poor graduation outcomes, but could indicate that students at these colleges have fewer alternative options. Results also provide some evidence of increased yield rates at colleges with higher shares of underrepresented minority students (perhaps indicating that the students these institutions serve are particularly responsive to college information) and at more selective institutions (which may enroll students who are particularly attuned to such information). Overall, however, the results are suggestive rather than conclusive given the lack of statistical significance across models.
The shopping sheet decreased the percent of students borrowing by 1 to 2 percentage points (from a sample average of 61.9 percent in 2013), although the effect was not significant when college-specific trends were included, possibly because of reduced statistical power. The decrease was concentrated at colleges with poor outcomes (low graduation rates, above-average default rates, and both), though estimates are not significant and, in some cases, change sign once college-specific linear trends are included. These findings suggest that reference points relating to college outcomes may have been salient to students deciding whether to borrow, though they are not conclusive. The percent of students borrowing also decreased at colleges with high shares of Pell grant recipients.
The average student borrowed less at colleges with worse outcomes (low graduation rates, and both low graduation and above-average default rates), though the estimates are not statistically significant. At colleges with high levels of Pell grant and underrepresented minority student enrollment, models accounting for college-specific trends indicate that the average borrower took out around $200 less after the introduction of the shopping sheet. In weighted models, results indicate a $175 decrease at colleges with high levels of Pell grant enrollment (from a sample average of $6,182 in 2013) and a $240 decrease at colleges with high shares of underrepresented minority students (from a sample average of $6,272 in 2013). Students at selective colleges borrowed around $175 less after shopping sheet adoption, perhaps indicating that these students are particularly attuned to information, but the estimates were not significant in weighted models.
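As a rough sense of scale (a back-of-envelope calculation using the sample means reported above, not an estimate from the paper),

\[
\frac{175}{6{,}182} \approx 2.8\% \qquad \text{and} \qquad \frac{240}{6{,}272} \approx 3.8\%,
\]

so the estimated declines at high Pell and high underrepresented minority enrollment colleges correspond to roughly a 3 to 4 percent reduction in the average amount borrowed.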
Results for the average amount borrowed could be driven by a change in the share of students borrowing, at least at high Pell enrollment colleges, where the share of students borrowing decreased. Student-level analyses would help clarify whether the change in the amount borrowed results directly from the shopping sheet or from a change in the composition of students who borrow. Nonetheless, the findings presented here provide insight into overall enrollment and borrowing results at four-year colleges, within subgroups of colleges with relatively better (and worse) outcomes, and at colleges that serve student populations who may face greater informational barriers (federal grant aid recipients and underrepresented minority students) or who may be more attuned to college information (students at selective colleges).
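The composition concern can be stated more precisely; the following is an interpretive aid rather than an analysis from the paper. The average amount borrowed per borrower is a conditional mean,

\[
\bar{b}_{it} = \mathbb{E}\left[\,\text{loan}_{j} \mid \text{borrows}_{j}=1,\ \text{college } i,\ \text{year } t\,\right],
\]

so if the shopping sheet discourages borrowing primarily among students who would otherwise have taken out unusually small (or large) loans, this conditional mean shifts even when no continuing borrower changes the amount borrowed, which is why student-level data are needed to separate the compositional channel from changes in individual behavior.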
5. Discussion
This study uses two identification strategies to investigate the impact of a policy effort aimed at helping students make informed college decisions by simplifying and standardizing financial aid offers. Experimental and quasi-experimental results indicate that the shopping sheet had little impact on enrollment and borrowing at the average four-year college. However, the reference points in the shopping sheet relating a college's graduation and loan default rates to other colleges had some influence on borrowing. In particular, quasi-experimental results provide suggestive evidence that the share of students borrowing and the average amount borrowed decreased at colleges with poor outcomes, though findings were sensitive to model specification. Experimental results demonstrate that the shopping sheet decreased borrowing among admitted underrepresented minority students by around $1,300, concentrated in reductions in unsubsidized loans. This finding did not appear among already-enrolled students, perhaps because students making borrowing decisions for the first time are more sensitive to information than peers who have already made borrowing decisions. Quasi-experimental evidence indicates that students borrowed around $200 less at colleges enrolling larger shares of federal aid recipients and underrepresented minority students. Together, these findings indicate that simplification efforts may be particularly important for students who face greater informational barriers.
Several limitations must be considered before turning to the broader implications of this work. Importantly, this study assumes that the shopping sheet is an improvement over existing college aid offers. Although the CFPB and ED incorporated feedback into the shopping sheet, a study from the National Association of Student Financial Aid Administrators found that the shopping sheet and other formats still prompted questions from students, indicating it may not have done enough to simplify information (NASFAA 2013). The shopping sheet, however, is sent to students along with traditional offers rather than replacing information colleges already provide.
Relatedly, treatment in the quasi-experiment is difficult to define because colleges typically adopted the shopping sheet as a cover page or supplement to information they already provide. Offers vary across colleges, which means the treatment differed at each college. As a result, predicted effects on enrollment and borrowing depend in part on how different a college's existing offer is from the shopping sheet. At some colleges, the differences may be greater, so the shopping sheet may convey more new information or do more to clarify costs and loan options.
One question this research raises is whether the shopping sheet, currently used by more than 3,000 postsecondary institutions (USDE 2016) and slated to become mandatory if recently introduced legislation requiring a standardized format passes (Understanding the True Cost of College Act 2019, Senate Bill 888), could represent one strategy to reduce informational barriers for low-income and underrepresented minority students. Students who leave college with debt, even small amounts, but do not complete a degree are at particular risk of defaulting on loans (Dynarski 2015). Borrowing levels and default rates are particularly high among African American students: even among those who earn a college degree, one quarter default on their loans (Miller 2017). This study provides experimental evidence in a local context and quasi-experimental evidence in a national context indicating that the shopping sheet may represent one strategy to decrease borrowing among student populations who face greater informational barriers and who may be at particular risk of default. Though additional studies should examine long-term outcomes, such as graduation and repayment, this study provides evidence of heterogeneous effects of a policy effort aimed at helping students make informed decisions.
Although this study demonstrates that how information is presented and to whom it is provided can influence borrowing, one potential explanation for the limited overall impact is that the shopping sheet arrives too late to alter decisions in meaningful ways. Students receive aid offers in the late spring prior to enrollment. This leaves little time to change enrollment and borrowing plans: students may have missed deadlines elsewhere, have little time to restart the college search, and be able to do little to adjust savings or work in order to borrow less. Most students attend colleges within about an hour of their home, and information about college quality is unlikely to change enrollment decisions for students who have only one or two nearby options (see Hillman 2016 for a detailed discussion of geographic constraints on college enrollment). Geographic barriers likely contribute to previous findings that the shopping sheet had no impact on enrollment at community colleges, where the majority of students come from the local area (Rosinger 2017). Nonetheless, student-level analyses of choice sets and local options could provide new insight into how information about college outcomes influences decisions.
This study contributes to growing policy and research interest in reducing barriers and simplifying the college-going process (e.g., ACSFA 2005; Dynarski and Scott-Clayton 2006; Long 2010; Bettinger et al. 2012; Castleman and Page 2015, 2016). Efforts to reduce barriers and simplify college-going have resulted in a number of consumer information tools designed to help students evaluate options (White House 2016). Findings from this study carry implications for how related efforts can influence different stages of the college-going process. For instance, the College Scorecard, a Web site that the Obama administration revamped in 2015, allows students to compare colleges on many of the same metrics as the shopping sheet but is intended to inform the college search process and potentially expand students’ choices to include colleges with better outcomes (College Scorecard 2016). Research indicates that the number of students sending SAT scores to a college increased with the earnings of its graduates, but the effect was concentrated among relatively advantaged students (Hurwitz and Smith 2016).
More broadly, recent policy efforts within the federal government have focused on providing salient and timely information to help consumers evaluate complex choices. Consistent with behavioral interventions in other contexts, such as household energy use, automobile purchases, mortgage rates, and tipping (e.g., Tversky and Simonson 1993; Grynbaum 2009; Allcott 2011; Sunstein 2014; White House Social and Behavioral Sciences Team 2015), this study indicates that context matters. Reference points that place college outcomes in context may prove particularly salient for borrowing decisions. This study also demonstrates that informational interventions may reduce borrowing among underrepresented minority students and at colleges serving larger shares of federal grant aid recipients and underrepresented minority students.
Notes
1. Executive Order No. 13607 (2012) requires institutions complying with the Veterans Administration Principles of Excellence to use a standardized offer for students eligible for federal military and veteran educational benefits. Of the initial institutions that adopted the shopping sheet, almost 600 provided it to all students and 120 provided it only to students receiving federal military and veteran educational benefits.
2. The shopping sheet also did not affect the academic composition of incoming students, as measured by ACT scores.
3. Power analyses were conducted using PowerUp (Maynard and Dong 2013), and randomization was done separately for admitted and enrolled students using Stata's random number generator with a uniform distribution to ensure each student had an equal probability of selection into treatment (a sketch of this type of assignment appears after these notes).
4. Year is defined as the end of the academic year (e.g., 2013 data come from the 2012–13 academic year).
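For concreteness, the randomization described in note 3 can be illustrated with a short Stata sketch; the seed and variable names are hypothetical, and this is not the study's actual code.

```stata
* Illustrative sketch only: seed and variable names are hypothetical.
set seed 12345                                 // fix the random-number stream
generate double u = runiform()                 // uniform(0,1) draw for each student
* Within each sample (admitted vs. already enrolled), sort by the random draw
* and assign the first half to treatment, so every student in a sample has an
* equal probability of selection into the treatment group.
bysort sample (u): generate byte treat = (_n <= _N/2)
```

Because the uniform draws are independent, a student's position in the sorted list is random, so assignment to the treatment half is equally likely for every student within a sample.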
Acknowledgments
I am grateful to the administrators at the partner university who assisted with this research. I would also like to thank Benjamin Castleman, James Hearn, Erik Ness, Asher Rosinger, Robert Toutkoushian, and the editors and reviewers at Education Finance and Policy for their helpful comments on earlier versions of this paper. The research reported here was supported by a fellowship from the Jack Kent Cooke Foundation and funding from the Institute of Education Sciences, U.S. Department of Education, through grant R305B130013 to the University of Virginia. The opinions expressed are my own and do not represent views of the U.S. Department of Education or the Jack Kent Cooke Foundation.