Abstract

Will college students who set goals work harder and perform better? We report two field experiments that involved four thousand college students. One experiment asked treated students to set goals for performance in the course; the other asked treated students to set goals for a particular task (completing online practice exams). Task-based goals had robust positive effects on the level of task completion and marginally significant positive effects on course performance. Performance-based goals had positive but small and statistically insignificant effects on course performance. A theoretical framework that builds on present bias and loss aversion helps to interpret our results.

I. Introduction

Researchers and policymakers worry that college students exert too little effort, with consequences for their learning, their graduation prospects, and ultimately their labor market outcomes. With this in mind, attention has focused on policies and interventions that could increase student effort by introducing financial incentives, such as making student aid conditional on meeting GPA cutoffs and paying students for improved performance; however, these programs are typically expensive and often yield disappointing results (e.g., Henry, Rubenstein, & Bugler, 2004; Cornwell, Lee, & Mustard, 2005; Angrist, Lang, & Oreopoulos, 2009; Cha & Patel, 2010; Leuven, Oosterbeek, & van der Klaauw, 2010; Scott-Clayton, 2011; De Paola, Scoppa, & Nisticò, 2012; Patel & Rudd, 2012; Castleman, 2014; Cohodes & Goodman, 2014).1

In this paper, we aim to discover whether goal setting can motivate college students to work harder and achieve better outcomes. We focus on goal setting for three main reasons. First, in contrast to financial incentives, goal setting is low cost, scalable, and logistically simple. Second, students might lack self-control. In other words, although students might set out to exert their preferred level of effort, when the time comes to attend class or study, they might lack the self-control necessary to implement these plans. The educational psychology literature finds that self-control correlates positively with effort, which supports the idea that some students underinvest in effort because of low self-control (e.g., Duckworth & Seligman, 2005; Duckworth, Quinn, & Tsukayama, 2012). Third, the behavioral economics literature suggests that agents who lack self-control can use commitment devices such as restricted-access savings accounts to self-regulate their behavior (e.g., Wertenbroch, 1998; Ariely & Wertenbroch, 2002; Thaler & Benartzi, 2004; Ashraf et al., 2006; DellaVigna & Malmendier, 2006; Augenblick, Niederle, & Sprenger, 2015; Kaur, Kremer, & Mullainathan, 2015; Patterson, 2016).2 Goal setting might act as an effective internal commitment device that allows students who lack self-control to increase their effort.3

We gather large-scale experimental evidence from the field to investigate the causal effects of goal setting among college students. We study goals that are set by students themselves, as opposed to goals set by another party (such as a counselor or professor), because self-set goals can be personalized to each student's degree of self-control. We study two types of goals: self-set goals that relate to performance in a course (performance-based goals) and self-set goals that relate to a particular study task (task-based goals). The design of our goal interventions builds on prior work. Our performance-based goals can be viewed as a variant of the performance-based incentives discussed above, with the financial incentives removed and with self-set goals added in their place. Our task-based goals build on recent research by Allan and Fryer (2011) and Fryer (2011) that suggests that financial incentives at the K–12 level work well when they are tied to task completion (e.g., reading a book).

In considering both task-based goals and performance-based goals, our aim is not to test which is more effective. Instead, we aim to understand separately the impacts of two goal-setting technologies that could easily be incorporated into the college setting. To do this, we ran two separate experiments, each with its own within-cohort treatment-control comparison. By learning whether each intervention is effective in its own right, we can provide policy makers and educators who are considering introducing a particular form of goal setting with valuable information about the likely impact of the intervention.4

We administered two field experiments with almost 4,000 college students in total. The subjects were undergraduate students enrolled in an on-campus semester-long introductory course at a public university in the United States. The course was well established prior to our study and has been taught by the same professor for many years. The course is worth four credit hours, and a letter grade of C or better in the course is required to graduate with a bachelor's degree in the associated subject.

In the performance-based goals experiment, students were randomly assigned to a treatment group that was asked to set goals for their performance in the course or to a control group that was not. The performance measures for which goals were set included the overall course letter grade and scores on the midterm exams and final exam. Consistent with the prior work on performance-based incentives discussed above, we find that performance-based goals do not have a significant impact on course performance. Instead, our estimates were positive but small and statistically insignificant.

In the task-based goals experiment, students were randomly assigned to a treatment group that was asked to set goals for the number of online practice exams that they would complete in advance of each midterm exam and the final exam or to a control group that was not. We find that task-based goals are effective. Asking students to set task-based goals for the number of practice exams to complete increased the average number of practice exams that students completed by 0.102 of a standard deviation. This positive effect of task-based goals on the level of task completion is statistically significant (p=0.017) and robust. As well as increasing task completion, task-based goals also increased course performance (although the effects are on the margins of statistical significance): asking students to set task-based goals increased average total points scored in the course by 0.068 of a standard deviation (p=0.086) and increased median total points scored by 0.096 of a standard deviation (p=0.019). The obvious explanation for this increase in performance is that it stems from the greater task completion induced by setting task-based goals. If correct, this implies that the task-based goal-setting intervention directed student effort toward a productive activity (completing practice exams). More generally, our results suggest that if tasks are chosen appropriately, then task-based goals can improve educational performance as well as induce greater task-specific investments.

Interestingly, we also find that task-based goals were more effective for male students than for female students, in terms of both the impact on the number of practice exams completed and on performance in the course. Specifically, for male students, task-based goals increased the average number of practice exams completed by 0.190 of a standard deviation (p=0.006) and increased average total points scored by 0.159 of a standard deviation (p=0.013). In contrast, for female students, task-based goals increased the average number of practice exams completed by only 0.033 of a standard deviation and decreased average total points scored by 0.012 of a standard deviation (the treatment effects for women are far from being statistically significant). These gender differences in effect size are in line with prior work showing that men are more responsive to incentives for shorter-term performance (e.g., Gneezy & Rustichini, 2004; Levitt et al., 2011), and contrast with prior work showing that women are more responsive to longer-term performance incentives (e.g., Angrist et al., 2009; Angrist & Lavy, 2009).

We focus on gender because four strands of literature come together to suggest that the effect of goal setting in education might vary by gender. First, evidence from other educational environments suggests that men have less self-control than women (e.g., Duckworth & Seligman, 2005; Buechel, Mechtenberg, & Petersen, 2014; Duckworth et al., 2015). Summarizing this literature, Duckworth et al. (2015) conjecture that educational interventions aimed at improving self-control may be especially beneficial for men. Second, our theoretical framework implies that goal setting is more effective for present-biased students, while the evidence from incentivized experiments suggests that men are more present biased than women (we survey this literature in online appendix V.6). Third, evidence from the laboratory suggests that goal setting is more effective for men: in an experiment in which goals were set by the experimenter rather than by the subjects themselves, Smithers (2015) finds that goals increased the work performance of men but not that of women. Fourth, to the extent that education is a competitive environment, the large literature on gender and competition (that started with Gneezy, Niederle, & Rustichini, 2003) suggests that there might be interesting and robust gender differences in the effectiveness of interventions designed to motivate students.

We argue that our findings are consistent with a theoretical framework in which students are present biased and loss averse. This framework builds on Koch and Nafziger (2011) and implies that present-biased students will, in the absence of goals, underinvest in effort. By acting as salient reference points, self-set goals can serve as internal commitment devices that enable students to increase effort. This mechanism can rationalize the positive effects of task-based goal setting (although we do not rule out all other possible mechanisms).5 We use the framework to suggest three key reasons why performance-based goals might not be very effective in the setting that we studied: performance is realized in the future, performance is uncertain, and students might be overconfident about how effort translates into performance. Consistent with Allan and Fryer's (2011) explanation for why performance-based financial incentives appear ineffective, our overconfidence explanation implies that students have incorrect beliefs about the best way to increase their academic achievement.6

The primary contribution of this paper is to show that a low-cost, scalable, and logistically simple intervention using self-set goals can have a significant effect on student behavior. As discussed above, prior programs have offered financial incentives for meeting externally set (and usually longer-term) performance targets, but the results of these studies have been modest, especially given their costs and other concerns about using incentives (e.g., crowding out of intrinsic motivation; see Cameron & Pierce, 1994, and Gneezy, Meier, & Rey-Biel, 2011). We provide experimental evidence that task-based goal setting can increase the effort and performance of college students. We also show that performance-based goals have small and statistically insignificant effects on performance, although any direct comparison of our two interventions should be interpreted with some caution.7

Our study represents a substantial innovation on existing experimental evaluations of the effects of goal setting on the effort and performance of college students. In particular, while a handful of papers in psychology use experiments to study the effects of self-set goals among college students (Morgan, 1987; Latham & Brown, 2006; Morisano et al., 2010; Chase et al., 2013), these differ from our analysis in three important respects. First, they rely on much smaller samples. Second, they have not explored the impact of performance-based goals on performance or the impact of task-based goals on performance.8 Third, they have not studied the effect of task-based goals on task completion and therefore have not investigated the mechanism behind any performance effects of task-based goal setting.9

Numerous studies in educational psychology report noncausal correlational evidence suggesting that performance-based goal setting has strong, positive effects on performance (e.g., Zimmerman & Bandura, 1994; Schutz & Lanehart, 1994; Harackiewicz et al., 1997; Elliot & McGregor, 2001; Barron & Harackiewicz, 2003; Linnenbrink-Garcia, Tyson, & Patall, 2008; Darnon et al., 2009). Another contribution of our paper is to cast doubt on this correlational evidence using our experimental finding that performance-based goals have small and statistically insignificant effects on performance. The obvious explanation for the discrepancy between previous correlational estimates and our experimental estimate is that the correlational estimates do not identify the relevant causal effect. We use our sample to explore this possibility. In line with previous correlational studies, in our experiment students who set ambitious performance-based goals performed better: conditional on student characteristics, the correlation in our sample between course performance (measured by the total number of points scored out of 100) and the level of the goal is 0.203 (p=0.000) for students who set performance-based goals. The difference between the strong, positive correlation based on nonexperimental variation in our sample and the small and statistically insignificant causal effects that we estimate suggests that correlational analysis gives a misleading impression of the effectiveness of performance-based goals.10

Our analysis breaks new ground in understanding the impacts of goal setting among college students. In particular, our experimental findings suggest that for these students, task-based goals could be an effective method of mitigating self-control problems. We emphasize that our task-based goal intervention was successful because it directed students toward a productive task. When applying our insights, teachers should attempt to pair goal setting with tasks that they think are productive, while policymakers should publicize new knowledge about which tasks work well with goals.

As we explain in the conclusion of this paper, our findings have important implications for educational practice and future research. Many colleges already offer a range of academic advising programs, including mentors, study centers, and workshops. These programs often recommend goal setting, but only as one of several strategies that students might adopt to foster academic success. Our findings suggest that academic advising programs could give greater prominence to goal setting and that students could be encouraged to set task-based goals for activities that are important for educational success. Our findings also suggest that individual courses could be designed to give students opportunities to set task-based goals. In courses with some online components (including fully online courses), it would be especially easy to incorporate task-based goal setting into the technology used to deliver course content; in traditional classroom settings, students might be encouraged to set task-based goals in consultation with instructors, who are well placed to select productive tasks. In conjunction with our experimental findings, these possibilities demonstrate that task-based goal setting is a scalable and logistically simple intervention that could help to improve college outcomes at low cost. This is a promising insight, and we argue in the conclusion that it ought to spur further research into the effects of task-based goal setting in other college contexts (e.g., two-year colleges) and for other tasks (e.g., attending lectures or contributing to online discussions).

The paper proceeds as follows. In section II, we describe our field experiments; in section III, we present our experimental results; in section IV, we interpret our results using a theoretical framework that is inspired by present bias and loss aversion; and in section V, we conclude by discussing the implications of our findings.

II. Experimental Design and Descriptive Statistics

A. Description of the Sample

We ran our field experiments at a large, public, land-grant university in the United States.11 Our subjects were undergraduate students enrolled in a large on-campus semester-long introductory course. The course is a mainstream Principles of Microeconomics course that follows a conventional curriculum and assesses student performance in a standard way using quizzes, midterms, and a final (see section IIB). The course was well established prior to our study and has been taught by the same experienced professor for many years. The course is worth four credit hours, and a letter grade of C or better in the course is required to graduate with a bachelor's degree in the associated subject. Since this is a large course, the live lectures are recorded and placed on the Internet; all students have the choice of watching the lectures as they are delivered live, but many choose to watch online. There are no sections for this course.

At least two features of this course reduce the likelihood of spillovers from the treatment group to the control group. First, this is an introductory course in which most of the students are freshmen, and therefore social networks are not yet well established. Second, the absence of sections or organized study groups and the fact that many students choose to watch the lectures online reduce the likelihood of in-class spillovers. Of course, these course features might also shape the effects of goal setting.12

As described in section IIB, we sought consent from all our subjects (the consent rate was 98%). Approximately 4,000 students participated in total. We employed a between-subjects design: each student was randomized into the treatment group or the control group immediately on giving consent.13 Students in the treatment group were asked to set goals, while students in the control group were not asked to set any goals. As described in section IIC, in the fall 2013 and spring 2014 semesters, we studied the effects of performance-based goals on student performance in the course (this was the performance-based goals experiment). As described in section IID, in the fall 2014 and spring 2015 semesters, we studied the effects of task-based goals on task completion and course performance (this was the task-based goals experiment).14

Table A.1 in online appendix I provides statistics about participant numbers and treatment rates and describes the sample. We have information about participant demographics from the university's registrar data, including gender, age, and race. Tables A.2, A.3, and A.4 in online appendix I summarize the characteristics of our participants and provide evidence that our sample is balanced.15

B. Course Structure

In all semesters, a student's letter grade for the course was based on the student's total points score out of 100. The relationship between total points scored and letter grades was fixed throughout our experiments and is shown in the grade key at the bottom of figure A.1 in online appendix II. The grade key was provided to all students at the start of the course (via the course syllabus), and students were also reminded of the grade key each time they checked their personalized online grade card (described below).

Points were available for performance in two midterm exams, a final exam, and a number of online quizzes. Points were also available for taking an online syllabus quiz and a number of online surveys. For the fall 2013 semester, figure A.2 in online appendix II gives a time line of the exams, quizzes, and surveys and the number of points available for each. As described in sections IIC and IID, the course structure in other semesters was similar.

Each student had access to a private personalized online grade card that tracked the student's performance through the course and was available to view at all times. After every exam, quiz, or survey, the students received an email telling them that their grade card had been updated to include the credit that they had earned from that exam, quiz, or survey. The grade cards also included links to answer keys for the online quizzes. Figure A.1 in online appendix II shows an example grade card for a student in the control group in the fall 2013 semester.

In all semesters, students had the opportunity to complete practice exams that included question-by-question feedback. The opportunity to take practice exams was highlighted on the first page of the course syllabus. In the fall 2013 and spring 2014 semesters, the students downloaded the practice exams from the course website, and the downloads included answer keys.16 In the fall 2014 and spring 2015 semesters, the students completed the practice exams online, and the correct answer was shown to the student after attempting each question. As described in section IID, the students received emails reminding them about the practice exams in the fall 2014 and spring 2015 semesters.

We sought consent from all of our subjects using an online consent form. The consent form appeared immediately after students completed the online syllabus quiz and immediately before the online start-of-course survey. Figure A.3 in online appendix II provides the text of the consent form.

C. Performance-Based Goals Experiment

In the fall 2013 and spring 2014 semesters, we studied the effects of performance-based goals on student performance in the course. In the fall 2013 semester, treated students were asked to set a goal for their letter grade in the course. As outlined in figure A.2 in online appendix II, treated students were asked to set their goal during the start-of-course survey that all students were invited to take.17 In the spring 2014 semester, treated students were asked to set goals for their scores in the two midterm exams and the final exam. As outlined in figure A.4 in online appendix II, the treated students were asked to set a goal for their score in a particular exam as part of a midcourse survey that all students were invited to take.18

Figures A.5 and A.6 in online appendix II provide the text of the goal-setting questions. In each case, the treated students were told that their goal would be private and that “each time you get your quiz, midterm and final scores back, your gradecard will remind you of your goal.” Figures A.7 and A.8 illustrate how the goal reminders were communicated to the treated students on the online grade cards. The grade cards, described in section IIB, were a popular part of the course: the median number of times students viewed their grade card during the fall 2013 and spring 2014 semesters was 23. In spring 2014, when the midcourse survey before a particular exam closed, the students received an email telling them that their online grade card had been updated to include the credit that they had earned from completing the midcourse survey; opening the grade card provided a preexam reminder of the treated student's goal for his or her score in the forthcoming exam.

D. Task-Based Goals Experiment

In the fall 2014 and spring 2015 semesters, we studied the effects of task-based goals on task completion and course performance. Specifically, we studied the effects of goals for the number of practice exams to complete on two outcomes: the number of practice exams that students completed (which we call the level of task completion) and the students' performance in the course. The experimental design was identical across the fall 2014 and spring 2015 semesters.

The course structure in the fall 2014 and spring 2015 semesters was the same as that outlined in figure A.4 in online appendix II for the spring 2014 semester, except that before each of the two midterm exams and the final exam, instead of setting performance-based goals, the treated students were asked to set a goal for the number of practice exams to complete out of a maximum of five before that particular exam (recall from section IIB that students had the opportunity to complete practice exams in all four semesters). The treated students were asked to set the goal as part of a midcourse survey that all students were invited to take. Both the treated and control students had the opportunity to complete up to five practice exams online before each exam. The opportunity to take the online practice exams was communicated to the treated and control students in the course syllabus, in the midcourse surveys (see figure A.9 in online appendix II) and in reminder emails before each exam (see figure A.10). Figures A.11 and A.12 show the practice exam instructions and feedback screens.19

Figure A.9 in online appendix II provides the text of the goal-setting question. The treated students were told that their goal would be private and that “when you take the practice exams you will be reminded of your goal.” Figures A.11 and A.12 illustrate how the goal reminders were communicated to the treated students when attempting the practice exams. The treated students also received a reminder of their goal in the reminder email about the practice exams that all students received (see figure A.10). Reminders were not provided on grade cards.

E. Descriptive Statistics on Goals

Table 1 presents some descriptive statistics on the goals that the treated students set and the extent to which they achieved them. Looking at the first row of panel A, we see that the vast majority of treated students chose to set at least one goal, irrespective of whether the goal was performance based or task based. In the second row of panel A, we see that on average, students in the performance-based goals experiment set performance goals of about 90% (as explained in the notes to table 1, all performance goals have been converted to percentages of the maximal performance), while on average, students in the task-based goals experiment set task goals of about four out of five practice exams. The third row of panel A tells us that these goals were generally a little ambitious: achievement lagged somewhat behind the goals that the students chose to set. Given that the goals were a little ambitious, many students failed to achieve them: the fourth row of panel A shows that each performance-based goal was reached by about one-quarter of students, while each task-based goal was reached by about half of the students.20 Panels B and C show that the same patterns hold for both male and female students. We further note that for students who set a goal related to the first midterm exam and a goal related to the final exam, performance-based goals decreased over the semester by an average of 1.56 percentage points, while task-based goals increased over the semester by an average of 0.60 practice exams; these trends did not vary substantially by gender.

Table 1.
Descriptive Statistics on Goals for Students in the Treatment Group
A. All Students in the Treatment Group
                                     Performance-Based Goals   Task-Based Goals
Fraction who set at least one goal   0.99                      0.98
Mean goal                            89.50                     4.05
Mean achievement                     78.40                     3.14
Fraction of goals achieved           0.24                      0.53
B. Male Students in the Treatment Group
                                     Performance-Based Goals   Task-Based Goals
Fraction who set at least one goal   0.99                      0.97
Mean goal                            90.35                     4.03
Mean achievement                     79.50                     3.03
Fraction of goals achieved           0.25                      0.50
C. Female Students in the Treatment Group
                                     Performance-Based Goals   Task-Based Goals
Fraction who set at least one goal   0.99                      0.99
Mean goal                            88.68                     4.07
Mean achievement                     77.34                     3.23
Fraction of goals achieved           0.24                      0.55

The fraction who set at least one goal is defined as the fraction of students in the treatment group who set at least one goal during the semester. A student is considered to have set a goal for her letter grade in the course if she chose a goal better than an E (an E can be obtained with a total points score of 0). Other types of goal are numerical, and a student is considered to have set such a goal if she chose a goal strictly above 0. The mean goal, mean achievement, and fraction of goals achieved are computed only for the students who set at least one goal. The mean goal is calculated by averaging over the goals set by each student (that is, one, two, or three goals) and then averaging over students. Goals for the letter grade in the course are converted to scores out of 100 using the lower grade thresholds on the grade key, and goals for scores in the midterms and final exam are rescaled to scores out of 100. Mean achievement is calculated by averaging within students over the outcome that is the object of each set goal and then averaging over students (outcomes that correspond to performance-based goals are converted to scores out of 100 as described previously for the performance-based goals themselves). The fraction of goals achieved is calculated by averaging within students over indicators for the student achieving each set goal and then averaging over students.
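The within-student, then across-student averaging described in these notes can be sketched as follows. This is an illustrative sketch with hypothetical data; the function and variable names are our own and are not taken from the study, and we assume "achieving" a goal means meeting or exceeding it.

```python
# Sketch of the aggregation in the notes to table 1: first average within
# each student over that student's set goals, then average across students.

def summarize_goals(students):
    """Each student is a list of (goal, achievement) pairs, one per set goal.
    Returns (mean goal, mean achievement, fraction of goals achieved)."""
    goal_means, ach_means, hit_rates = [], [], []
    for pairs in students:
        if not pairs:  # students who set no goal are excluded (per the notes)
            continue
        goals = [g for g, _ in pairs]
        achs = [a for _, a in pairs]
        # Assumption: a goal counts as achieved when achievement >= goal.
        hits = [1.0 if a >= g else 0.0 for g, a in pairs]
        goal_means.append(sum(goals) / len(goals))
        ach_means.append(sum(achs) / len(achs))
        hit_rates.append(sum(hits) / len(hits))
    n = len(goal_means)
    return (sum(goal_means) / n, sum(ach_means) / n, sum(hit_rates) / n)

# Two hypothetical treated students in the task-based goals experiment:
# (goal, practice exams completed) before each of up to three exams.
students = [
    [(4, 5), (4, 3), (5, 5)],  # achieved 2 of 3 goals
    [(3, 1), (5, 2)],          # achieved 0 of 2 goals
]
mean_goal, mean_ach, frac_hit = summarize_goals(students)
```

Averaging within students first means that a student who set three goals counts no more than a student who set two, matching the description in the notes.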

III. Experimental Results

We now describe the results of our experiments. In section IIIA, we present the effects on task completion. In section IIIB, we turn to the effects on course performance.

A. Impact of Task-Based Goals on Task Completion

In this section, we study the impact of task-based goals on the level of task completion, defined as the number of practice exams that the student completed during the course. Recall that all students in the task-based goals experiment had an opportunity to complete up to five practice exams online before each of two midterms and the final exam, giving a maximum of 15 practice exams. As explained in section II, all students received question-by-question feedback while they completed a practice exam. To preview our results, we find that asking students to set task-based goals for the number of practice exams to complete successfully increased task completion. The positive effect of task-based goals on task completion is large, statistically significant, and robust.

We start by looking at the effects of task-based goals on the pattern of task completion. Figure 1a shows the pattern of task completion for the students in the control group, who were not asked to set goals. For example, figure 1a shows that almost all students in the control group completed at least one practice exam during the course, while around 15% of the students in the control group completed all 15 of the available practice exams. Figure 1b shows how task-based goal setting changed the pattern of task completion. In particular, figure 1b shows that the task-based goals intervention had significant effects on the bottom and the middle of the distribution of the number of practice exams completed. For example, task-based goals increased the probability that a student completed at least one practice exam by more than 2 percentage points (p=0.020) and increased the probability that a student completed eight or more practice exams by more than 6 percentage points (p=0.004).
Figure 1.

Effects of Task-Based Goals on the Pattern of Task Completion

The effects shown in panel b were estimated using OLS regressions of indicators of the student having completed at least X practice exams for X ∈ {1, …, 15} on an indicator for the student having been randomly allocated to the treatment group in the task-based goals experiment. The 95% confidence intervals are based on heteroskedasticity-consistent standard errors.

Next, we look at how task-based goals changed the average level of task completion. Table 2 reports ordinary least squares (OLS) regressions of the number of practice exams completed during the course on an indicator for the student having been randomly allocated to the treatment group in the task-based goals experiment. To give a sense of the magnitude of the effects, the second row reports the effect size as a proportion of the standard deviation of the number of practice exams completed in the control group in the task-based goals experiment, while the third row reports the average number of practice exams completed in the same control group. The regression in the second column controls for age, gender, race, SAT score, high school GPA, advanced placement credit, fall semester, and first login time, including linear terms, squares, and interactions of these variables (see the notes to table 2 for further details on the controls).

Table 2.
Effects of Task-Based Goals on the Average Level of Task Completion
All Students in the Task-Based Goals Experiment 
 Number of Practice Exams Completed 
 OLS OLS 
Effect of asking students to set task-based goals 0.479** 0.491** 
 (0.208) (0.205) 
 [0.022] [0.017] 
Effect / (SD in control group) 0.100 0.102 
Mean of dependent variable in control group 8.627 8.627 
Controls for student characteristics No Yes 
Observations 2,004 2,004 

Both columns report OLS regressions of the number of practice exams completed during the course (out of a maximum of 15) on an indicator for the student having been randomly allocated to the treatment group in the task-based goals experiment. “SD in control group” refers to the standard deviation of the dependent variable in the control group. In the first column, we do not control for student characteristics. In the second column, we control for the student characteristics defined in table A.2 in online appendix I: (a) letting Q denote the set containing indicators for the binary characteristics other than gender (race-based categories, advanced placement credit, fall semester) and Z denote the set containing the nonbinary characteristics (age, SAT score, high school GPA, first login time), we include j ∈ Q, k ∈ Z, the interactions k × l for k ∈ Z and l ∈ Z, and the interactions j × k for j ∈ Q and k ∈ Z; and (b) we include gender together with gender interacted with every control variable defined in (a). Heteroskedasticity-consistent standard errors are shown in parentheses, and two-sided p-values are shown in brackets. Significant at *10%, **5%, and ***1% (two-sided tests).
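The control-set construction in (a) and (b) can be sketched in pandas. The data frame, column names, and values below are hypothetical stand-ins for the table A.2 characteristics; only the loop structure is meant to mirror the note:

```python
# Hypothetical sketch of the control set: Q holds binary indicators other
# than gender, Z holds nonbinary characteristics. We add all pairwise
# products within Z (this includes squares), all Q-by-Z interactions, and
# then interact gender with every control built so far.
import itertools
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 8  # tiny illustrative sample
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "fall": rng.integers(0, 2, n),       # stand-in element of Q
    "ap_credit": rng.integers(0, 2, n),  # stand-in element of Q
    "age": rng.normal(19, 1, n),         # stand-in element of Z
    "hs_gpa": rng.normal(3.5, 0.3, n),   # stand-in element of Z
})
Q, Z = ["fall", "ap_credit"], ["age", "hs_gpa"]

controls = df[Q + Z].copy()
for k, l in itertools.combinations_with_replacement(Z, 2):
    controls[f"{k}_x_{l}"] = df[k] * df[l]   # Z-by-Z products, incl. squares
for j, k in itertools.product(Q, Z):
    controls[f"{j}_x_{k}"] = df[j] * df[k]   # Q-by-Z interactions
for c in list(controls.columns):
    controls[f"female_x_{c}"] = df["female"] * controls[c]
controls["female"] = df["female"]
print(controls.shape)  # (8, 23) with these stand-in sets
```

With the paper's full characteristic sets the same loops apply; only the contents of Q and Z change.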

From the results in the second column of table 2, we see that task-based goals increased the mean number of practice exams that students completed by about 0.5 of an exam (the effect has a p-value of 0.017). This corresponds to an increase in practice exam completion of about 0.1 of a standard deviation, or almost 6% relative to the average number of practice exams completed by students in the control group. From the first column, we see that these results are quantitatively similar when we omit the controls for student characteristics.

As we discussed in section I, evidence from other educational environments suggests that men have less self-control than women. This motivates splitting our analysis by gender to examine whether self-set task-based goals act as a more effective commitment device for male students than for females.21 In line with this existing evidence on gender differences in self-control, table 3 shows that the effect of task-based goals is mainly confined to male students. We focus our discussion on the second column of results, which were obtained from OLS regressions that include controls for student characteristics (the first column of results shows that our findings are robust to omitting these controls). Panel A shows that task-based goals increased the number of practice exams that male students completed by about one exam. This corresponds to an increase in practice exam completion of about 0.2 of a standard deviation, or almost 11% relative to the average number of practice exams completed by male students in the control group. This positive effect of task-based goals on the level of task completion for male students is statistically significant at the 1% level. Panel B shows that for female students, task-based goals increased the number of practice exams completed by less than 0.2 of an exam, and this effect is far from being statistically significant.

Table 3.
Gender Differences in the Effects of Task-Based Goals on Task Completion
A. Male Students in the Task-Based Goals Experiment 
 Number of Practice Exams Completed 
 OLS OLS 
Effect of asking students to set task-based goals 0.809** 0.893*** 
 (0.306) (0.300) 
 [0.016] [0.006] 
Effect / (SD in control group) 0.172 0.190 
Mean of dependent variable in control group 7.892 7.892 
Controls for student characteristics No Yes 
Observations 918 918 
B. Female Students in the Task-Based Goals Experiment 
Effect of asking students to set task-based goals 0.217 0.156 
 (0.281) (0.281) 
 [0.882] [1.000] 
Effect / (SD in control group) 0.045 0.033 
Mean of dependent variable in control group 9.239 9.239 
Controls for student characteristics No Yes 
Observations 1,086 1,086 

The regressions are the same as those reported in table 2, except that we now split the sample by gender. Heteroskedasticity-consistent standard errors are shown in parentheses, and two-sided Bonferroni-adjusted p-values are shown in brackets. The Bonferroni adjustment accounts for the multiple null hypotheses being considered, that is, 0 treatment effect for men and 0 treatment effect for women. Significant at *10%, **5%, and ***1% (two-sided tests based on the Bonferroni-adjusted p-values).
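The Bonferroni adjustment here is just a multiplication by the number of null hypotheses, capped at 1. A minimal sketch; the raw p-values below are illustrative inputs chosen to reproduce the bracketed values in the second column, not numbers taken from the study:

```python
# Bonferroni adjustment with two hypotheses (zero effect for men, zero
# effect for women): multiply each raw two-sided p-value by 2, cap at 1.
# The raw p-values here are illustrative, not taken from the study.
raw_p = {"men": 0.003, "women": 0.556}
adjusted = {g: min(1.0, 2 * p) for g, p in raw_p.items()}
print(adjusted)  # {'men': 0.006, 'women': 1.0}
```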

Interestingly, in the control group, female students completed more practice exams than males did (p=0.000), and the stronger effect for men of the task-based goals intervention (p=0.073) eliminated most of the gender gap in practice exam completion. Specifically, in the control group, women completed 17% more practice exams than men did, while in the treatment group, women completed only 7% more practice exams than men did. Although women completed more practice exams than men did in the control group, the average marginal effects reported in table A.8 in online appendix I suggest that the marginal productivity of one extra practice exam was similar for men and women, and so it appears that women were not closer to the effort frontier.22

B. Impact of Goals on Student Performance

We saw in section IIIA that task-based goal setting successfully increased the students' level of task completion. Table 4 provides evidence that asking students to set task-based goals also improved their performance in the course, while performance-based goals had only a small and statistically insignificant effect on performance.

Table 4.
Effects of Task-Based Goals and Performance-Based Goals on Student Performance
 All Students in the Task-Based Goals Experiment All Students in the Performance-Based Goals Experiment 
 Total Points Score Total Points Score 
 OLS Median OLS Median 
Effect of asking students to set task-based goals 0.742* 1.044**   
 (0.431) (0.446)   
 [0.086] [0.019]   
Effect of asking students to set performance-based goals   0.300 0.118 
   (0.398) (0.459) 
   [0.452] [0.797] 
Effect / (SD in control group) 0.068 0.096 0.028 0.011 
Mean of dependent variable in control group 83.111 83.111 83.220 83.220 
Observations 2,004 2,004 1,967 1,967 

The first and second columns report OLS and unconditional quantile (median) regressions of total points score on an indicator for the student having been randomly allocated to the treatment group in the task-based goals experiment. The third and fourth columns report OLS and unconditional quantile (median) regressions of total points score on an indicator for the student having been randomly allocated to the treatment group in the performance-based goals experiment. Total points score (out of 100) determines a student's letter grade and is our measure of performance in the course; as explained in section IIB, only the maximum of the two midterm exam scores counts toward the total points score. “SD in control group” refers to the standard deviation of the dependent variable in the control group. We control for the student characteristics defined in table A.2 in online appendix I: (a) letting Q denote the set containing indicators for the binary characteristics other than gender (race-based categories, Advanced Placement credit, fall semester) and Z denote the set containing the nonbinary characteristics (age, SAT score, high school GPA, first login time), we include j ∈ Q, k ∈ Z, the interactions k × l for k ∈ Z and l ∈ Z, and the interactions j × k for j ∈ Q and k ∈ Z; and (b) we include gender together with gender interacted with every control variable defined in (a). Heteroskedasticity-consistent standard errors are shown in parentheses, and two-sided p-values are shown in brackets. Significant at *10%, **5%, and ***1% (two-sided tests).

Our measure of performance is a student's total points score in the course (out of 100) that determines her letter grade. The first and second columns of table 4 report OLS and unconditional quantile (median) regressions of total points score on an indicator for the student having been randomly allocated to the treatment group in the task-based goals experiment.23 The third and fourth columns report OLS and unconditional quantile (median) regressions of total points score on an indicator for the student having been randomly allocated to the treatment group in the performance-based goals experiment. To give a sense of the magnitude of the effects, the third row reports the effect size as a proportion of the standard deviation of the dependent variable in the relevant control group, while the fourth row reports the average of the dependent variable in the same group. The regressions in table 4 control for age, gender, race, SAT score, high school GPA, Advanced Placement credit, fall semester, and first login time, including linear terms, squares, and interactions of these variables (see the notes to table 4 for further details on the controls). The results are quantitatively similar, but precision falls when we do not condition on student characteristics (see table A.5 in online appendix I).24

The first and second columns of table 4 report results from the task-based goals experiment: asking students to set goals for the number of practice exams to complete improved performance by a little under 0.1 of a standard deviation on average across the two specifications. The median regression gives significance at the 5% level (p=0.019), while the OLS regression gives significance at the 10% level. The tests are two-sided: using one-sided tests would give significance at the 1% level for the median regression and the 5% level for the OLS regression.

The third and fourth columns of table 4 report results from the performance-based goals experiment: the experiment shows a nonsignificant increase in performance. In more detail, asking students to set performance-based goals had positive but small and statistically insignificant effects on student performance in the course. The p-values are not close to the thresholds for statistical significance at conventional levels. Within the performance-based goals experiment, neither goals for letter grades in the course nor goals for scores in the two midterms and the final exam had a statistically significant effect on student performance.25 For both experiments, we also find that treatment effects did not vary statistically significantly across exams.26

In line with previous correlational studies (see section I), we find that students who set ambitious performance-based goals performed better. Conditional on student characteristics, the correlation in our sample between course performance (measured by total number of points scored out of 100) and the level of the goal is 0.203 (p=0.000) for students who set performance-based goals. The difference between the strong, positive correlation based on nonexperimental variation in our sample and the small and statistically insignificant causal effects that we estimate suggests that correlational analysis gives a misleading impression of the effectiveness of performance-based goals.
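A correlation "conditional on student characteristics" of this kind is a partial correlation: residualize both the goal level and the score on the controls, then correlate the residuals. A sketch on simulated data (all coefficients and variables are illustrative):

```python
# Partial correlation sketch: regress both variables on the controls,
# keep the residuals, and correlate them. Data and coefficients simulated.
import numpy as np

rng = np.random.default_rng(3)
n = 1000
controls = rng.normal(size=(n, 3))  # stand-in student characteristics
goal = controls @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)
score = 0.4 * goal + controls @ np.array([0.5, 0.2, 0.1]) + rng.normal(size=n)

X = np.column_stack([np.ones(n), controls])
def resid(y):
    return y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

r = np.corrcoef(resid(goal), resid(score))[0, 1]
print(f"partial correlation: {r:.3f}")
```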

Table 5 repeats the analysis from table 4 with the sample split by gender.27 Consistent with our finding in section IIIA that task-based goal setting increased task completion only for men, the first and second columns of table 5 show that task-based goals increased course performance for men but not women. For male students, task-based goals improved performance by over 0.15 of a standard deviation on average across the two specifications, which corresponds to an increase in performance of almost two points. The effects of task-based goal setting on the performance of male students are strongly statistically significant (p-values of 0.013 and 0.015). However, task-based goals were ineffective in raising performance for female students. On average across the two specifications, task-based goals improved the performance of female students by only 0.02 of a standard deviation, and the effect of task-based goals on the performance of female students is statistically insignificant. In the control group in the task-based goals experiment, men performed slightly better (p=0.642), and the stronger effect for men of the task-based goal intervention (p=0.028) exacerbated this performance difference (these two p-values are from OLS regressions). Thus task-based goal setting closed the gender gap in task completion (see section IIIA) but increased the gender gap in performance. The third and fourth columns of table 5 show that we continue to find statistically insignificant effects of performance-based goals on performance when we break the sample down by gender, and there is also no gender difference in the treatment effect (p=0.755).

Table 5.
Gender Differences in the Effects of Goals on Student Performance
 Men in the Task-Based Goals Experiment Men in the Performance-Based Goals Experiment 
 Total Points Score Total Points Score 
 OLS Median OLS Median 
Effect of asking students to set task-based goals 1.787** 1.714**   
 (0.657) (0.642)   
 [0.013] [0.015]   
Effect of asking students to set performance-based goals   0.430 0.576 
   (0.594) (0.618) 
   [0.937] [0.703] 
Effect / (SD in control group) 0.159 0.153 0.041 0.055 
Mean of dependent variable in control group 83.285 83.285 83.644 83.644 
Observations 918 918 933 933 
 Women in the Task-Based Goals Experiment Women in the Performance-Based Goals Experiment 
 Total Points Score Total Points Score 
 OLS Median OLS Median 
Effect of asking students to set task-based goals −0.128 0.449   
 (0.571) (0.613)   
 [1.000] [0.929]   
Effect of asking students to set performance-based goals   0.181 −0.330 
   (0.536) (0.642) 
   [1.000] [1.000] 
Effect / (SD in control group) −0.012 0.043 0.017 −0.031 
Mean of dependent variable in control group 82.966 82.966 82.864 82.864 
Observations 1,086 1,086 1,034 1,034 

The regressions are the same as those reported in table 4, except that we now split the sample by gender. Heteroskedasticity-consistent standard errors are shown in parentheses, and two-sided Bonferroni-adjusted p-values are shown in brackets. The Bonferroni adjustment accounts for the multiple null hypotheses being considered, that is, 0 treatment effect for men and 0 treatment effect for women. Significant at *10%, **5%, and ***1% (two-sided tests based on the Bonferroni-adjusted p-values).

So far, we have shown that task-based goals increased the level of task completion and improved student performance. The obvious explanation for our results is that the increase in task completion induced by task-based goal setting caused the improvement in student performance. A potential concern is that instead, task-based goals increased students' general engagement in the course. However, we think this is unlikely for two reasons. First, it is hard to understand why only men would become more engaged. Second, we find that task-based goal setting did not affect course participation.28

C. Benchmarking

In this section, we benchmark the results of our task-based goals experiment against other experiments in the economics literature. To preview the results of this benchmarking exercise, our estimates are well within the range of those produced by these other experiments. This means that while our estimates are large enough to justify low-cost and scalable interventions, they are not especially large in relation to those found in the prior literature.

First, we benchmark the effects of our task-based goals intervention on the performance of college students by comparing them to prior estimates of the effects of instructor quality, class size, and financial incentives on college grades. We find that asking students to set task-based goals increased average total points scored in the course by 0.068 of a standard deviation (p=0.086) and increased median total points scored by 0.096 of a standard deviation (p=0.019).29 Carrell and West (2010) find that a 1 SD increase in instructor quality increased GPA by 0.052 of a standard deviation (p<0.05). Bandiera, Larcinese, and Rasul (2010) find that a 1 SD increase in class size decreased test scores by 0.108 of a standard deviation (p<0.01). When benchmarking against the effects of financial incentives, we restrict attention to the studies listed in table 1B (postsecondary education) of the survey by Lavecchia et al. (2016) for which effect sizes are reported in standard deviations. Angrist et al. (2009) find that GPA-based scholarships increased first-year GPA by 0.01 of a standard deviation (p>0.10) and decreased second-year GPA by 0.02 of a standard deviation (p>0.10). Angrist et al. (2009) also find that mentoring combined with a GPA-based scholarship increased first-year GPA by 0.23 of a standard deviation (p<0.05) and second-year GPA by 0.08 of a standard deviation (p>0.10). Angrist, Oreopoulos, and Williams (2014) find that financial incentives worth up to $1,000 per semester decreased first-year GPA by 0.021 of a standard deviation (p>0.10) and increased second-year GPA by 0.107 of a standard deviation (p>0.10). De Paola et al. (2012) find that performance-based prizes of $1,000 increased exam scores by 0.19 of a standard deviation (p<0.05), while prizes of $350 increased scores by 0.16 of a standard deviation (p<0.10).

Second, we benchmark the effects of our task-based goals intervention on task completion by comparing them to prior estimates of the effects of grading policies, financial incentives, and course format on class attendance. As described above, we find that asking students to set goals for the number of practice exams to complete increased the average number of practice exams completed by 0.102 of a standard deviation (p=0.017). This effect is equivalent to an increase in practice exam completion of 5.691%. Marburger (2006) finds that providing students with credit for class attendance increased attendance by 11.475% (p<0.05). De Paola et al. (2012) find that performance-based prizes of $1,000 increased attendance by 6.145% (p>0.10), while prizes of $350 decreased attendance by 2.509% (p>0.10). Joyce et al. (2015) find that moving from a traditional lecture-based course format to a hybrid course format that combined lectures with online material increased attendance by 1.150% (p>0.10).

IV. Using a Theoretical Framework to Interpret Our Findings

A. Motivation

In this section we suggest some hypotheses for our findings in the context of a theoretical framework. Online appendix III formalizes the discussion and provides further references. Our aim is not to test theory; rather, we use the theoretical framework to guide the analysis and interpretation of our findings.

Our theoretical framework builds on Koch & Nafziger (2011) and is inspired by two key concepts in behavioral economics: present bias and loss aversion. The concept of present bias captures the idea that people lack self-control because they place a high weight on current utility (Strotz, 1956). More specifically, a present-biased discounter places more weight on current utility relative to utility n periods in the future than she does on utility at future time t relative to utility at time t+n. This implies that present-biased discounters exhibit time inconsistency, since their time preferences at different dates are not consistent with one another. Present bias has been proposed as an explanation for aspects of many behaviors such as addiction and credit card borrowing (e.g., Gruber & Kőszegi, 2001; Khwaja, Silverman, & Sloan, 2007; Fang & Silverman, 2009; Meier & Sprenger, 2010). In the context of education, a present-biased student might set out to exert her preferred level of effort, but when the time comes to attend class or review for a test, she might lack the self-control necessary to implement these plans.30

The concept of loss aversion captures the idea that people dislike falling behind a salient reference point (Kahneman & Tversky, 1979). Loss aversion has been proposed as a foundation of a number of phenomena such as the disposition effect and the role of expectations in decision making (e.g., Genesove & Mayer, 2001; Kőszegi & Rabin, 2006; Gill & Stone, 2010; Gill & Prowse, 2012). In the context of education, a loss-averse student might work particularly hard in an attempt to achieve a salient reference point (e.g., a particular grade in her course).

Together, the literature on present bias and loss aversion suggests that self-set goals might serve as an effective commitment device. Specifically, goals might act as salient reference points, helping present-biased agents to mitigate their self-control problems and so steer their effort toward its optimal level. Indeed, Koch & Nafziger (2011) developed a model of goal setting based on this idea, which we build on here; unlike us, however, they did not explore the effectiveness of different types of goals (Heath, Larrick, & Wu, 1999, proposed that goals could act as reference points, but they did not make the connection to present bias).31

B. Performance-Based Goal Setting

Theoretical framework.

We start by describing a theoretical framework that captures performance-based goal setting. In the following section, we use the framework to suggest three hypotheses for why performance-based goals might not be very effective in the context that we studied.

At period 1, the student chooses a goal for performance; we call the student at this period the student-planner. At period 2, the student chooses how much effort to exert; we call the student at this period the student-actor. At period 3, performance is realized and the student incurs any disutility from failing to achieve her goal; we call the student at this period the student-beneficiary. Performance increases linearly in effort exerted by the student-actor at period 2, and the disutility from effort is quadratic in effort. The student-beneficiary is loss averse around her goal: she suffers goal disutility that depends linearly on how far performance falls short of the goal set by the student-planner at period 1.

The student is present biased. In particular, the student exhibits quasi-hyperbolic discounting: the student discounts utility n periods in the future by a factor βδ^n.32 Under quasi-hyperbolic discounting, the student-planner discounts period 2 utility by a factor βδ and period 3 utility by a factor βδ^2, and so discounts period 3 utility by δ relative to period 2 utility. The student-actor, however, discounts period 3 utility by βδ relative to immediate period 2 utility. Since βδ < δ, the student-planner places more weight on utility from performance at period 3 relative to the cost of effort at period 2 than does the student-actor.

As a result of this present bias and in the absence of a goal, the student-planner's desired effort is higher than the effort chosen by the student-actor: that is, the student exhibits a self-control problem due to time inconsistency. To alleviate her self-control problem, the student-planner chooses to set a goal. Goals work by increasing the student-actor's marginal incentive to work in order to avoid the goal disutility that results from failing to achieve the goal. The optimal goal induces the student to work harder than she would in the absence of a goal.

Why might performance-based goals not be very effective?

This theoretical framework suggests that performance-based goals can improve course performance. However, our experimental data show that performance-based goals had a positive but small and statistically insignificant effect on student performance (table 4). In our view, the theoretical framework suggests three hypotheses for why performance-based goals might not be very effective in the context that we studied (we view these hypotheses as complementary).

Timing of goal disutility. In the theoretical framework, the student works in period 2 and experiences any goal disutility from failing to achieve her performance-based goal in period 3 (i.e., when performance is realized). This temporal distance will dampen the motivating effect of the goal. Even when the temporal distance between effort and goal disutility is modest, the timing of goal disutility dampens the effectiveness of performance-based goals because quasi-hyperbolic discounters discount the near future relative to the present by a factor β even if δ ≈ 1 over the modest temporal distance.

Overconfidence. In the theoretical framework, students understand perfectly the relationship between effort and performance. In contrast, the education literature suggests that students face considerable uncertainty about the educational production function and that this uncertainty could lead them to hold incorrect beliefs about the relationship between effort and performance (e.g., Romer, 1993; Fryer, 2013). Furthermore, the broader behavioral literature shows that people tend to be overconfident when they face uncertainty (e.g., Weinstein, 1980; Camerer & Lovallo, 1999; Park & Santos-Pinto, 2010). In light of these two strands of literature, suppose that some students are overconfident in the sense that they overestimate how effort translates into performance (and hence think that they need to do less preparation than they actually have to). For an overconfident student, actual performance with goal setting and in the absence of a goal will be a fraction of that expected by the student. As a result, this type of overconfidence reduces the impact of performance-based goal setting on performance.33

Performance uncertainty. In the theoretical framework, the student knows for sure how her effort translates into performance (i.e., the relationship between effort and performance involves no uncertainty). In practice, the relationship between effort and performance is likely to be noisy. The student could face uncertainty about her own ability or about the productivity of work effort. The student might also get unlucky: for instance, the draw of questions on the exam might be unfavorable or the student might become ill near the exam.

To introduce uncertainty about performance in a straightforward way, suppose that with known probability, performance falls to some baseline level (since we assume that this probability is known, the student is neither overconfident nor underconfident).34 The uncertainty directly reduces the student-actor's marginal incentive to exert effort, which reduces both the student's goal and her choice of effort with and without goal setting. However, this reduction in the expected value of effort is not the only effect of uncertainty: performance-based goals also become risky because when performance turns out to be low, the student fails to achieve her performance-based goal and so suffers goal disutility that increases in the goal.35 Anticipating the goal disutility suffered when performance turns out to be low, the student-planner further scales back the performance-based goal that she sets for the student-actor, which reduces the effectiveness of performance-based goal setting.36

C. Task-Based Goal Setting

Theoretical framework.

We now extend our theoretical framework to task-based goal setting. At period 1, the student-planner chooses a goal for the number of units of the task to complete. At period 2, the student-actor chooses the level of task completion, and the loss-averse student-actor suffers goal disutility that depends linearly on how far the level of task completion falls short of the goal set by the student-planner at period 1. At period 3, performance is realized. Performance increases linearly in the level of task completion, and the disutility from task completion is quadratic in the level of task completion.

The present-biased student exhibits quasi-hyperbolic discounting as described in section IVB. In the absence of a goal, the present-biased student exhibits a self-control problem due to time inconsistency: the student-actor chooses a level of task completion that is smaller than the student-planner's desired level of task completion. As a result, the student-planner chooses to set a goal to alleviate her self-control problem. The optimal goal increases the level of task completion above the level without a goal, which in turn improves course performance.
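A minimal numeric version of this task-based framework can be solved by direct grid search. The functional forms (quadratic completion cost, linear performance, linear goal disutility suffered immediately at period 2) follow the text above; the parameter values and the normalization delta = 1 are illustrative assumptions.

```python
# Grid-search sketch of the task-based goal-setting model: the period-1
# planner sets a completion goal; the period-2 actor chooses completion,
# bearing the quadratic cost and any goal shortfall disutility immediately
# while discounting period-3 performance by beta. Parameters are illustrative.

BETA = 0.7   # present bias: actor underweights future performance
A = 1.0      # marginal effect of task completion on performance
MU = 0.5     # per-unit goal disutility when completion falls short
GRID = [i / 100 for i in range(201)]  # completion and goal grid on [0, 2]

def actor_completion(goal):
    """Period-2 actor: cost and goal disutility are immediate;
    only performance is discounted by beta (delta = 1)."""
    def u(t):
        return -t**2 / 2 - MU * max(goal - t, 0.0) + BETA * A * t
    return max(GRID, key=u)

def planner_choice():
    """Period-1 planner: all terms lie in the future, so beta multiplies
    everything and cancels; cost and performance are weighed evenly."""
    def v(g):
        t = actor_completion(g)
        return -t**2 / 2 - MU * max(g - t, 0.0) + A * t
    g = max(GRID, key=v)
    return g, actor_completion(g)

t_no_goal = actor_completion(goal=0.0)  # actor under-invests without a goal
g_star, t_with_goal = planner_choice()  # optimal goal raises completion
```

Without a goal the actor completes BETA times the planner's preferred level; because the goal disutility hits immediately at period 2, a well-chosen goal closes the gap, raising completion and hence performance, which is the mechanism the text describes.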

Why were task-based goals effective?

Our experimental data show that task-based goals improved task completion and course performance (see table 2 for the effect on task completion and table 4 for the effect on course performance).37 How might we account for these findings, given our discussion of why performance-based goals might not be very effective? In our view, the most plausible answer is that with task-based goal setting, the three factors that reduce the effectiveness of performance-based goals (section IVB) are less important or do not apply at all.

Timing of goal disutility. In the case of task-based goal setting, any goal disutility from failing to achieve the task-based goal is suffered immediately when the student stops working on the task in period 2. Thus, unlike the case of performance-based goal setting discussed in section IVB, there is no temporal distance that dampens the motivating effect of the goal.

Overconfidence. As discussed in section IVB, overconfident students overestimate how effort translates into performance, which reduces the effectiveness of goal setting. Overconfidence diminishes the effectiveness of both performance-based and task-based goals. However, in the case of task-based goal setting, this effect is mitigated to the extent that the goals direct students toward a productive task such as practice exams. Plausibly, teachers have better information about which tasks are likely to be productive, and asking students to set goals for productive tasks is one way to improve the power of goal setting for overconfident students.38

Performance uncertainty. Even with uncertainty about performance, the student faces no uncertainty about the level of task completion because the student-actor controls the number of units of the task that she completes. Thus, unlike the case of performance-based goals with uncertainty, the student has no reason to scale back her task-based goal to reduce goal disutility in the event that the goal is not reached.

Why were task-based goals more effective for men than for women?

Our data show that task-based goals are more effective for men than for women. More specifically, in the control group without goal setting, men completed fewer practice exams than women (table 3), and task-based goals increased performance and the number of practice exams completed more for men than for women (tables 5 and 3, respectively). In the context of our theoretical framework, a higher degree of present bias among men can explain both of these findings, and existing empirical evidence supports the idea that men have less self-control and are more present biased than women (see online appendix V.6 for a survey of this evidence).39

Saliency of the task.

If practice exams were less salient in the performance-based goals experiment and if goals work better when students have access to salient practice exams, then the lower saliency could help to explain why task-based goals were effective while performance-based goals were not. There were some differences in the practice exams across experiments (most notably, practice exams had to be downloaded in the performance-based goals experiment, while they could be completed online in the task-based goals experiment; see the penultimate paragraph of section IIB). However, we do not think that a difference in saliency was important, for three reasons. First, in both experiments, the first page of the course syllabus highlighted the practice exams, and the syllabus quiz at the start of each semester made the syllabus itself salient. Second, analysis of the course evaluations shows that students mentioned that the practice exams were helpful at a similar rate in the two experiments.40 Third, course performance in the control groups was almost identical across the two experiments (table 4), which suggests that any difference in the saliency of the practice exams was not an important determinant of performance.

V. Conclusion

Our experimental findings suggest that task-based goal setting is an intervention that can improve college outcomes: asking students to set goals for the number of practice exams to complete increased the number of practice exams that students completed and increased course performance. We emphasize that our task-based goals worked because goal setting directed students toward a productive task. In an educational context, teachers should pair goal setting with tasks that they think are productive, while policymakers should disseminate new knowledge to teachers about which tasks work well with goals.

One of the challenges in applying our findings to other settings is that policymakers often do not know the production function for course performance. Future research that examines the effects of self-set goals for other tasks such as attending class, contributing to online discussions, or working through textbook chapters can advance our knowledge of the production function. Specifically, if task-based goal setting increases course performance only through the effects of goal setting on task-specific investments, then assignment to the goals treatment is an instrument that can be used to identify the performance effects of these investments.41

As well as measuring the effectiveness of different tasks, it would also be interesting to conduct similar goal-setting experiments in other types of colleges and educational environments. For example, our subjects (who attend a four-year college) are likely more able than two-year college students. If they also possess more self-control than these two-year college students, then goal setting might be more effective at two-year colleges.

The most direct way to incorporate task-based goals into the college environment would be for instructors to design courses that promote task-based goal setting. For example, in a course that required students to complete certain tasks online (e.g., homework assignments or class discussion), the opportunity to set goals could be built into the technology used to deliver these course components. Academic advising services could also give greater prominence to task-based goal setting and encourage students to set task-based goals in consultation with course instructors who can give advice about the tasks most likely to be productive.42 Ideally, this advice would rest on a solid base of evidence.

To summarize, we believe that our study marks an important step toward a better understanding of the role that self-set goals could play in motivating college students to work harder and perform better. Research in psychology and economics provides reason to expect that college students, like other agents in the economy, will lack self-control. Our results suggest that self-set goals can act as an effective commitment device that helps college students to self-regulate behavior and mitigate these self-control problems. Provided that students set goals for productive tasks, task-based goal setting can also improve student performance. Since task-based goal setting could easily be incorporated into the college environment, our findings have important implications for educational practice. Future research should probe the effects of task-based goal setting in other contexts and for other tasks.

Notes

1

See online appendix V.1 and the survey by Lavecchia, Liu, and Oreopoulos (2016) for more details. A recent study by Lusher (2016) evaluates the CollegeBetter.com program in which students make parimutuel bets that they will raise their GPA by the end of the term. The financial rewards and penalties that the program creates act as an external commitment device. Participating students were more likely to increase their GPA compared to students who wanted to participate but were randomly excluded; however, CollegeBetter.com did not affect average GPA.

2

See online appendix V.2 and the survey by Bryan, Karlan, and Nelson (2010) for more details.

3

A small and recent literature in economics suggests that goal setting can influence behavior in other settings (Goerg & Kube, 2012; Harding & Hsiaw, 2014; Corgnet, Gómez-Miñambres, & Hernán-Gonzalez, 2015, 2016; Choi et al., 2016); see online appendix V.3 and the survey by Goerg (2015) for more details. Although not focused on education, several psychologists argue for the motivational benefits of goals more generally (e.g., Locke, 1968; Locke et al., 1981; Mento, Steel, & Karren, 1987; Locke & Latham, 2002; and Latham & Pinder, 2005).

4

Our experiments are powered to detect plausible treatment-control differences. We did not power our experiments to test directly for differences in the effectiveness of goal setting across experiments for two reasons. First, calculating power ex ante was not realistic because we had little evidence ex ante to guide us regarding the size of such differences; second, sample size constraints (that arise from the number of students enrolled in the course) limit power to detect across-experiment differences unless those differences are very large.

5

In related theoretical work, Hsiaw (2013) studies goal setting with present bias and expectations-based reference points. In an educational context, Levitt et al. (2016) find evidence that school children exhibit both loss aversion (incentives framed as losses are more powerful) and present bias (immediate rewards are more effective).

6

In the case of task-based goals, the first two considerations no longer apply. Overconfidence diminishes the effectiveness of both performance-based and task-based goals. However, to the extent that task-based goals direct students toward productive tasks, task-based goal setting mitigates the effect of overconfidence. Plausibly, teachers have better information about which tasks are likely to be productive, and asking students to set goals for productive tasks is one way to improve the power of goal setting for overconfident students.

7

In particular, the structure of the practice exams was not exactly the same across the two experiments: practice exams had to be downloaded in the performance-based goals experiment, but could be completed online in the task-based goals experiment. However, we provide evidence that a difference in the saliency of practice exams was not important (see section IVC).

8

Morgan (1987) is the exception, but this small-scale study of task-based goal setting does not report a statistical test of the relevant treatment-control comparison. Online appendix V.4 provides more detail about this paper.

9

Using a sample of 77 college students, Schunk and Ertmer (1999) studied teacher-set instead of self-set goals: they directed students who were acquiring computer skills to think about outcomes (that the students had already been asked to achieve) as goals. Online appendix V.5 discusses the literature in psychology on goals and the learning of grade-school-aged children, which focuses on teacher-set goals.

10

For students who set task-based goals, the correlation between course performance (measured by total number of points scored out of 100) and the level of the goal is 0.391 (p=0.000), in line with correlational findings from educational psychology (e.g., Elliot & McGregor, 2001; Church, Elliot, & Gable, 2001; Hsieh, Sullivan, & Guerra, 2007).

11

The university is the top-ranked public university in a major state and is categorized as an R1 (highest research activity) institution by the Carnegie Classification of Institutions of Higher Education. The median SAT score of incoming freshmen is slightly more than 1,300. Around 6,400 full-time, first-time undergraduate freshmen students enroll on the main campus each year, of whom around 60% are female, around 50% are non-Hispanic white, around 20% are Hispanic, around 10% are Asian, and around 5% are black. Around a third receive Pell grants, and around 40% receive either a Pell grant or a subsidized Stafford Loan.

12

For example, as a referee pointed out, task-based goal setting may be particularly effective in settings that exacerbate student shirking. Intuitively, if a course is designed such that students cannot exert suboptimal effort, then there is no underinvestment problem and no demand for commitment. Because students can watch lectures online, this course may facilitate shirking. If that is the case, then our findings may be more relevant to the types of settings in which attendance is not compulsory (e.g., larger classes and online education).

13

When the subject pressed the online consent button, a computerized random draw allocated that subject to the treatment or control group with equal probability. The draws were independent across subjects.

14

We also ran a small-scale pilot in summer 2013 to test our software.

15

For each characteristic, we test the null that the difference between the mean of the characteristic in the treatment group and the control group is 0, and we then test the joint null that all of the differences equal 0. The joint test gives p-values of 0.636, 0.153, and 0.471 for, respectively, all semesters, fall 2013 and spring 2014 (the performance-based goals experiment), and fall 2014 and spring 2015 (the task-based goals experiment). See tables A.2, A.3, and A.4 for further details.

16

As a result, we have no measure of practice exam completion for the fall 2013 and spring 2014 semesters.

17

Treated students set their goal after the quiz on the syllabus. In every semester, the syllabus gave the students information about the median student's letter grade in the previous semester.

18

The students were invited to take the midcourse survey three days before the exam.

19

The students were invited to take the midcourse survey five days before the relevant exam. Practice exam reminder emails were sent three days before the exam, at which time the practice exams became active. The practice exams closed when the exam started.

20

Within the performance-based goals experiment, goals and goal achievement varied little according to whether the students set a goal for their letter grade in the course or set goals for their scores in the two midterm exams and the final exam.

21

We do not study heterogeneity by age because there is little age variation in our sample. We do not study heterogeneity by race because we are underpowered to study the effects of race: fewer than 20% of the sample are Hispanic, only around 10% are Asian, and only around 5% are black. We did not have access to any data on income.

22

The estimates of the effect on performance of completing one more practice exam presented in table A.8 leverage within-student variation in the number of practice exams completed across the two midterms and the final. Since this variation was not experimentally induced, the estimates could be influenced by omitted variable bias; however, we have no evidence that any such bias varies by gender.

23

The median results were obtained using the estimator of Firpo, Fortin, and Lemieux (2009), which delivers the effect of the treatment on the unconditional median of total points score.

24

Table A.6 in online appendix I further shows that average treatment effects do not change when we interact treatment with indicators for SAT score bins (and include SAT score bin controls).

25

For both specifications reported in the third and fourth columns of table 4 and using the 10% level criterion, we find no statistically significant effect of either type of performance-based goal, and we find no statistically significant difference between the effects of the two types of goal. For the case of OLS regressions of total points score on the treatment, the p-values for the two effects and the difference are, respectively, p=0.234, p=0.856, and p=0.386.

26

Using the 10% level criterion, the null hypothesis that there is no difference in the treatment effect on the first midterm exam, the second midterm exam, and the final exam cannot be rejected for either experiment. For the effect of task-based goals on the number of practice exams completed, the joint test gives p=0.697; for the effect of task-based goals on total points score, the joint test gives p=0.156; and for the effect of performance-based goals on total points score, the joint test gives p=0.628.

27

The regressions in table 5 control for student characteristics. The results are quantitatively similar, but precision falls when we do not condition on student characteristics (see table A.7 in online appendix I).

28

In more detail, we construct an index of course participation, which measures the proportion of course components that a student completed weighted by the importance of each component in determining the total points score in the course. We regress our index of course participation on an indicator of the student having been randomly allocated to the treatment group in the task-based goals experiment. We find that the effects of the treatment on course participation are small and far from being statistically significant. The p-values for OLS regressions of this index on the treatment are 0.668, 0.367, and 0.730 for, respectively, all students, male students, and female students.

29

Translating our effect size into GPA, asking students to set task-based goals increased average GPA by 0.062, or 0.059 of a standard deviation. As a proportion of the relevant standard deviation, the effect on average GPA is similar to the effect on average total points scored. To convert total points to grades, we used the grade key at the bottom of figure A.1 in online appendix II. To convert grades to GPA, we followed the university grading scale: 4 grade points for A; 3.67 for A-; 3.33 for B+; 3 for B; 2.67 for B-; 2.33 for C+; 2 for C; 1 for D; and 0 for E.

30

Under standard (i.e., exponential) discounting, this self-control problem disappears.

31

Related theoretical work on goal setting includes Suvorov and Van de Ven (2008), Wu, Heath, and Larrick (2008), Jain (2009), Hsiaw (2013, 2016), and Koch and Nafziger (2016).

32

Laibson (1997) was the first to apply the analytically tractable quasi-hyperbolic (or beta-delta) model of discounting to analyze the choices of present-biased time-inconsistent agents.

33

A naive student who does not understand her present bias would be overconfident about her level of effort. However, she would not understand how to use goals to overcome her lack of self-control, and so our discussion focuses on sophisticated students who understand their present bias.

34

We can think of this baseline level as the performance that the student achieves with little effort even in the absence of goal setting.

35

It is this second effect that drives the prediction that uncertainty reduces the effectiveness of performance-based goal setting. If we assumed that only the variance of performance changed, this second effect would still operate, but the formal analysis in online appendix III would become substantially more involved.

36

This scaling back of goals is not necessarily at odds with the fact that the performance-based goals that we see in the data appear ambitious. First, the goal will appear ambitious relative to average achievement because when performance turns out to be low, the student fails to achieve her goal. Second, without any scaling back, the goals might have been even higher. Third, the overconfidence that we have discussed could keep the scaled-back goal high. Fourth, we explain in online appendix III.3.4 that students likely report as their goal an aspiration that is relevant only if, when the time comes to study, the cost of effort turns out to be particularly low: the actual cost-specific goal that the student aims to hit could be much lower than this aspiration.

37

It is possible that some students in the control group (who were not invited to set goals) might already use goals as a commitment device. However, since we find that task-based goals are successful at increasing performance, we conclude that many students in the control group did not use goals or set goals that were not fully effective. We note that asking students to set goals might make the usefulness of goal setting as a commitment device more salient and thus effective. Reminding students of their goal, as we did, might also help to make them more effective.

38

Instead of improving the power of goal setting by directing overconfident students toward productive tasks, it is conceivable that task-based goals improved performance via another channel: signaling to students in the treatment group that practice exams were an effective task. But we think this is highly unlikely. First, we were careful to make the practice exams as salient as possible to the control group. Second, students in the control group in fact completed many practice exams. Third, it is hard to understand why only men would respond to the signal.

39

Two alternative explanations for the gender differences that we find seem inconsistent with our data. The first alternative explanation is based on the idea that women are closer to the effort frontier. However, we report that the marginal productivity of practice exams was similar by gender (see section IIIA). The second alternative explanation posits that because women perform worse in higher-stakes environments (Ors, Palomino, & Peyrache, 2013; Azmat, Calsamiglia, & Iriberri, 2016), the high stakes might make women care less about completing more practice exams in response to goal setting. However, if high stakes make women care less about completing more practice exams in response to goal setting, we should also expect the stakes to make women care less about completing practice exams in the control group (where practice exams are also salient); in fact, our data show that women complete more practice exams in the control group.

40

In the performance-based goals experiment, 3.2% of the 557 students who made comments mentioned that the practice exams were helpful; in the task-based goals experiment, 2.8% of 532 did so. We do not have data on practice exam downloads in the performance-based goals experiment.

41

There is already a small literature on the performance effects of attending class. For example, Dobkin, Gil, and Marion (2010) and Arulampalam, Naylor, and Smith (2012) exploit quasi-experiments to estimate the effects of attendance on college course performance.

42

CUNY's Accelerated Study in Associate Programs encourages first-year students to think about goal setting as one of many strategies that students might try (Scrivener et al., 2015).

REFERENCES

Allan
,
Bradley M.
, and
Roland G.
Fryer
, “
The Power and Pitfalls of Education Incentives
,”
Hamilton Project policy paper
(2011)
.
Angrist
,
Joshua
,
Daniel
Lang
, and
Philip
Oreopoulos
, “
Incentives and Services for College Achievement: Evidence from a Randomized Trial,
American Economic Journal: Applied Economics
1
(
2009
),
136
163
.
Angrist
,
Joshua
, and
Victor
Lavy
, “
The Effects of High Stakes High School Achievement Awards: Evidence from a Randomized Trial,
American Economic Review
99
(
2009
),
1384
1414
.
Angrist
,
Joshua
,
Philip
Oreopoulos
, and
Tyler
Williams
, “
When Opportunity Knocks, Who Answers? New Evidence on College Achievement Awards,
Journal of Human Resources
49
(
2014
),
572
610
.
Ariely
,
Dan
, and
Klaus
Wertenbroch
, “
Procrastination, Deadlines, and Performance: Self-Control by Precommitment,
Psychological Science
13
(
2002
),
219
224
.
Arulampalam
,
Wiji
,
Robin A.
Naylor
, and
Jeremy
Smith
, “
Am I Missing Something? The Effects of Absence from Class on Student Performance,
Economics of Education Review
31
(
2012
),
363
375
.
Ashraf
,
Nava
,
Dean
Karlan
, and
Wesley
Yin
, “
Tying Odysseus to the Mast: Evidence from a Commitment Savings Product in the Philippines,
Quarterly Journal of Economics
121
(
2006
),
635
672
.
Augenblick
,
Ned
,
Muriel
Niederle
, and
Charles
Sprenger
, “
Working over Time: Dynamic Inconsistency in Real Effort Tasks,
Quarterly Journal of Economics
130
(
2015
),
1067
1115
.
Azmat
,
Ghazala
,
Caterina
Calsamiglia
, and
Nagore
Iriberri
, “
Gender Differences in Response to Big Stakes,
Journal of the European Economic Association
14
(
2016
),
1372
1400
.
Bandiera
,
Oriana
,
Valentino
Larcinese
, and
Imran
Rasul
, “
Heterogeneous Class Size Effects: New Evidence from a Panel of University Students
,”
Economic Journal
,
120:549
(2010)
,
1365
1398
.
Barron
,
Kenneth E.
, and
Judith M.
Harackiewicz
, “
Revisiting the Benefits of Performance-Approach Goals in the College Classroom: Exploring the Role of Goals in Advanced College Courses,
International Journal of Educational Research
39
(
2003
),
357
374
.
Bryan
,
Gharad
,
Dean
Karlan
, and
Scott
Nelson
, “
Commitment Devices,
Annual Review of Economics
2
(
2010
),
671
698
.
Buechel
,
Berno
,
Lydia
Mechtenberg
, and
Julia
Petersen
, “
Peer Effects and Students' Self-Control
,”
Humboldt University SFB 649 discussion paper
2014-024
(
2014
).
Camerer
,
Colin
, and
Dan
Lovallo
, “
Overconfidence and Excess Entry: An Experimental Approach,
American Economic Review
89
(
1999
),
306
318
.
Cameron
,
Judy
, and
W. David
Pierce
, “
Reinforcement, Reward, and Intrinsic Motivation: A Meta-Analysis,
Review of Educational Research
64
(
1994
),
363
423
.
Carrell
,
Scott
, and
James
West
, “
Does Professor Quality Matter? Evidence from Random Assignment of Students to Professors,
Journal of Political Economy
118
(
2010
),
409
432
.
Castleman
,
Benjamin L.
, “
The Impact of Partial and Full Merit Scholarships on College Entry and Success: Evidence from the Florida Bright Futures Scholarship Program
,”
University of Virginia EdPolicyWorks working paper
(
2014
).
Cha
,
Paulette
, and
Reshma
Patel
, “
Rewarding Progress, Reducing Debt: Early Results from Ohio's Performance-Based Scholarship Demonstration for Low-Income Parents
,”
MDRC technical report
(
2010
).
Chase
,
Jared A.
,
Ramona
Houmanfar
,
Steven C.
Hayes
,
Todd A.
Ward
,
Jennifer Plumb
Vilardaga
, and
Victoria
Follette
, “
Values Are Not Just Goals: Online ACT-Based Values Training Adds to Goal Setting in Improving Undergraduate College Student Performance
,”
Journal of Contextual Behavioral Science
2
(
2013
),
79
84
.
Choi
,
James J.
,
Emily
Haisley
,
Jennifer
Kurkoski
, and
Cade
Massey
, “
Small Cues Change Savings Choices
,”
Yale University mimeograph
(
2016
).
Church
,
Marcy A.
,
Andrew J.
Elliot
, and
Shelly L.
Gable
, “
Perceptions of Classroom Environment, Achievement Goals, and Achievement Outcomes,
Journal of Educational Psychology
93
(
2001
),
43
54
.
Cohodes
,
Sarah R.
, and
Joshua S.
Goodman
, “
Merit Aid, College Quality, and College Completion: Massachusetts' Adams Scholarship as an In-Kind Subsidy,
American Economic Journal: Applied Economics
6
(
2014
),
251
285
.
Corgnet
,
Brice
,
Joaquín
Gómez-Miñambres
, and
Roberto
Hernán-Gonzalez
, “
Goal Setting and Monetary Incentives: When Large Stakes Are Not Enough,
Management Science
61
(
2015
),
2926
2944
.
Corgnet
,
Brice
,
Joaquín
Gómez-Miñambres
, and
Roberto
Hernán-Gonzalez
Goal Setting in the Principal-Agent Model: Weak Incentives for Strong Performance
,”
CEDEX discussion paper
2016-09
(
2016
).
Cornwell
,
Christopher M.
,
Kyung Hee
Lee
, and
David B.
Mustard
, “
Student Responses to Merit Scholarship Retention Rules,
Journal of Human Resources
40
(
2005
),
895
917
.
Darnon
,
Céline
,
Fabrizio
Butera
,
Gabriel
Mugny
,
Alain
Quiamzade
, and
Chris S.
Hulleman
, “
Too Complex for Me! Why Do Performance-Approach and Performance-Avoidance Goals Predict Exam Performance?
European Journal of Psychology of Education
24
(
2009
),
423
434
.
De Paola
,
Maria
,
Vincenzo
Scoppa
, and
Rosanna
Nisticò
, “
Monetary Incentives and Student Achievement in a Depressed Labor Market: Results from a Randomized Experiment,
Journal of Human Capital
6
(
2012
),
56
85
.
DellaVigna
,
Stefano
, and
Ulrike
Malmendier
, “
Paying Not to Go to the Gym,
American Economic Review
96
(
2006
),
694
719
.
Dobkin
Carlos
,
Ricard
Gil
, and
Justin
Marion
, “
Skipping Class in College and Exam Performance: Evidence from a Regression Discontinuity Classroom Experiment,
Economics of Education Review
29
(
2010
),
566
575
.
Duckworth
,
Angela
,
Patrick
Quinn
, and
Eli
Tsukayama
, “
What No Child Left Behind Leaves Behind: The Roles of IQ and Self-Control in Predicting Standardized Achievement Test Scores and Report Card Grades,
Journal of Educational Psychology
104
(
2012
),
439
451
.
Duckworth
,
Angela L.
, and
Martin E. P.
Seligman
, “
Self-Discipline Outdoes IQ in Predicting Academic Performance of Adolescents,
Psychological Science
16
(
2005
),
939
944
.
Duckworth
,
Angela L.
,
Elizabeth P.
Shulman
,
Andrew J.
Mastronarde
,
Sarah D.
Patrick
,
Jinghui
Zhang
, and
Jeremy
Druckman
, “
Will Not Want: Self-Control Rather than Motivation Explains the Female Advantage in Report Card Grades,
Learning and Individual Differences
39
(
2015
),
13
23
.
Elliot
,
Andrew J.
, and
Holly A.
McGregor
, “
A 2 × 2 Achievement Goal Framework,
Journal of Personality and Social Psychology
80
(
2001
),
501
519
.
Fang
,
Hanming
, and
Dan
Silverman
, “
Time-Inconsistency and Welfare Program Participation: Evidence from the NLSY
,”
International Economic Review
50
:
4
(
2009
),
1043
1077
.
Firpo
,
Sergio
,
Nicole M.
Fortin
, and
Thomas
Lemieux
, “
Unconditional Quantile Regressions,
Econometrica
77
(
2009
),
953
973
.
Fryer
,
Roland G.
, “
Financial Incentives and Student Achievement: Evidence from Randomized Trials
, “
Quarterly Journal of Economics
126
(2011)
,
1755
1798
.
Fryer, Roland G., "Information and Student Achievement: Evidence from a Cellular Phone Experiment," NBER working paper 19113 (2013).
Genesove, David, and Christopher Mayer, "Loss Aversion and Seller Behavior: Evidence from the Housing Market," Quarterly Journal of Economics 116 (2001), 1233–1260.
Gill, David, and Victoria Prowse, "A Structural Analysis of Disappointment Aversion in a Real Effort Competition," American Economic Review 102 (2012), 469–503.
Gill, David, and Rebecca Stone, "Fairness and Desert in Tournaments," Games and Economic Behavior 69 (2010), 346–364.
Gneezy, Uri, Stephan Meier, and Pedro Rey-Biel, "When and Why Incentives (Don't) Work to Modify Behavior," Journal of Economic Perspectives 25 (2011), 191–209.
Gneezy, Uri, Muriel Niederle, and Aldo Rustichini, "Performance in Competitive Environments: Gender Differences," Quarterly Journal of Economics 118 (2003), 1049–1074.
Gneezy, Uri, and Aldo Rustichini, "Gender and Competition at a Young Age," American Economic Review: Papers and Proceedings 94 (2004), 377–381.
Goerg, Sebastian J., "Goal Setting and Worker Motivation," IZA World of Labor 178 (2015), 1–10.
Goerg, Sebastian J., and Sebastian Kube, "Goals (Th)at Work–Goals, Monetary Incentives, and Workers' Performance," Max Planck Institute for Research on Collective Goods preprint 2012/19 (2012).
Gruber, Jonathan, and Botond Koszegi, "Is Addiction Rational? Theory and Evidence," Quarterly Journal of Economics 116 (2001), 1261–1303.
Harackiewicz, Judith M., Kenneth E. Barron, Suzanne M. Carter, Alan T. Lehto, and Andrew J. Elliot, "Predictors and Consequences of Achievement Goals in the College Classroom: Maintaining Interest and Making the Grade," Journal of Personality and Social Psychology 73 (1997), 1284–1295.
Harding, Matthew, and Alice Hsiaw, "Goal Setting and Energy Conservation," Journal of Economic Behavior and Organization 107 (2014), 209–227.
Heath, Chip, Richard P. Larrick, and George Wu, "Goals as Reference Points," Cognitive Psychology 38 (1999), 79–109.
Henry, Gary, Ross Rubenstein, and Daniel Bugler, "Is HOPE Enough? Impacts of Receiving and Losing Merit-Based Financial Aid," Educational Policy 18 (2004), 686–709.
Hsiaw, Alice, "Goal-Setting and Self-Control," Journal of Economic Theory 148 (2013), 601–626.
Hsiaw, Alice, "Goal Bracketing and Self-Control," Brandeis University mimeograph (2016).
Hsieh, Peggy, Jeremy Sullivan, and Norma Guerra, "A Closer Look at College Students: Self-Efficacy and Goal Orientation," Journal of Advanced Academics 18 (2007), 454–476.
Jain, Sanjay, "Self-Control and Optimal Goals: A Theoretical Analysis," Marketing Science 28 (2009), 1027–1045.
Joyce, Ted, Sean Crockett, David Jaeger, Onur Altindag, and Stephen O'Connell, "Does Classroom Time Matter?" Economics of Education Review 46 (2015), 64–77.
Kahneman, Daniel, and Amos Tversky, "Prospect Theory: An Analysis of Decision under Risk," Econometrica 47 (1979), 263–291.
Kaur, Supreet, Michael Kremer, and Sendhil Mullainathan, "Self-Control at Work," Journal of Political Economy 123 (2015), 1227–1277.
Khwaja, Ahmed, Dan Silverman, and Frank Sloan, "Time Preference, Time Discounting, and Smoking Decisions," Journal of Health Economics 26 (2007), 927–949.
Koch, Alexander K., and Julia Nafziger, "Self-Regulation through Goal Setting," Scandinavian Journal of Economics 113 (2011), 212–227.
Koch, Alexander K., and Julia Nafziger, "Goals and Bracketing under Mental Accounting," Journal of Economic Theory 162 (2016), 305–351.
Kőszegi, Botond, and Matthew Rabin, "A Model of Reference-Dependent Preferences," Quarterly Journal of Economics 121 (2006), 1133–1165.
Laibson, David, "Golden Eggs and Hyperbolic Discounting," Quarterly Journal of Economics 112 (1997), 443–477.
Latham, Gary P., and Travor C. Brown, "The Effect of Learning vs. Outcome Goals on Self-Efficacy, Satisfaction and Performance in an MBA Program," Applied Psychology 55 (2006), 606–623.
Latham, Gary P., and Craig C. Pinder, "Work Motivation Theory and Research at the Dawn of the Twenty-First Century," Annual Review of Psychology 56 (2005), 485–516.
Lavecchia, Adam, Heidi Liu, and Philip Oreopoulos, "Behavioral Economics of Education: Progress and Possibilities," in E. Hanushek, S. Machin, and L. Woessman, eds., Handbook of the Economics of Education, vol. 5 (Amsterdam: North-Holland, 2016), 1–74.
Leuven, Edwin, Hessel Oosterbeek, and Bas van der Klaauw, "The Effect of Financial Rewards on Students' Achievement: Evidence from a Randomized Experiment," Journal of the European Economic Association 8 (2010), 1243–1265.
Levitt, Steven D., John A. List, Susanne Neckermann, and Sally Sadoff, "The Impact of Short-Term Incentives on Student Performance," University of Chicago mimeograph (2011).
Levitt, Steven D., John A. List, Susanne Neckermann, and Sally Sadoff, "The Behavioralist Goes to School: Leveraging Behavioral Economics to Improve Educational Performance," American Economic Journal: Economic Policy 8 (2016), 183–219.
Linnenbrink-Garcia, Lisa, Diana F. Tyson, and Erika A. Patall, "When Are Achievement Goal Orientations Beneficial for Academic Achievement? A Closer Look at Main Effects and Moderating Factors," Revue Internationale de Psychologie Sociale 21:1 (2008), 19–70.
Locke, Edwin A., "Toward a Theory of Task Motivation and Incentives," Organizational Behavior and Human Performance 3 (1968), 157–189.
Locke, Edwin A., and Gary Latham, "Building a Practically Useful Theory of Goal Setting and Task Motivation: A 35-Year Odyssey," American Psychologist 57 (2002), 705–717.
Locke, Edwin A., Karyll N. Shaw, Lise M. Saari, and Gary P. Latham, "Goal Setting and Task Performance: 1969–1980," Psychological Bulletin 90 (1981), 125–152.
Lusher, Lester, "College Better: Parimutuel Betting Markets as a Commitment Device and Monetary Incentive," Natural Field Experiments 561 (2016), www.fieldexperiments.com.
Marburger, Daniel R., "Does Mandatory Attendance Improve Student Performance?" Journal of Economic Education 37 (2006), 148–155.
Meier, Stephan, and Charles Sprenger, "Present-Biased Preferences and Credit Card Borrowing," American Economic Journal: Applied Economics 2 (2010), 193–210.
Mento, Anthony J., Robert P. Steel, and Ronald J. Karren, "A Meta-Analytic Study of the Effects of Goal Setting on Task Performance: 1966–1984," Organizational Behavior and Human Decision Processes 39 (1987), 52–83.
Morgan, Mark, "Self-Monitoring and Goal Setting in Private Study," Contemporary Educational Psychology 12:1 (1987), 1–6.
Morisano, Dominique, Jacob B. Hirsh, Jordan B. Peterson, Robert O. Pihl, and Bruce M. Shore, "Setting, Elaborating, and Reflecting on Personal Goals Improves Academic Performance," Journal of Applied Psychology 95 (2010), 255–264.
Ors, Evren, Frédéric Palomino, and Eloic Peyrache, "Performance Gender Gap: Does Competition Matter?" Journal of Labor Economics 31 (2013), 443–499.
Park, Young Joon, and Luís Santos-Pinto, "Overconfidence in Tournaments: Evidence from the Field," Theory and Decision 69 (2010), 143–166.
Patel, Reshma, and Timothy Rudd, "Can Scholarships Alone Help Students Succeed? Lessons from Two New York City Community Colleges," MDRC technical report (2012).
Patterson, Richard W., "Can Behavioral Tools Improve Online Student Outcomes? Experimental Evidence from a Massive Open Online Course," US Military Academy at West Point mimeograph (2016).
Romer, David, "Do Students Go to Class? Should They?" Journal of Economic Perspectives 7 (1993), 167–174.
Schunk, Dale H., and Peggy A. Ertmer, "Self-Regulatory Processes during Computer Skill Acquisition: Goal and Self-Evaluative Influences," Journal of Educational Psychology 91 (1999), 251–260.
Schutz, Paul A., and Sonja L. Lanehart, "Long-Term Educational Goals, Subgoals, Learning Strategies Use and the Academic Performance of College Students," Learning and Individual Differences 6 (1994), 399–412.
Scott-Clayton, Judith, "On Money and Motivation: A Quasi-Experimental Analysis of Financial Incentives for College Achievement," Journal of Human Resources 46 (2011), 614–646.
Scrivener, Susan, Michael J. Weiss, Alyssa Ratledge, Timothy Rudd, Colleen Sommo, and Hannah Fresques, "Doubling Graduation Rates: Three-Year Effects of CUNY's Accelerated Study in Associate Programs (ASAP) for Developmental Education Students," MDRC technical report (2015).
Smithers, Samuel, "Goals, Motivation and Gender," Economics Letters 131 (2015), 75–77.
Strotz, Robert Henry, "Myopia and Inconsistency in Dynamic Utility Maximization," Review of Economic Studies 23 (1956), 165–180.
Suvorov, Anton, and Jeroen van de Ven, "Goal Setting as a Self-Regulation Mechanism," CEFIR NES working paper 122 (2008).
Thaler, Richard, and Shlomo Benartzi, "Save More Tomorrow™: Using Behavioral Economics to Increase Employee Saving," Journal of Political Economy 112:S1 (2004), S164–S187.
Weinstein, Neil D., "Unrealistic Optimism about Future Life Events," Journal of Personality and Social Psychology 39 (1980), 806–820.
Wertenbroch, Klaus, "Consumption Self-Control by Rationing Purchase Quantities of Virtue and Vice," Marketing Science 17 (1998), 317–337.
Wu, George, Chip Heath, and Richard Larrick, "A Prospect Theory Model of Goal Behavior," University of Chicago mimeograph (2008).
Zimmerman, Barry J., and Albert Bandura, "Impact of Self-Regulatory Influences on Writing Course Attainment," American Educational Research Journal 31 (1994), 845–862.

Author notes

Primary IRB approval was granted by Cornell University. We thank Cornell University and UC Irvine for funding this project. We thank Svetlana Beilfuss, Daniel Bonin, Debasmita Das, Linda Hou, Stanton Hudja, Tingmingke Lu, Jessica Monnet, Ben Raymond, Mason Reasner, Peter Wagner, Laurel Wheeler, and Janos Zsiros for excellent research assistance. Finally, we are grateful for the many helpful and insightful comments that we have received from seminar participants and in private conversations.

A supplemental appendix is available online at http://www.mitpressjournals.org/doi/suppl/10.1162/rest_a_00864.
