Abstract

Performance-based funding (PBF) in higher education has grown in recent years as a means of institutional accountability and an incentive for improving student success. Although many states have implemented systems of their own, research on early funding models suggests a difficult fiscal environment can introduce tension between the theory and the practice of the concept. This policy brief uses the case of Washington State's redesign of the Student Achievement Initiative to describe new implications of this tension. The revision focused on using a base reallocation as the funding source in the context of diminished state resources and on the importance of college buy-in. Regression analyses tested the alignment between the principles and the metrics for awarding funds, resulting in a funding model that maximized the share of dollars awarded on the basis of performance rather than college characteristics. Policy makers considering new or revised PBF systems can benefit from critical lessons learned from Washington State's comprehensive process and final product.

Introduction

Performance-based funding (PBF) systems have been in place since the Tennessee Higher Education Commission first adopted its policy in 1978. Interest in PBF has grown substantially in recent years because of the increased focus on student success and accountability through President Obama's Completion Agenda (Friedel et al. 2013). In contrast to the historical input-based, enrollment-driven funding model, this dedicated agenda embodies an output-based version of accountability for mission fulfillment and affordability. For state policy makers and education leaders, this paradigm shift has changed the conversation around the goals of higher education, which now include equal attention to both access and student success.

The primary objective of PBF is to incentivize a focus on student outcomes rather than enrollment inputs by attaching a portion of the state allocation to measures of performance. The concept derives from the theory that public institutions of higher education, whose ability to operate relies heavily on the state appropriation, will adapt their behavior to achieve the outcomes that best protect their funding (Harnisch 2011). The most common PBF methodology used to link financial incentives to outcomes is a performance set-aside of the state appropriation (Friedel et al. 2013). The set-aside can consist of either a “new money” allocation (funding above and beyond the base allocation) or a separated portion of the base funds (Harnisch 2011). The new money method is often viewed as an extra incentive or a bonus for colleges to increase their efforts to support student success. A historical challenge with this method is that the amount of funding dedicated to performance (typically between 1 and 15 percent) is often not substantial enough to incentivize significant change in institutional behavior (Miao 2012). The separated portion (or base reallocation) method causes a greater degree of uncertainty and instability within a college's budget, even at small funding levels, which consequently increases the chance that colleges will pay attention and take the necessary action to preserve their funding level. From this view, the base reallocation method provides the greater incentive to focus on student outcomes. Nevertheless, as the stakes increase for colleges, they expect a voice within any process that impacts their budgets and their ability to provide services to their students. Of particular concern, especially within a large state system of colleges, is the ability to create a standardized system that adequately accounts for differing college missions and student populations. Because colleges are the primary stakeholders at the intersection between funding and delivering services that best serve their local populations, the incentive-based principles that undergird PBF can quickly lead to outright rejection of the idea if colleges are not involved in developing the system.

A state's decision regarding the source of performance funding and stakeholder involvement in the process are two interrelated factors that carry weight in whether a PBF system will be successfully implemented and sustained over the course of time. Research on PBF has noted that these themes are especially paramount when circumstances change and resources become constrained, such as during the Great Recession (Dougherty and Natow 2009; Harnisch 2011; Shulock and Jenkins 2011; Jones 2012; Miao 2012; Friedel et al. 2013). Washington State represents a prime example of the tension that can ensue when the fiscal environment moves out of alignment with the design considerations for a PBF system.

As one of the longest-standing PBF systems in the nation, Washington State's Student Achievement Initiative (SAI) for the community and technical college system is cited in numerous studies and is considered “one of the most carefully designed performance accountability systems in the United States, serving as a model for the most recent wave of new performance funding reforms” (Hillman, Tandberg, and Fryar 2015, p. 2). Despite that designation, a 2012 review of SAI, conducted by a state system-level advisory committee, uncovered design flaws in the funding model that had left some colleges without an equitable ability to earn performance awards. The complication occurred because SAI was originally designed and implemented as a new money–incentive program, but the state was forced to shift to a base reallocation funding model after the fiscal crisis in 2008 resulted in budget cuts and no new money for the system. College anxiety over the loss of state funding, combined with the negative perception that the original funding principles were not being followed, posed significant challenges that threatened the long-term viability of PBF for the system. The situation bred distrust in the validity of SAI, which led some of the colleges to reject the concept in principle.

The goal of this policy brief is to draw on Washington State's experience with PBF to address the implications of the interplay between funding challenges and the involvement and support of stakeholders previously identified in the literature. These implications, most commonly described by policy experts as recommendations for successful PBF design, are studied through a deep case-study analysis of Washington State's history with PBF. The result is an enhanced understanding of some of the more technical aspects of those design recommendations, which pertain to any PBF system in higher education and consequently are offered to help guide policy makers considering revising or implementing PBF.

The remainder of this brief is organized into five sections. First I describe some of the historical challenges of PBF, specifically the distinguishing elements around funding and stakeholder support that have impacted the survival of systems over the course of time. Then I outline the history of PBF for community and technical colleges in Washington State and describe how the original design principles of SAI proved incompatible with the fiscal environment of the state during the Great Recession. Next I detail the work of the system-level advisory committee and the considerations that resulted in a new set of principles and funding model. These included a review of prior research on SAI and PBF models in other states, an analysis of the technical aspects of assessing money from the base allocation to create a performance pool, and stakeholder input on policy goals for the PBF system. I describe how the advisory committee evaluated a series of regression models to test the alignment between the principles and the recommended new funding model—a step in the design process that has not been as pronounced in the prior literature on performance funding model design. Finally, I conclude the brief with key lessons learned from Washington State's experience.

Review of Challenges for Performance-Based Funding Systems

Between 1979 and 2007, twenty-six states attempted some form of PBF; fourteen of those systems were eventually discontinued because of poor design or implementation (Miao 2012). As more states develop PBF systems, they have benefitted both from extensive research on the early systems and from the lessons learned from issues plaguing early adopters. Studies on PBF have identified commonalities among states where the system was abandoned or reconstructed, resulting in recommendations for improvement. Two overarching and interrelated components emerge: the source of the funding and the involvement of stakeholders.

The fiscal condition of a state at the time a PBF system is deployed can not only dictate whether the funding source is new money or a base reallocation, but can also influence the perceptions and behavior of the colleges and stakeholders involved. Dougherty and Natow (2009) find that, during difficult financial times, colleges focused more on preserving their base funding than on their performance funding, as they viewed the latter as less stable. Colleges whose performance was funded through the base reallocation method were frustrated by this type of model; they perceived the process not as a reallocation designed to motivate performance but rather as a cut to their appropriation—which they then had to earn back. Burke and Modarresi (2000) also identified stable funding as a key characteristic of a successful system and suggest new money be used as the funding source in order to avoid the problems associated with the volatility of the state revenue cycle. Jones (2012) acknowledges this perspective, but also cautions that, given the current fiscal outlook for most states, waiting for new money will make PBF implementation unlikely. Rather than relying solely on new money as the means of funding stability, Jones suggests a phased-in approach and stop-loss provisions to allow colleges time to adjust to the new appropriation. Harnisch (2011) adds that stable and predictable program funding is necessary to allow the incentive concept behind PBF to work, and that the system should be set up so as to be protected from budget cuts.

The second critical component is the importance of widespread input into the PBF system and the support of stakeholders. In states where PBF was discontinued, colleges were insufficiently involved in the design of the performance metrics. In addition, they viewed some aspects as not reflective of the diverse missions and goals of the different institutions (Dougherty and Natow 2009). This discrepancy can occur if state policy makers’ goals for higher education differ from the goals of the individual institutions—for example, a greater emphasis on completions versus progression for under-served populations (Miao 2012). Colleges are in the best position to understand their institutional mission in the context of the environment where the PBF system is deployed; therefore, college input from the beginning of the planning process is crucial for a sense of investment in the system. Additionally, college support helps garner backing from key constituents who have a stake in PBF at the state level (such as the legislature), thereby increasing the chances the system will be successfully implemented (Dougherty and Natow 2009).

Washington is one of the states cited by Dougherty and Natow (2009) where PBF has been implemented, discontinued, reinstated, and revised a number of times. The first attempt was implemented in 1997 as a budget proviso for the community and technical college system and was not renewed in the 2000–02 biennium. The discontinuation occurred because of a change in legislative priorities for higher education and overall push-back from the colleges on the metrics and funding design (Dougherty and Natow 2009). The next attempt at implementation came in 2006, when the State Board for Community and Technical Colleges (SBCTC) adopted a set of system strategic goals, one being to “achieve increased educational attainment for all residents across the state” (SBCTC 2006). Drawing on lessons learned from the first failed attempt, SBCTC created a Student Achievement Task Force—comprising State Board members and staff, college presidents, faculty, and college trustees—in order to gain widespread input into the process. Working with national experts,1 this group was charged with developing a way to measure and reward colleges for increasing student achievement in alignment with the Board's system goal.

Washington State's Performance-Based Funding System: The Student Achievement Initiative

As a result of the Student Achievement Task Force's work, in 2006 the SBCTC adopted the new performance funding system, called the Student Achievement Initiative (SAI). SAI is composed of points that emphasize student momentum toward college success through both building college readiness and earning college credits. In this way, SAI captures critical educational gains made by all students, from those who enter the least prepared to those who are college-ready. These gains are awarded through points for increasing basic skills,2 completing developmental coursework,3 earning 15 and 30 college-level credits, completing college-level math, and earning a credential. This progressive continuum of points recognized the system's strategic goal of increased educational attainment for all residents by not placing all of the weight on completions, but also rewarding the major milestones that students reach along the way to completion. The points were also constructed to be “mission neutral”; that is, colleges that serve a high number of underprepared students, academic transfer students, or students in professional-technical programs all have a chance to earn performance funds based on their work with their unique student populations.

Funding Model: New Money

The principles guiding the distribution of performance funds at the onset of SAI were:

  1. Each college is measured for total improvement (point gains) against its own performance.

  2. Total improvement represents a single number—not a rate.

  3. There are no targets and no ceilings on achievement gains.

  4. All gains are rewarded.

  5. Rewards are stable and predictable, so colleges can invest funds for further improvements.

  6. Reward funds are flexible.

When the initiative began, colleges were given $65,000 in start-up funds from the SBCTC, and this was permanently added to their base allocation. Beyond that initial start-up fund, the SBCTC set aside $500,000 to fund the first performance year. In keeping with the Board's recommendation, the funding pool was structured so that each college competed against itself through the single funding metric of “net point gain” (earning more points than in the prior year). A dollar amount per point was established in advance, and colleges were awarded that amount for each point they accumulated above the total points earned the prior year. This within-college comparison methodology meant that the rewards had no impact on the funding of other colleges, and because the rewards did not originate from the base, they were considered stable.
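As a rough illustration of the original new-money mechanism, the sketch below uses invented point totals and an invented dollar rate (not SBCTC figures): a preset rate is paid for each point a college earns above its own prior-year total, so a college that does not improve simply receives no award rather than a cut.

```python
# Hypothetical illustration of the original new-money award: a preset dollar rate
# is paid per point gained over the college's own prior-year total.
DOLLARS_PER_POINT = 25.0  # illustrative rate set before the performance year

prior_points = {"College A": 9_500, "College B": 4_200, "College C": 7_800}
current_points = {"College A": 9_900, "College B": 4_150, "College C": 8_050}

def new_money_award(prior: int, current: int, rate: float) -> float:
    """Award = rate x net point gain; no gain means no award, never a loss."""
    return max(current - prior, 0) * rate

for college, prior in prior_points.items():
    gain = current_points[college] - prior
    award = new_money_award(prior, current_points[college], DOLLARS_PER_POINT)
    print(f"{college}: net point gain {gain:+d}, award ${award:,.2f}")
```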

Following the first performance year, it was assumed that the colleges would continue to invest in their student success strategies and would need additional money to sustain those initiatives over time. If the investment in strategies had the positive effect of increasing student achievement each year, the SBCTC estimated an average amount of approximately $100,000 would be added to each college's base allocation per year. Therefore, in order to continue to support the initiative, the SBCTC made a legislative request for the 2009–11 biennium of $7 million in new funds.

Altered Funding Model: Base Reallocation

Because of the Great Recession and an overall reduction in state funding for the system, the legislature did not fulfill the 2009–11 request for new money. Nevertheless, the legislature created a $1 million proviso that directed existing funds to be allocated on the basis of net point gain. Without additional funding, each college's base allocation was reduced, in proportion to its share of the total base, to create the funding pool.

The pool of base funds culled from the colleges was used to award the performance funds to colleges that had increased their total points from the prior year. Because each college's base allocation was first reduced to create the funding pool, however, in some cases the net effect was a cut. The base reallocation methodology for creating the pool meant that a college could improve its performance according to the net point gain funding metric and earn a performance award, yet ultimately lose money for the year because the reduction taken for the funding pool was greater than the amount of the award.
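A minimal sketch of why the altered model could cut a college's funding even in a year it earned an award: the pool is assessed pro rata from each base allocation, but the award depends only on net point gain. All figures below are invented for illustration and do not reproduce actual SAI allocations or rates.

```python
# Hypothetical base-reallocation scenario: assess the pool pro rata from base
# allocations, then award a preset rate per point of net gain.
POOL = 1_000_000           # e.g., the size of the proviso funding the awards
DOLLARS_PER_POINT = 2_000  # illustrative award rate

base_allocation = {"College A": 56_000_000, "College B": 8_000_000, "College C": 20_000_000}
net_point_gain = {"College A": 120, "College B": 300, "College C": 0}

total_base = sum(base_allocation.values())

for college, base in base_allocation.items():
    assessment = POOL * base / total_base                        # pro rata reduction of the base
    award = max(net_point_gain[college], 0) * DOLLARS_PER_POINT  # reward for improvement only
    net = award - assessment
    print(f"{college}: assessed ${assessment:,.0f}, awarded ${award:,.0f}, net ${net:,.0f}")

# In this scenario College A earns an award for its gain but still loses money for
# the year, because its pro rata contribution to the pool exceeds the award.
```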

Loss of Stakeholder Support

The mid-course change to the funding model realized a fear that some college presidents had articulated early on, during the implementation year of SAI. In the first phase of a 2009 evaluation of the system conducted by the CCRC, colleges expressed concerns about the impact of the recession on the performance funding source and how it could lead to competition between colleges (Jenkins, Ellwein, and Boswell 2009). From the colleges’ perspective, the key difference between a PBF system funded through new money and one funded through a base reallocation is that the latter shifts the focus from funding performance as an incentive to a battle over resources, as colleges feel they have to “earn back” their base funding or lose it to higher-performing colleges. This sentiment is particularly apropos in a system of multiple colleges with one total state appropriation where, in the absence of additional funding, resources shift between colleges.

At this time, the amount of money dedicated to performance was less than one percent of the total state allocation. Throughout the 2009–11 biennium, however, the colleges’ budgets were reduced by approximately 20 percent. Consequently, any further reduction in funding was met with significant concern and anxiety, even if it represented a small amount of the overall budget for each college. Additionally, the colleges perceived the redistribution of base funds that resulted from the altered funding model as a violation of the funding principles. Some college leaders also considered it a disconnect to create a funding pool on the basis of allocation size (essentially college size), but to award the funds using a metric unrelated to college size (increased performance). Furthermore, for colleges that did not receive a performance award multiple years in a row, this methodology had the potential to result in a rather significant reallocation over time. These concerns caused college frustration and distrust in the validity of the model, which, as noted in Dougherty and Natow's study (2009), can be a major reason for a PBF system to fail. Even though a dollar amount per point was established in advance, and colleges were rewarded on the basis of improvement above their own baseline, the bifurcation between the funding principles and the technical aspect of the funding process changed the tone of the overall initiative.

A Revised Model

When SAI was developed in 2006, the State Board agreed to review the initiative after a five-year period. In 2012, an advisory committee (led by college presidents and including vice presidents, institutional researchers, and SBCTC staff) was tasked to work through the problems that the changing fiscal environment caused within the original design. The focus of the review centered on how the original funding principles and the actual implementation of the funding model were no longer in alignment because of the change in funding source.

As stated in the previous section, the performance funding pool was created by reducing each college's base budget by its pro rata share of the total amount needed to fund the pool. This system practice was considered a neutral method for applying a broad-based budget reduction to colleges with significantly varying allocation sizes (the smallest base allocation is $8 million, the largest $56 million). An unintended consequence followed: a college's ability to earn performance awards was not based on allocation size. Larger colleges contended that the model was problematic because they had the largest portion of their funds pulled for the pool but were not in any better position to achieve increased performance (points) from year to year. From the smaller colleges’ perspective, even small gains made by larger colleges were significant when compared with the gains a college with fewer students could make, which resulted in an uneven redistribution of funding.

This misalignment suggested that a weakness in the current model was that the amount of funding a college received was not based entirely on performance but was also driven by the mechanism of how the pool was created. Because it was unlikely the system would receive new money for SAI going forward, the advisory committee constructed new principles for the funding model—with the assumption that the funding source would be a base reallocation. Within that context, the committee's goal was to redesign the model in a way that better aligned the principles with the overall goal of performance funding for the system.

Proposed Funding Metrics

As part of the review process, the advisory committee studied PBF systems in other states, consulted with national experts,4 and analyzed whether a change in the overall funding methodology was warranted. The review revealed variation in methodologies across states both in how points are counted and how funds are allocated. Some use targets, some weight metrics to recognize differing missions or historically underserved students, and some choose metrics on the basis of promoting alignment with state goals (such as degree completion) (Friedel et al. 2013). The committee determined that the points that had been used to measure milestones of student progress within SAI (college readiness, college-level attainment, math achievement, and completion) were still valid, and potential changes to the funding methodology would occur within the parameters of those achievement points. As the literature on detailed funding models in other states is limited, the committee conducted its own evaluation of the strengths and weaknesses of different ways of measuring performance within the context of the SAI framework to see which metric(s) should be the basis for determining the award (table 1).

Table 1. 
Attributes of Proposed Funding Metrics
Each metric is listed with its performance construct in parentheses, followed by the attributes of the metric.

Total points (productivity)
  • Closely aligns with college size
  • Relatively predictable from year to year
  • Not related to any other measures—distinct in what it shows
  • Provides an absolute value

Total points per student (efficiency)
  • College size is not a factor
  • Every college shares in efficiency every year, but more efficient colleges earn more

Change in total points (original measure; improvement)
  • Stands alone as a measure—not correlated with any of the others or college size
  • Adjusted for enrollment decreases, but with unintended consequences; no adjustment for increases
  • In years that colleges have no gains they will have zero rewards

Change in total points per student (improvement in efficiency)
  • Correlated with other change metrics but more difficult to control because it is based on a ratio
  • Magnitude of change is small, making funding a challenge

The performance concept behind SAI's original funding metric of net point gain was improvement. Improvement was operationalized as an increase in achievement points over the prior year, with colleges measured against themselves rather than each other. The funding metric is not correlated with any other way of awarding performance or with college characteristics, such as college size or mission. Because a college's ability to generate points in any given year is highly correlated with enrollment, however, the improvement concept poses a challenge. At a high level, if enrollment declined in a subsequent year, the funding model would not capture all educational gains made by retained students (Belfield 2012). Alternatively, a college could show improvements from year to year simply by enrolling new students who achieved milestones for the first time, rather than through improved achievement of current students. Additionally, colleges that showed no improvement received no awards—which made the “no competition” factor problematic when each college contributed part of its base allocation to the pool.

The committee also considered an alternate funding metric—college share of total point accumulation within a single year. The performance concept in this view is not improvement but rather overall productivity of achievement, as measured by the milestones captured within the points. Unlike the improvement metric, this metric is closely tied to college size in that more students equal more points. Under this method, colleges would technically compete with each other for their share of the single pool of money, although, because the pool would be distributed every year, each college would receive some award regardless of whether its performance improved. Another performance concept, aside from productivity, is efficiency, operationalized as points per student. In this value, enrollment and college size are not factors—it simply measures how many points are generated per student. Combining the performance concepts of improvement and efficiency results in “change in points per student” as a funding metric. This method would remove the competition factor between colleges by measuring change within individual colleges from year to year. Colleges would have a difficult time influencing this measure, however, and, because it is a ratio, the magnitude of change would be so small that funding would prove a challenge.

Because enrollment going forward was projected to decline rather than grow, the college presidents expressed concern about building a PBF model so tightly tied to enrollment. It was also clear that the chances of receiving new money for the initiative were small, and while in principle colleges should not battle with each other over resources, competition was an inherent part of PBF. Therefore, in order to best mitigate the impacts, the new model needed to address two design elements no longer applicable in the current state—new money and no competition. The committee ultimately agreed to a multifaceted approach to the model, one that would reward performance for both productivity (total points) and efficiency (points per student) in a single academic year.
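The four candidate metrics in table 1 can be computed directly from two quantities per college and year: total achievement points and enrollment. The sketch below uses invented figures to show how each metric behaves, including how a shrinking enrollment can depress the change-based metrics even when points per student rise.

```python
# Hypothetical two-year data: (prior year, current year) points and student headcount.
colleges = {
    "College A": {"points": (9_500, 9_400), "students": (11_000, 10_200)},  # shrinking enrollment
    "College B": {"points": (4_200, 4_500), "students": (3_900, 4_300)},    # growing enrollment
}

for name, d in colleges.items():
    prior_pts, curr_pts = d["points"]
    prior_std, curr_std = d["students"]

    total_points = curr_pts                                      # productivity
    points_per_student = curr_pts / curr_std                     # efficiency
    change_in_points = curr_pts - prior_pts                      # improvement (original SAI metric)
    change_in_pps = points_per_student - prior_pts / prior_std   # improvement in efficiency

    print(f"{name}: total={total_points}, per-student={points_per_student:.3f}, "
          f"change={change_in_points:+d}, change per student={change_in_pps:+.4f}")
```

In this invented example, College A loses enrollment, improves its points per student, and yet shows a negative change in total points, which is the kind of interaction with enrollment that concerned the presidents.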

Proposed Funding Principles

With the anticipation that the PBF system would continue to use a base reallocation as the funding source, the advisory committee had to evaluate each potential new funding metric in the context of the impact of the award on the college system as a whole. This level of analysis directly addressed concerns brought forth by the colleges regarding the competition element resulting from the misalignment between the principles and the funding metrics. Consequently, the process of “assessment,” or pulling the money from each college's base allocation to create the pool, became as important as the process for awarding the performance funds. To test that alignment, the advisory committee analyzed the award that would be distributed by each funding metric in concert with the assessment, creating a net award as the outcome for analysis against the principles. The following funding principles were drafted to help guide the analysis:

  1. Student achievement is a factor in allocating funds to colleges.

  2. Colleges are rewarded for efficiency and productivity in student achievement.

  3. Funding is structured so that colleges compete against themselves for continuous improvement rather than competing with each other.

  4. New funds provide the greatest incentive. If base funds are used, the method used to create the performance fund aligns with the award method.

  5. Colleges have a fair opportunity to earn performance awards regardless of student demographics, program mix, or college characteristics.

  6. Performance funding rewards student success and becomes a resource for adopting and expanding practices leading to further success. The amount of performance funding is balanced between providing significant incentive without undermining the college's ability to impact student success.

For each proposed funding metric, the net award outcome was compared with the funding principles for alignment. The results revealed different considerations for the assessment depending on the funding metric in question. For the points per student funding metric (a methodology that is college-size neutral), a flat rate assessment is appropriate: every college contributes the same funding amount and draws its reward from its share of the system's points per student. Distributing all funds on share of points per student, however, skews the positive rewards (relative to each college's contribution) almost exclusively to small colleges. Assessments based on share of base funding or pro rata share of enrollment both align with the total points metric, which is correlated with college size—therefore, a pro rata contribution to the pool is justified. Spreading all funds based on share of total points, however, skews the positive rewards (relative to each college's contribution) almost exclusively to larger colleges. A combination of these different methods of distributing money therefore seemed a reasonable way to equalize the size differential among the colleges in the system.
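A minimal sketch of the alignment idea discussed above, with invented figures: the size-neutral points-per-student pool is assessed at a flat rate per college, the size-related total-points pool is assessed on each college's pro rata share of headcount, and the net award is the sum of a college's awards minus the sum of its assessments.

```python
# Hypothetical data and pool sizes; not actual SAI allocations.
colleges = {
    "College A": {"headcount": 18_000, "points": 9_900},
    "College B": {"headcount": 6_000,  "points": 4_150},
    "College C": {"headcount": 11_000, "points": 8_050},
}
PPS_POOL = 500_000           # distributed by share of points per student, assessed at a flat rate
TOTAL_POINTS_POOL = 500_000  # distributed by share of total points, assessed by headcount share

for c in colleges.values():
    c["pps"] = c["points"] / c["headcount"]

total_headcount = sum(c["headcount"] for c in colleges.values())
sum_points = sum(c["points"] for c in colleges.values())
sum_pps = sum(c["pps"] for c in colleges.values())

for name, c in colleges.items():
    assessment = PPS_POOL / len(colleges) + TOTAL_POINTS_POOL * c["headcount"] / total_headcount
    award = PPS_POOL * c["pps"] / sum_pps + TOTAL_POINTS_POOL * c["points"] / sum_points
    print(f"{name}: net award ${award - assessment:,.0f}")
```

Because every dollar awarded was first assessed, the net awards across the system sum to zero; the design question is which college characteristics drive who ends up positive or negative.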

To ensure stakeholders were invested in the revision process, the recommended changes to the funding metrics were brought forward to the presidents’ commission and the State Board (the nine-member governing board of SBCTC) for their feedback and input. The committee chair shared the potential fiscal impact of each funding metric, which raised additional questions and concerns about the fairness of the proposed system, clarification of the funding principles, and whether the two were in alignment. Further, the presidents proposed elevating the impact of degree and certificate completions as a policy focus in the overall funding model and recommended those points be funded as a separate category. The feedback from the system constituencies led to a final set of analysis questions that needed to be answered before the recommended new funding model went forward to the State Board for adoption.

Testing the Alignment of the Funding Metrics and Principles

After the attributes of each potential funding metric had been reviewed and the presidents’ recommendation to align SAI with the national completion agenda (McPhail 2011) was incorporated, the advisory committee drafted a recommended funding model that included total points and completions (productivity and attainment) and points per student (efficiency). Adding completions as a separate category raised concerns for some committee members about unintended consequences for mission neutrality, specifically for the funding principle that stated “colleges have a fair opportunity to earn performance awards regardless of student demographics, program mix, or college characteristics.” The concern was that if performance funds were attached to completions, colleges with high populations of historically underrepresented and low-income students would be disadvantaged, as they may have fewer of those students in their completion cohorts. Prior to submitting its final recommendation for a new funding model to the presidents’ commission, the advisory committee reviewed a set of analyses that tested alignment between the proposed model and the funding principles, as outlined in the following section.

To test the principle of fairness and equality of mission mix, the net award of the proposed funding model (the performance award minus the assessment taken from the base to create the funding pool) was analyzed against a variety of college and student characteristics. The purpose of this analysis was to determine which of those characteristics might account for significant variation in the net award and to find the funding method that best mitigated those effects. Multiple models with net award as the dependent variable were established based on (1) the funding metric, (2) the method of the assessment, and (3) the weight of the funding for each metric. This process determined whether different weights given to the funding metrics or different methods of assessing money for the pool had an impact on the relationship between net award and student or college characteristics. As stated in the previous section, the points per student metric requires every college to contribute the same amount of funding because it is size-neutral. The two options for assessing money for the size-based award pools (total points and completions) were the college's percentage share of actual headcount or its percentage share of the full-time equivalent (FTE) allocation. The remainder of the analysis was twofold:

  1. Identify which net award variables (top of table 2) were statistically significantly correlated with student and college characteristic variables (bottom of table 2).

  2. Create a regression model with each of the dependent net award and correlated independent variables to determine the degree of variation in net award explained by student and college characteristics.

Each net award variable in table 2 was first correlated with each student and college characteristic. Those characteristics showing a statistically significant relationship with net award were then entered into a backward regression model to determine the amount of variation in net award explained by those characteristics. This process resulted in an analysis of eleven different funding models, one for each dependent variable of net award shown in table 2. The correlation analysis first revealed that none of the net awards was significantly correlated with the size of the student population that is non-white/non-Asian or the size of the population with low socioeconomic status (see Appendix A; data sources are described in Appendix B). This result satisfied the principle that the funding model be constructed so that colleges have a fair opportunity to earn performance awards regardless of student demographics. The finding that the model creates no disincentive for serving a particular kind of student was key to the integrity of the funding model and an important step toward stakeholder buy-in for the system.
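The two-step screen described above can be expressed compactly. The sketch below is not the committee's actual code (the original work was done with SQL tables and Excel spreadsheets); it assumes a pandas DataFrame `df` with one row per college, a net-award column for the model being tested, and the characteristic columns named below.

```python
import pandas as pd
import statsmodels.api as sm
from scipy import stats

CHARACTERISTICS = [
    "college_size", "over_enrollment", "basic_skills_fte", "transfer_fte",
    "workforce_fte", "deved_fte", "low_ses", "race_ethnicity", "part_time",
]

def screen_and_fit(df: pd.DataFrame, net_award_col: str, alpha: float = 0.05):
    # Step 1: keep only characteristics significantly correlated with the net award.
    keep = [c for c in CHARACTERISTICS
            if stats.pearsonr(df[net_award_col], df[c])[1] < alpha]
    # Step 2: backward elimination -- drop the least significant predictor
    # until every remaining predictor is significant (or none are left).
    while keep:
        model = sm.OLS(df[net_award_col], sm.add_constant(df[keep])).fit()
        pvals = model.pvalues.drop("const")
        if pvals.max() < alpha:
            return model  # model.rsquared gives the variance explained by characteristics
        keep.remove(pvals.idxmax())
    return None  # no characteristic significantly predicts the net award
```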

Table 2. 
Dependent and Independent Variables for Regression Model Analyses
Net Award: Dependent Variables (model, funding metric, method of assessment, weight)

Independent methods:
  1. Points per student; flat rate; 100 percent
  2. Total points; headcount; 100 percent
  3. Total points; FTE; 100 percent
  4. Completions; headcount; 100 percent
  5. Completions; FTE; 100 percent

Combined methods:
  6. Total points/points per student; flat rate and FTE; 50/50 percent
  7. Total points/points per student; flat rate and headcount; 50/50 percent
  8. Total points/points per student; flat rate and headcount; 75/25 percent
  9. Points per student/completions; flat rate and headcount; 50/50 percent
  10. Total points/points per student/completions; flat rate and FTE; 45/45/10 percent
  11. Total points/points per student/completions; flat rate and headcount; 45/45/10 percent

Student and College Characteristics: Independent Variables
  College size: annual full-time equivalent (FTE) enrollment
  Percent over enrollment: the difference between a college's allocated and actual enrollment
  Size of basic skills FTE effort: percentage of state-supported FTE that are basic skills courses
  Size of developmental education FTE effort: percentage of state-supported FTE that are developmental education courses
  Size of transfer/academic effort: percentage of state-supported FTE that are transfer/academic courses
  Size of professional/technical effort: percentage of state-supported FTE that are professional/technical courses
  Size of student population that attends part-time: percentage of student headcount that is part-time
  Size of student population that is non-white and non-Asian: percentage of student headcount by race/ethnicity
  Size of student population with low socioeconomic status: percentage of student headcount within the bottom two quartiles

The next observation resulting from the regression analysis was that the models using share of FTE allocation as the assessment method (models 3, 5, 6, and 10; table 3) demonstrated a relatively high amount of variation explained by the college characteristics (table 2). In models 3, 6, and 10, over-enrollment (enrolling FTE above the state allocated target) was a significant factor, accounting for 24, 31, and 34 percent, respectively, of the unique variation in net award. This relationship occurred because FTE allocation share is a static number derived from a variety of historical factors, whereas point accumulation is generated from student headcount. Essentially, a college that can enroll a significant number of students beyond their funding level can also earn a larger share of points and funding compared with their contribution to the pool—a factor unrelated to pure performance. Further, mission mix was a factor for two of the models. In model 3 (Total Points), the percent of college Workforce FTE was significant (19 percent of unique variance), and for model 5 (Completions) the percent of Basic Skills FTE was significant (19 percent of unique variance).
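For readers less familiar with part (semipartial) correlations: squaring a predictor's part correlation gives the share of variance in net award it explains uniquely, beyond the other predictors in the model, which is how the unique-variance percentages in table 3 are obtained. The sketch below is illustrative only and assumes a pandas DataFrame `df` with the column names supplied by the caller.

```python
import pandas as pd
import statsmodels.api as sm

def part_correlation(df: pd.DataFrame, y: str, x: str, others: list[str]) -> float:
    """Correlate y with the portion of x not explained by the other predictors."""
    x_resid = sm.OLS(df[x], sm.add_constant(df[others])).fit().resid
    return df[y].corr(x_resid)

# Example reading of table 3 (model 3): a part correlation of 0.492 for
# over-enrollment implies roughly 0.492 ** 2 = 0.24, i.e., 24 percent unique variance.
```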

Table 3. 
Regression Results for Funding Model
Each funding model is listed with its significant college characteristics, each characteristic's part correlation (unique variance), and the model R-square (model variance).

Assessment method: FTE allocation share
  3. Total points: over-enrollment 0.492 (24%); workforce FTE −0.436 (19%). R-square 0.549 (55%)
  5. Completions: basic skills FTE −0.479 (19%). R-square 0.230 (23%)
  6. Total points/points per student (50/50): over-enrollment 0.561 (31%). R-square 0.314 (31%)
  10. Total points (less completions), points per student, completions (45/45/10): over-enrollment 0.582 (34%). R-square 0.339 (34%)

Assessment method: Flat rate and headcount
  1. Points per student: college size −0.376 (14%); part time −0.359 (13%). R-square 0.299 (29%)
  2. Total points: DevEd FTE 0.513 (26%). R-square 0.264 (26%)
  4. Completions: college size −0.387 (15%); part time −0.311 (10%). R-square 0.266 (26%)
  7. Total points and points per student (50/50): college size −0.337 (11%); part time −0.296 (9%). R-square 0.206 (31%)
  8. Total points and points per student (75/25): DevEd FTE 0.430 (18%). R-square 0.185 (18%)
  9. Points per student and completions (50/50): college size −0.428 (18%); part time −0.360 (13%). R-square 0.355 (35%)
  11. Total points (less completions), points per student, completions (45/45/10): college size −0.316 (10%); part time −0.329 (11%). R-square 0.216 (21%)

Note: DevEd: Developmental education.

These results suggested that models 3, 5, 6, and 10 violated the funding principles: (1) if base funds are used, the method used to create the performance fund must align with the award method, and (2) colleges have a fair chance to earn awards regardless of mission mix. The four models that used FTE allocation as the assessment method were removed from consideration, and the remaining seven models were evaluated to determine which one demonstrated the least amount of variation attributable to college characteristics and most closely aligned with the revised funding principles. The results of the final models for consideration are shown in table 3.

After reviewing the significant college characteristics identified in each model and the final model variance, the advisory committee determined that the combined funding metric model of total points less completions (45 percent), points per student (45 percent), and completions (10 percent), with a headcount assessment (model 11), was the best option for a funding model that both aligned with the funding principles and maximized the amount of the net award attributable to performance. It included all three funding metrics of interest, and just over one-fifth of the variation in net award was attributable to college characteristics. The recommended changes to the funding model were approved by the college presidents in the system and subsequently by the State Board for full implementation in the 2013–14 academic year.
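A minimal sketch of how the adopted model (model 11) could be operationalized, using the hypothetical $10 million pool from the committee's scenarios and invented college figures: 45 percent of the pool rewards total points less completion points, 45 percent rewards points per student, and 10 percent rewards completions, with the size-neutral share assessed at a flat rate and the size-related shares assessed on headcount.

```python
# All college figures are invented; only the weights and the $10 million pool
# reflect the model described in the text.
POOL = 10_000_000
W_TOTAL, W_PPS, W_COMP = 0.45, 0.45, 0.10  # total points less completions / points per student / completions

colleges = {
    "College A": {"headcount": 18_000, "points": 9_900, "completions": 1_400},
    "College B": {"headcount": 6_000,  "points": 4_150, "completions": 520},
    "College C": {"headcount": 11_000, "points": 8_050, "completions": 980},
}

for c in colleges.values():
    c["tp_less_comp"] = c["points"] - c["completions"]
    c["pps"] = c["points"] / c["headcount"]

total_headcount = sum(c["headcount"] for c in colleges.values())
sum_tp = sum(c["tp_less_comp"] for c in colleges.values())
sum_pps = sum(c["pps"] for c in colleges.values())
sum_comp = sum(c["completions"] for c in colleges.values())

for name, c in colleges.items():
    # Flat-rate assessment for the size-neutral share; headcount share for the rest.
    assessment = (POOL * W_PPS / len(colleges)
                  + POOL * (W_TOTAL + W_COMP) * c["headcount"] / total_headcount)
    award = (POOL * W_TOTAL * c["tp_less_comp"] / sum_tp
             + POOL * W_PPS * c["pps"] / sum_pps
             + POOL * W_COMP * c["completions"] / sum_comp)
    print(f"{name}: assessed ${assessment:,.0f}, awarded ${award:,.0f}, net ${award - assessment:,.0f}")
```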

Summary and Lessons Learned

This brief began with a description of the current state of PBF as well as a review of research that identified distinguishing characteristics between states that have and have not been successful with sustained PBF systems. Two key interrelated themes emerged: the importance of the decision to fund the system through a base reallocation process or through new money given the fiscal context of the state, and the high level of influence that support from primary stakeholders has on the ultimate success of the system. Washington State is a good example of a state where PBF was implemented, discontinued shortly thereafter, and then successfully re-implemented in 2006 as the Student Achievement Initiative. The Great Recession of 2008 and subsequent unprecedented budget reductions disrupted SAI's original design principles, however, and a Community College Research Center evaluation confirmed that the above-mentioned themes were areas of concern for a sustainable PBF practice going forward in Washington State.

Guided by an advisory committee of key stakeholders, an internal policy review of SAI in 2012 resulted in new funding principles that reflected the fiscal reality of the state. These principles were empirically tested for alignment with the operational elements of the funding model. The review, analysis, and revision of Washington's PBF system resulted in a conceptually sound, valid, and sustainable system that, given the fiscal and policy environment for community and technical colleges, was considered fair and in alignment with the goals of performance-based funding in Washington State. The critical lessons learned from Washington State's long history of PBF activity for the community and technical college system are offered here as guidance for state policy makers undertaking new or revised PBF systems.

Encourage and Facilitate Strong Communication and Input in Order to Generate Investment in the Process

Input and involvement from college constituents was a key factor in the ultimately successful implementation of the revised system. This was facilitated through the president-led advisory committee, which included commission representation and staff support from the SBCTC. The committee made regular progress reports on the process and proposed recommendations to all of the major constituents in the system, including the full presidents’ commission and the State Board. This widespread communication was vital for the system to have input and to raise issues that could potentially impact each college at a local level.

If a Base Reallocation Process is Necessary to Fund the PBF System, Create Alignment between the Methods Used to Assess the Funds and to Award Performance

The financial reality for higher education in Washington State in 2012 was that new funding was unlikely. Therefore, the revised system was designed to consider the impact of a base reallocation from the onset. Starting from this perspective, colleges conceded that some competition was inherent and part of the nature of shifting a portion of base funding from enrollment to performance. Nevertheless, in a base reallocation methodology it is critical that colleges have a reasonable chance to earn back what they contributed to the pool. Otherwise, it can result in a structural disadvantage for colleges simply because of a characteristic that has little to do with performance. This is a key design consideration when using a multi-method approach to awarding funds, as each funding metric may interact with college characteristics in different ways.

This principle represents an imperative design consideration for states that look to align their PBF systems with funding model implementation recommendations made by Harnisch (2011) and Jones (2012) to both increase the percent of the allocation dedicated to performance over time as well as build performance into the overall state funding formula. As of 2014, recommendations are underway in Washington State to increase the amount of funding for SAI to between 5 and 10 percent of the total state appropriation (SBCTC 2014) as part of a shift to a new state funding model similar to that in Massachusetts (Salomon-Fernandez 2014). The sensitivity of the interaction between the performance funding metrics and college characteristics was given significant consideration in building the components of the new funding formula, in the hopes of avoiding potential unintended consequences when increasing the size of the allocation dedicated to performance.

Ensure Colleges Have a Fair Opportunity to Earn Performance Awards Regardless of Mission Mix or Student Demographics

Accounting for mission is a commonly cited recommendation for successful PBF design. The advisory committee's exercise of empirically testing each funding metric against the principles for alignment not only evaluated the integrity of the principle within the new design but proved a crucial last step for stakeholder buy-in of the revised system. The comprehensiveness of the analyses for the funding considerations surpassed that of the original SAI, which garnered confidence among each of the colleges in the system that their unique circumstances and missions had been carefully studied and taken into account—a key element for longevity of any PBF system.

Notes

1. “National experts” included researchers from the Community College Research Center (CCRC) and the National Center for Higher Education Management Systems (NCHEMS).

2. “Basic skills” is coursework designed to prepare students to earn a high school equivalency, improve English skills, and learn workplace and college-readiness skills.

3. “Developmental” coursework serves as a prerequisite to college-level math and English.

4. National experts included Dennis Jones and Peter Ewell from the National Center for Higher Education Management Systems.

Acknowledgments

I would like to thank the editors of Education Finance and Policy and the anonymous reviewers for their assistance, as well as Jennifer Whetham, Katherine Mahoney, Davis Jenkins, and Willard Hom for providing helpful comments and suggestions on early drafts of this paper. The views expressed in this brief come from the author's perspective as a State Board for Community and Technical Colleges (SBCTC) employee and staff member to the 2012 Student Achievement Initiative advisory committee and do not necessarily reflect the views of the SBCTC.

REFERENCES

Belfield, Clive. 2012. Washington State Student Achievement Initiative: Achievement point analysis for academic years 2007–2011. Available http://ccrc.tc.columbia.edu/publications/student-achievement-initiative-points-analysis.html. Accessed 13 September 2012.

Burke, Joseph C., and Shahpar Modarresi. 2000. To keep or not to keep performance funding: Signals from stakeholders. Journal of Higher Education 71(4): 432–453. doi:10.2307/2649147.

Dougherty, Kevin J., and Rebecca S. Natow. 2009. The demise of higher education performance funding systems in three states. New York: CCRC, Columbia University.

Friedel, Janice N., Zoë Mercedes Thornton, Mark D'Amico, and Stephen G. Katsinas. 2013. Performance-based funding: The national landscape. Available www.uaedpolicy.ua.edu/uploads/2/1/3/2/21326282/pbf_9-19_web.pdf. Accessed 19 September 2013.

Harnisch, Thomas. 2011. Performance-based funding: A re-emerging strategy in public higher education financing. A Higher Education Policy Brief. Washington, DC: AASCU Policy Matters.

Hillman, Nicholas, David Tandberg, and Alisa Fryar. 2015. Evaluating the impacts of “new” performance funding in higher education. Educational Evaluation and Policy Analysis 37(4): 501–519. doi:10.3102/0162373714560224.

Jenkins, Davis, Todd Ellwein, and Katherine Boswell. 2009. Formative evaluation of the Student Achievement Initiative “learning year.” Available http://ccrc.tc.columbia.edu/publications/student-achievement-initiative-learning-year.html. Accessed 30 August 2013.

Jones, Dennis. 2012. Performance funding: From idea to action. Available http://files.eric.ed.gov/fulltext/ED536832.pdf. Accessed 16 February 2016.

McPhail, Christine Johnson. 2011. The completion agenda: A call to action. Available www.aacc.nche.edu/Publications/Reports/Documents/CompletionAgenda_report.pdf. Accessed 2 December 2013.

Miao, Kysie. 2012. Performance-based funding of higher education: A detailed look at best practices in 6 states. Washington, DC: Center for American Progress.

Salomon-Fernandez, Yves. 2014. The Massachusetts community college performance-based funding formula: A new model for New England? The New England Journal of Higher Education. Available www.nebhe.org/thejournal/the-massachusetts-community-college-performance-based-funding-formula-a-new-model-for-new-england/. Accessed 15 February 2015.

Shulock, Nancy, and Davis Jenkins. 2011. Performance incentives to improve community college completion: Learning from Washington State's Student Achievement Initiative. Available http://ccrc.tc.columbia.edu/publications/performance-incentives-college-completion.html. Accessed 11 July 2012.

State Board for Community and Technical Colleges (SBCTC). 2006. System direction: Creating opportunities for Washington's future. Available http://eric.ed.gov/?id=ED496226. Accessed 16 February 2016.

State Board for Community and Technical Colleges (SBCTC). 2014. Washington Association of Community and Technical Colleges: Business meeting minutes. Available www.sbctc.edu/resources/documents/colleges-staff/commissions-councils/wactc/7_18_14_Business_Meeting_Minutes_with_Attachments.pdf. Accessed 16 February 2016.

Appendix A:

Correlations for Model Variables
Model | College Size | Over-enrollment | Basic Skills FTE | Transfer FTE | Workforce FTE | DevEd FTE | Low SES | Race/Ethnicity | Part Time
1 | −0.461** | 0.150 | −0.123 | −0.181 | 0.204 | 0.053 | 0.212 | 0.163 | −0.447**
2 | −0.212 | 0.383* | −0.155 | 0.370* | −0.377* | 0.513** | 0.196 | 0.199 | −0.211
3 | 0.013 | 0.622** | −0.090 | 0.571** | −0.578** | 0.480** | −0.028 | 0.023 | 0.325
4 | −0.463** | −0.122 | −0.149 | −0.299 | 0.342* | −0.039 | 0.300 | 0.066 | −0.401*
5 | −0.310 | 0.402* | −0.479** | 0.115 | 0.061 | 0.174 | 0.147 | −0.173 | −0.176
6 | −0.244 | 0.561** | −0.135 | 0.363* | −0.357* | 0.410* | 0.092 | 0.097 | 0.036
7 | −0.408* | 0.280 | −0.158 | 0.096 | −0.086 | 0.312 | 0.239 | 0.201 | −0.375*
8 | −0.313 | 0.344* | −0.161 | 0.247 | −0.246 | 0.430* | 0.222 | 0.205 | −0.296
9 | −0.514** | −0.049 | −0.156 | −0.294 | 0.334 | −0.013 | 0.304 | 0.105 | −0.459**
10 | −0.394* | 0.287 | −0.210 | 0.085 | −0.053 | 0.303 | 0.247 | 0.165 | −0.404*
11 | −0.217 | 0.582** | −0.176 | 0.372* | −0.348* | 0.408* | 0.084 | 0.065 | 0.041

Model numbers correspond to the eleven funding models listed in table 2.

Notes: DevEd: Developmental education; SES: socioeconomic status.

**Correlation is significant at the 0.01 level (2-tailed).

*Correlation is significant at the 0.05 level (2-tailed).

Appendix B:  Description of Data Sources

All of the data used for the analysis in the redesign came from the SBCTC data warehouse (more information at www.sbctc.edu/colleges-staff/research-data/data-warehouse/default.aspx) during the tenure of the advisory committee in 2012. The SAI points are engineered from student demographic, transcript, and completion data and are stored in a set of data tables on the SBCTC SQL server. The funding scenarios were generated by using the points from the SAI data tables to create assessment and award pools with a hypothetical dollar amount of $10 million, so the impacts could be seen clearly; the calculations were carried out in Excel spreadsheets. The college and student characteristic variables used in the correlation and regression calculations also derive from the SAI data tables.
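As a hedged sketch only (the file name and column names below are hypothetical and do not reflect the SBCTC warehouse schema), the scenario inputs described above could be assembled in Python rather than Excel along the following lines.

```python
import pandas as pd

# Hypothetical export of the SAI point tables; column names are illustrative.
points = pd.read_csv("sai_points_by_college.csv")
# columns: college, academic_year, total_points, completions, headcount

POOL = 10_000_000  # the hypothetical pool used in the committee's scenarios
scenario = points[points["academic_year"] == "2011-12"].copy()
scenario["points_per_student"] = scenario["total_points"] / scenario["headcount"]
# ...feed these columns into the assessment/award calculation sketched earlier.
```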