## Abstract

Performance-based funding (PBF) in higher education has grown in recent years as a means of institutional accountability and an incentive for improving student success. Although most states have successfully implemented their respective systems, research on early funding models suggests a difficult fiscal environment can introduce tension between the theory and practice of the concept. This policy brief uses the case of Washington State's redesign of the Student Achievement Initiative to describe new implications around this tension. The revision focused on using a base reallocation as the funding source in the context of diminished state resources and the importance of college buy-in. Regression analyses tested the alignment between the principles and the metrics for awarding funds, resulting in a funding model that tied the maximum share of dollars to performance rather than to college characteristics. Policy makers considering new or revised PBF systems can benefit from critical lessons learned from Washington State's comprehensive process and final product.

## Introduction

Performance-based funding (PBF) systems have been in place since the Tennessee Higher Education Commission first adopted its policy in 1978. Interest in PBF has grown substantially in recent years because of the increased focus on student success and accountability driven by President Obama's Completion Agenda (Friedel et al. 2013). In contrast to the historical input-based, enrollment-driven funding model, this agenda embodies an output-based version of accountability for mission fulfillment and affordability. For state policy makers and education leaders, this paradigm shift has changed the conversation around the goals of higher education, which now include equal attention to both access and student success.

The primary objective of PBF is to incentivize a focus on student outcomes rather than enrollment inputs by attaching a portion of the state allocation to measures of performance. The concept derives from the theory that public institutions of higher education, whose ability to operate relies heavily on the state appropriation, will adapt their behavior to achieve the outcomes that best protect their funding (Harnisch 2011). The most common PBF methodology used to link financial incentives to outcomes is a performance set-aside of the state appropriation (Friedel et al. 2013). The set-aside can consist of either a “new money” allocation (funding above and beyond the base allocation) or a separated portion of the base funds (Harnisch 2011). The new money method is often viewed as an extra incentive or bonus for colleges to increase their efforts to support student success. A historical challenge with this method is that the amount of funding dedicated to performance (typically between 1 and 15 percent) is often not substantial enough to incentivize significant change in institutional behavior (Miao 2012). The separated portion (or base reallocation) method causes a greater degree of uncertainty and instability within a college's budget, even at small funding levels, which consequently increases the chance that colleges will pay attention and take the necessary action to preserve their funding level. From this view, the base reallocation method provides the greater incentive to focus on student outcomes. Nevertheless, as the stakes increase for colleges, they expect a voice in any process that affects their budgets and their ability to provide services to their students. Of particular concern, especially within a large state system of colleges, is the ability to create a standardized system that adequately accounts for differing college missions and student populations.
As colleges are the primary stakeholders at the intersection between funding and delivering services that best serve their local populations, the incentive-based principles that undergird PBF can quickly provoke outright rejection of the idea if colleges are not involved in developing the system.
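The difference between the two set-aside methods described above can be illustrated with a minimal sketch. The dollar amounts, the percentage, and the function name below are purely hypothetical; only the structural contrast (new money on top of an untouched base versus a withheld share of the base) comes from the discussion above.

```python
def performance_pool(base_allocation: float, set_aside_pct: float = 0.0,
                     new_money: float = 0.0) -> tuple[float, float]:
    """Contrast the two set-aside methods: 'new money' funds performance
    on top of an untouched base, while a base reallocation withholds a
    share of the base that colleges must earn back through performance.
    Returns (guaranteed base funds, performance pool)."""
    if new_money > 0:
        return base_allocation, new_money        # base is left intact
    withheld = base_allocation * set_aside_pct   # base reallocation
    return base_allocation - withheld, withheld

# Hypothetical $10M base with a 5% base reallocation: the college is
# guaranteed $9.5M and must compete for the remaining $500K.
print(performance_pool(10_000_000, set_aside_pct=0.05))
# (9500000.0, 500000.0)
```

Under the new money method the same $500K pool would leave the base untouched, which is why a base reallocation is perceived as both the sharper incentive and the larger threat.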

A state's decision regarding the source of performance funding and the involvement of stakeholders in the process are two interrelated factors that carry significant weight in whether a PBF system will be successfully implemented and sustained over time. Research on PBF has noted that these themes become especially paramount when circumstances change and resources become constrained, as during the Great Recession (Dougherty and Natow 2009; Harnisch 2011; Shulock and Jenkins 2011; Jones 2012; Miao 2012; Friedel et al. 2013). Washington State represents a prime example of the tension that can ensue when the fiscal environment moves out of alignment with the design considerations for a PBF system.

As one of the longest-standing PBF systems in the nation, Washington State's Student Achievement Initiative (SAI) for the community and technical college system is cited in numerous studies and is considered “one of the most carefully designed performance accountability systems in the United States, serving as a model for the most recent wave of new performance funding reforms” (Hillman, Tandberg, and Fryar 2015, p. 2). Despite that designation, a 2012 review of SAI, conducted by a state system-level advisory committee, uncovered design flaws in the funding model that had left some colleges with no equitable opportunity to earn performance awards. The complication arose because SAI was originally designed and implemented as a new money–incentive program, but the state was forced to shift to a base reallocation funding model after the 2008 fiscal crisis resulted in budget cuts and no new money for the system. College anxiety over the loss of state funding, combined with the negative perception that the original funding principles were no longer being followed, posed significant challenges that threatened the long-term viability of PBF for the system. The situation bred distrust in the validity of SAI, leading some colleges to reject the concept in principle.

The goal of this policy brief is to draw on Washington State's experience with PBF to address the implications, previously identified in the literature, of the interplay between funding challenges and stakeholder involvement and support. These implications, most commonly described by policy experts as recommendations for successful PBF design, are examined through a deep case-study analysis of Washington State's history with PBF. The result is an enhanced understanding of some of the more technical aspects of those design recommendations, which pertain to any PBF system in higher education and consequently are offered to help guide policy makers considering revising or implementing PBF.

The remainder of this brief is organized into five sections. First, I describe some of the historical challenges of PBF, specifically the distinguishing elements around funding and stakeholder support that have affected the survival of systems over time. Then I outline the history of PBF for community and technical colleges in Washington State and describe how the original design principles of SAI proved incompatible with the state's fiscal environment during the Great Recession. Next, I detail the work of the system-level advisory committee and the considerations that resulted in a new set of principles and a new funding model. These included a review of prior research on SAI and PBF models in other states, an analysis of the technical aspects of setting aside money from the base allocation to create a performance pool, and stakeholder input on policy goals for the PBF system. I then describe how the advisory committee evaluated a series of regression models to test the alignment between the principles and the recommended new funding model, a step in the design process that has not been as pronounced in the prior literature on performance funding model design. Finally, I conclude with key lessons learned from Washington State's experience.

## Review of Challenges for Performance-Based Funding Systems

Between 1979 and 2007, twenty-six states attempted some form of PBF; fourteen systems were eventually discontinued because of poor design or implementation (Miao 2012). As more states develop PBF systems, they have benefitted both from extensive research on the early systems and from the lessons learned from issues plaguing early adopters. Studies on PBF have identified commonalities among states where the system was abandoned or reconstructed, resulting in recommendations for improvement. Two overarching and interrelated components emerge: the source of the funding and the involvement of stakeholders.

The fiscal condition of a state at the time a PBF system is deployed can not only dictate whether the funding source is new money or a base reallocation, but can also influence the perceptions and behavior of the colleges and stakeholders involved. During difficult financial times, Dougherty and Natow (2009) found that colleges focused more on preserving their base funding than their performance funding, as they viewed the latter as less stable. Colleges whose performance was funded through the base reallocation method were frustrated by this type of model; they perceived the process not as a reallocation designed to motivate performance but rather as a cut to their appropriation, which they then had to earn back. Burke and Modarresi (2000) also identified stable funding as a key characteristic of a successful system and suggested that new money be used as the funding source in order to avoid the problems associated with the volatility of the state revenue cycle. Jones (2012) acknowledges this perspective but cautions that, given the current fiscal outlook for most states, waiting for new money will make PBF implementation unlikely. Rather than relying solely on new money as the means of funding stability, Jones suggests a phased-in approach and stop-loss provisions to allow colleges time to adjust to the new appropriation. Harnisch (2011) adds that stable and predictable program funding is necessary for the incentive concept behind PBF to work, and that the system should be protected from budget cuts.

The second critical component is widespread input into the PBF system and the support of stakeholders. In states where PBF was discontinued, colleges were insufficiently involved in designing the performance metrics, and they viewed some aspects as not reflective of the diverse missions and goals of the different institutions (Dougherty and Natow 2009). This discrepancy can occur when state policy makers’ goals for higher education differ from the goals of the individual institutions—for example, a greater emphasis on completions versus progression for under-served populations (Miao 2012). Colleges are in the best position to understand their institutional mission in the context of the environment where the PBF system is deployed; therefore, college input from the beginning of the planning process is crucial for a sense of investment in the system. Additionally, college support helps garner backing from key constituents who have a stake in PBF at the state level (such as the legislature), thereby increasing the chances the system will be successfully implemented (Dougherty and Natow 2009).

Washington is one of the states cited by Dougherty and Natow (2009) where PBF has been implemented, discontinued, reinstated, and revised a number of times. The first attempt was implemented in 1997 as a budget proviso for the community and technical college system and was not renewed in the 2000–02 biennium. The program was dropped because of a change in legislative priorities for higher education and overall push-back from the colleges on the metrics and funding design (Dougherty and Natow 2009). The next attempt at implementation came in 2006, when the State Board for Community and Technical Colleges (SBCTC) adopted a set of system strategic goals, one being to “achieve increased educational attainment for all residents across the state” (SBCTC 2006). Drawing on lessons learned from the first failed attempt, SBCTC created a Student Achievement Task Force—comprising State Board members and staff, college presidents, faculty, and college trustees—in order to gain widespread input into the process. Working with national experts,1 this group was charged with developing a way to measure and reward colleges for increasing student achievement in alignment with the Board's system goal.

## Washington State's Performance-Based Funding System: The Student Achievement Initiative

As a result of the Student Achievement Task Force's work, in 2007 the SBCTC adopted the new performance funding system, called the Student Achievement Initiative (SAI). SAI consists of points that emphasize student momentum for college success through both building college readiness and earning college credits. In this way, SAI captures critical educational gains made by all students, from those who come in the least prepared to those who are college-ready. These gains are recognized through points for increasing basic skills,2 completing developmental coursework,3 earning 15 and 30 college-level credits, completing college-level math, and earning a credential. This progressive continuum of points reflected the system's strategic goal of increased educational attainment for all residents by not placing all of the weight on completions but also rewarding the major milestones that students reach along the way to completion. The points were also constructed to be “mission neutral”: colleges that serve a high number of underprepared students, academic transfer students, or students in professional-technical programs all have a chance to earn performance funds based on their work with their unique student populations.

### Funding Model: New Money

The principles guiding the distribution of performance funds at the onset of SAI were:

1. Each college is measured for total improvement (point gains) against its own performance.

2. Total improvement represents a single number—not a rate.

3. There are no targets and no ceilings on achievement gains.

4. All gains are rewarded.

5. Rewards are stable and predictable, so colleges can invest funds for further improvements.

6. Reward funds are flexible.

When the initiative began, each college was given $65,000 in start-up funds from the SBCTC, permanently added to its base allocation. Beyond that initial start-up fund, the SBCTC set aside $500,000 to fund the first performance year. In keeping with the Board's recommendation, the funding pool was structured so that each college competed against itself through the single funding metric of “net point gain” (earning more points than in the prior year). A dollar amount per point was established in advance, and colleges were awarded that amount for each point they accumulated above the total points earned the prior year. This within-college comparison methodology meant that the rewards had no impact on the funding of other colleges, and because the rewards did not originate from the base, they were considered stable.
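The original award mechanics can be sketched as a small calculation. The per-point rate and point totals below are hypothetical, and the function name is my own; only the structure (a within-college, gains-only comparison at a pre-set dollar rate, with no targets or ceilings) follows the description above.

```python
def sai_award(points_this_year: int, points_last_year: int,
              dollars_per_point: float) -> float:
    """Original SAI funding metric: a pre-set dollar rate applied to each
    point a college earns above its own prior-year total ("net point gain").
    A college with no net gain simply earns no award, and one college's
    award never reduces another's funding."""
    net_gain = points_this_year - points_last_year
    return max(net_gain, 0) * dollars_per_point

# Hypothetical example: a college grows from 9,500 to 9,800 points
# at an assumed rate of $10 per point.
print(sai_award(9800, 9500, 10.0))  # 3000.0
```

Because each college is compared only against itself, the award pool behaves as an open-ended incentive rather than a fixed-sum competition among institutions.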

Following the first performance year, it was assumed that the colleges would continue to invest in their student success strategies and would need additional money to sustain those initiatives over time. If the investment in strategies had the positive effect of increasing student achievement each year, the SBCTC estimated that approximately $100,000, on average, would be added to each college's base allocation per year. Therefore, in order to continue to support the initiative, the SBCTC made a legislative request of $7 million in new funds for the 2009–11 biennium.
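The size of that request is consistent with simple arithmetic over the biennium. A minimal check, assuming a system of 34 community and technical colleges (a count not stated above):

```python
# Back-of-the-envelope check of the 2009-11 biennium request.
# The count of 34 colleges is an assumption not given in the text;
# the $100,000-per-college annual estimate comes from the text.
n_colleges = 34
avg_annual_increase = 100_000  # estimated yearly addition per college base
biennium_years = 2

estimated_need = n_colleges * avg_annual_increase * biennium_years
print(f"${estimated_need:,}")  # $6,800,000, close to the $7M requested
```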