In Short

Here’s Why Gainful Program Impact Estimates Vary So Much


One of the big talking points used by the Obama Administration is that 1,400 programs will not pass its new gainful employment rules. That sounds like a lot, but it has also created confusion, with some correctly arguing that the rule is stronger than prior iterations and others correctly arguing that it is weaker.

It turns out neither position is technically wrong. Understanding why comes down to four factors: an additional performance tier, the use of a single metric, the number of gainful programs with available data, and changes in how debt payments are calculated.

How many programs do not pass?

First, some quick background. From 2011 to today we’ve seen three major versions of the gainful employment regulation: (1) a final rule issued in June 2011, (2) a proposed rule issued in March 2014, and (3) the final rule issued in October 2014. While each has many elements in common, they depend on slightly different thresholds, metrics, and other factors. As a result, the number of programs expected to not pass each rule varies quite a bit, as the table below shows.

Note: The estimates here do not necessarily match what the Administration has stated. The reason for this likely has to do with changes in the way student loan amounts are calculated in the 2014 final rule that will be discussed later in this post.

An additional performance tier means not passing ≠ failing

The most important thing to understand about the different estimates of gainful programs that could lose eligibility for federal student aid is the difference between “non-passing” and “failing.” The final 2011 rule classified programs as either failing or passing; there was nothing in between. By contrast, the March and October 2014 versions included a third performance tier known as the “zone.” Programs in the zone are not technically failing, but they still have to worry about losing eligibility if they do not achieve a passing result within four consecutive years. So when the Administration talks about 1,400 non-passing programs, it means those that either fail or are in the zone. When it presented the final rule in 2011, there was no zone to include.

The zone represents a new set of thresholds that programs have to meet. In the final 2011 rule, a program failed the debt-to-earnings measure if its annual rate was above 12 percent and its discretionary rate was above 30 percent. The 2014 rules kept that same failing level, but also said that a program has to have an annual rate at or below 8 percent or a discretionary rate at or below 20 percent in order to pass. Programs that fall somewhere in between (not good enough to pass, but with at least one measure not bad enough to fail) end up in the zone. So the 2014 rules effectively put a number of programs that would have passed in 2011 at risk of losing eligibility.
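Using the thresholds stated above, the three-tier classification can be sketched as a simple decision rule. This is an illustration of the logic only (the function name and structure are my own, not the Department’s actual calculation):

```python
def classify(annual_rate, discretionary_rate):
    """Classify a program under the 2014 gainful employment tiers.

    Rates are debt-to-earnings ratios expressed as percentages.
    A program passes if either rate is at or below the passing
    threshold, fails if both rates exceed the failing thresholds,
    and lands in the zone otherwise.
    """
    if annual_rate <= 8 or discretionary_rate <= 20:
        return "pass"
    if annual_rate > 12 and discretionary_rate > 30:
        return "fail"
    return "zone"

# A program too high to pass but not bad enough on both
# measures to fail lands in the zone; a low discretionary
# rate saves a program even with a high annual rate.
print(classify(10, 25))  # zone
print(classify(13, 18))  # pass
```

Note how the passing test is an "or" while the failing test is an "and," which is exactly what creates the in-between zone.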

As the table below shows, a large number of programs end up in the zone: an estimated 832, representing 121,700 graduates, in the final rule. That’s nearly double the number labeled as failing.


This table also shows the importance of the baseline used for comparison. The final 2014 rule has substantially more programs at risk of losing eligibility than the 2011 final rule did, but a bit fewer than the March 2014 proposed rule, as well as differing levels of zone programs. Understanding why this is the case requires next looking at the non-debt-to-earnings measures included.

Multiple measures

The final 2014 rule is the first formal iteration of gainful employment to include only a single accountability measure: debt-to-earnings rates. But the role of a second measure in other versions has varied. The 2011 final rule included a student loan repayment rate, while the March 2014 proposed rule had a program cohort default rate. These other measures and the role they played in a program passing or failing explain a great deal of the discrepancy in the number of non-passing programs.

The 2011 final rule required a program to fail both the debt-to-earnings and repayment rate measures. Either measure could thus save a program from poor performance on the other. And sure enough, that’s what happened: 159 programs failed the debt-to-earnings measure but passed the repayment rate. Another 1,305 programs were saved by the debt-to-earnings measure after failing the repayment rate.

The March 2014 proposed rule operated differently. Instead of potentially saving programs, the program cohort default rate became a second way a program could fail to pass. In other words, a program that passed debt-to-earnings but failed the program cohort default rate was a failing program. The same was true if it passed the program cohort default rate but failed the debt-to-earnings rate. Instead of the pass-either approach employed in 2011, the March 2014 proposal required passing both. This formulation turned a number of passing programs into failing ones. In particular, 323 programs with 68,360 graduates passed the debt-to-earnings rate but failed the program cohort default rate. Another 167 programs with 28,816 graduates were in the zone on debt-to-earnings and failed the program cohort default rate. Finally, 359 programs with 22,435 borrowers failed the program cohort default rate but had no debt-to-earnings data. That’s nearly 850 programs whose results would improve by removing the program cohort default rate.
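The contrast between the two combination rules can be sketched as follows. This is a minimal illustration of the logic described above (function names are my own), assuming the 2011 rule had no zone and the March 2014 proposal treated a default-rate failure as decisive:

```python
def outcome_2011(dte_fails, repayment_fails):
    """2011 final rule: a program loses only if it fails BOTH
    the debt-to-earnings and repayment rate measures; passing
    either one saves it. (There was no zone in this version.)"""
    return "fail" if (dte_fails and repayment_fails) else "pass"

def outcome_march_2014(dte_result, pcdr_fails):
    """March 2014 proposal: failing the program cohort default
    rate sinks a program regardless of its debt-to-earnings tier;
    otherwise the debt-to-earnings result stands."""
    if pcdr_fails:
        return "fail"
    return dte_result  # "pass", "zone", or "fail" on debt-to-earnings

# The same mixed performance lands differently under each rule:
print(outcome_2011(True, False))         # pass — saved by repayment rate
print(outcome_march_2014("pass", True))  # fail — sunk by default rate
```

The first rule lets either measure rescue a program; the second turns the extra measure into an additional way to lose.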

As the table below shows, once you take out the second measure, the number of programs that fail isn’t radically different across the three rules. That makes sense, since the failure thresholds have been the same. The difference is that choices made in the 2011 final rule helped to reduce the number, while those in the March 2014 proposed rule helped to increase it. In fact, the remaining small differences in failing programs after removing the second measure reflect the different data sets used more than a difference in standards.

More programs with data

The first look we had at results under the 2011 final rule came from a spreadsheet with information on students who completed in 2007 and 2008. It had only about 3,700 programs with both debt-to-earnings and repayment rate data, and it didn’t include a count of the number of completers. The 2014 data, meanwhile, are based on separate reporting of programs with graduates in 2008 and 2009. That file has 7,934 programs total, including 5,539 with some debt-to-earnings data. This means the 2014 estimates are likely to be higher partly because the universe of programs is larger.

Interest rate changes 

How debt payments are calculated is the final major difference among the impact estimates across the three versions of the rule. The October 2014 version changed the assumed interest rate used in the debt-to-earnings rate. The interest rate for associate degree, certificate, and master’s degree programs is a three-year average of the Unsubsidized Stafford Loan rate, while bachelor’s, doctoral, and first professional degree programs use a six-year average. The Department will also use the undergraduate or graduate interest rate corresponding to the program’s level. The March 2014 proposal, by contrast, used just a six-year average of undergraduate unsubsidized rates.

This change could also affect the number of non-passing programs. If the three-year average were lower than the six-year one, it would lower the assumed debt payments for certificate and associate degree programs, allowing graduates with lower earnings to pass. For the existing data, however, the opposite is true: the three-year average for certificates and associate degrees works out to 6.8 percent, which is higher than the six-year figure of 5.42 percent. That change increases the amount of money graduates from shorter programs have to earn to pass. (UPDATED to better reflect actual interest rate movement)
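To see why the assumed rate matters, consider the standard loan amortization formula the debt payment is built on. This sketch uses an illustrative $10,000 debt and a 10-year repayment term for shorter programs (both my assumptions for illustration, not figures from the rule), comparing the 6.8 percent three-year average with the 5.42 percent six-year average:

```python
def annual_payment(principal, annual_rate, years):
    """Annual debt service on a standard amortizing loan with
    fixed monthly payments: P * r / (1 - (1 + r)^-n), times 12."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of monthly payments
    monthly = principal * r / (1 - (1 + r) ** -n)
    return 12 * monthly

# Illustrative $10,000 debt amortized over 10 years under the
# two averages discussed above; the higher rate produces a
# higher assumed annual payment, and thus a higher earnings
# bar for the same debt-to-earnings threshold.
high = annual_payment(10_000, 0.068, 10)
low = annual_payment(10_000, 0.0542, 10)
print(high > low)  # the 6.8 percent assumption demands more earnings
```

Roughly an extra 1.4 percentage points of assumed interest translates into a meaningfully higher annual payment on the same debt, which is why the averaging window matters.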

But in future years the reverse could be true. For example, interest rate changes made last year lowered the Unsubsidized Stafford Loan rate to 3.86 and then 4.66 percent for undergraduate students and 5.41 and then 6.21 percent for graduate students. Those lower rates will show up more quickly in the assumed payment calculations for shorter programs, lowering their debt payments and making it easier to pass. In any event, the effect is that the assumed interest rate for shorter programs will adjust to interest rate changes more quickly than that for longer programs.

Estimates versus reality

Of course, the problem with all of these numbers is that they are just estimates. And the Department has been off on the number of affected programs before. When it produced the final 2011 rule, it thought the rule would catch about 8 percent of programs. The actual data showed the figure was closer to 5 percent. Given that the last snapshot we have covers results for 2008 and 2009, it’s entirely possible that the ultimate failure numbers will also decrease. That wouldn’t be because of any relaxation of the standards; it would be because institutions got better. From that standpoint, what matters more are the standards programs have to manage toward, not just how many come up short.

More About the Authors

Ben Miller

Former Higher Education Research Director, Education Policy Program