Introduction
Every year, Congress invests nearly a billion dollars in TRIO programs intended to support college access and success for low-income students and students of color, filling a critical need for a population far less likely than their high-income peers to enroll in or graduate from college. But are the programs working? How do they promote student success? And are students of color and low-income students going to, and finishing, college because of these investments? Supporters would say yes; in fact, they have embraced the motto “TRIO Works.” They have pasted it onto buttons that they wear when they champion the programs on Capitol Hill, hashtagged it on social media, and printed it on merchandise.
But the truth is that we do not know how well the TRIO programs work, and will not know anytime soon. That is because, instead of using evidence and research to ensure that the programs are effective and are serving students well, the U.S. Department of Education is forbidden by law from undertaking the kind of rigorous evaluation that is needed to affirm their effectiveness.
The story behind that prohibition starts in 2007, when the Bush administration proposed eliminating TRIO’s Upward Bound program because a national evaluation of the program several years earlier had found disappointing results. The Council for Opportunity in Education (COE), which lobbies on behalf of TRIO grantees, launched an aggressive campaign that it dubbed “Operation Rolling Thunder” to convince Congress that it was “unethical, even immoral” to require the grantees to participate in a randomized controlled trial, in which not all students would be eligible for Upward Bound services.1 U.S. Department of Education officials fought back, saying that these types of evaluations are invaluable in showing policymakers whether the programs the agency runs are making the most of the money invested in them.2
Using incendiary language comparing the Upward Bound study to the infamous and horrifyingly unethical Tuskegee experiment,3 in which federal scientists withheld life-saving treatments from Black men suffering from syphilis to study the progress of the disease, COE won the day. Lawmakers added a rider to the Department’s annual appropriations bill that barred it from continuing to fund the ongoing national evaluation.4 But the lobbying group’s real victory came when Congress added a provision to the Higher Education Act in 2008 banning the Department from ever conducting randomized controlled trials of the program, or of any TRIO program, further protecting the status quo for existing grantees.5
Where there is a program tied to federal funding, there are, of course, stakeholders with significant investments in its future. But this saga shows how stakeholders who benefit financially from federal programs can make it difficult for policymakers to assess whether those programs are working effectively, potentially undermining lawmakers’ ability to ensure that students are truly the ones benefiting.
This is not just a political matter, and it is certainly not limited to the TRIO programs. For too long, lawmakers have forged higher education policy without knowing what works and what does not. Much of the federal money invested in support for institutions and other grantees is spent on efforts rooted in little more than a broad, vague goal of supporting student success, whether or not those efforts actually achieve it. Across the Higher Education Act, from TRIO and GEAR UP to funds dedicated to minority-serving institutions to efforts to increase innovation, policymakers have rarely asked—and researchers have never answered—the question of how best to improve outcomes for low-income students.
At a time when too few students are graduating from college and reaping the financial benefits that a higher education bestows, even as growing numbers of them take on debt to enroll, it is essential that lawmakers engage in evidence-based policymaking. Members of Congress must support greater data transparency, research, and evaluation so that they can finally start to close the gap between what we know improves students’ chances of success and what does not, and so we know which students need help, and in which ways. Billions of taxpayer dollars and millions of students’ futures are on the line.
A concerted national effort to improve college outcomes is needed. It will require political will and a rethinking of the federal government’s higher education programs so that they center on finding and expanding the strategies that work for students, rather than treating evidence as an afterthought. This report seeks to explain the impetus for change, as well as identify solutions for policymakers in the Department of Education and on Capitol Hill.
The Need to Improve College Outcomes
Earning a college degree gives students of all backgrounds the best chance of finding a job, avoiding unemployment, and increasing their lifetime earnings considerably.6 But the American higher education system is often little more than a sieve, sifting out many of the students who do enroll before they earn a degree, especially the most vulnerable among them.
Low-income students are less likely to go to college, and if they do enroll, are less likely to go to the colleges that will give them the greatest chance of success. According to the Equality of Opportunity Project, children who come from the wealthiest 1 percent of families are 77 times more likely to attend an Ivy League college than children who come from the poorest 20 percent of families.7 In fact, other research has shown that low-income and underserved students are more likely to enroll in programs whose labor-market returns are too low to justify their cost.8 For-profit colleges disproportionately enroll low-income and Black students, while charging more and reporting lower earnings than public-college programs in the same field of study.9 And across sectors, the Center on Education and Skills at New America has found significant gender gaps in both labor-market outcomes and the costs of training programs.10
Moreover, once they enroll in higher education, too many students drop out, leaving school empty-handed and with debt loads they cannot afford to pay back.11 Low-income students and students of color are also less likely to persist and graduate with a degree. For instance, nearly two out of three associate and bachelor’s degrees are awarded to white students, according to a Department of Education report.12 Even at community colleges, where higher education programs are most affordable, fewer than one in three students graduates within one-and-a-half times the length of the program.13
There are bright spots, as some colleges have tackled these problems systematically, using a data-driven and evidence-informed approach, with demonstrated results. The City University of New York (CUNY) in 2007 instituted the Accelerated Study in Associate Programs (ASAP) intervention, a comprehensive set of supports that include financial assistance, advising and career counseling, tutoring, training in study skills, and reformed remedial courses.14 A rigorous evaluation of the program by MDRC found that it increased both course taking and retention. Within three years, ASAP nearly doubled graduation rates, from a shockingly low 22 percent to 40 percent, and increased transfer rates to a four-year school from 17 percent to 25 percent.15 The program was scaled up to include community colleges in Ohio, with a rigorous evaluation showing similar, positive results.16
Other interventions that have been rigorously studied have also shown promising results.17 Scholarships tied to academic goals—like maintaining a C average—increased college graduation rates at two community colleges in Ohio by 21 percent after two years, and increased the number of credits students earned in pilot programs conducted across six states.18 Informational outreach via text message has been found to increase retention rates by nearly 14 percent.19 And colleges that offered students coaching on how to juggle work, school, and life, and how to be successful in college, found that those students were over 5 percent more likely to stay enrolled six months and even a year later, and 4 percent more likely to graduate from their programs—a lasting effect that few other interventions have matched.20
But the research on what works to improve students’ odds of success in higher education remains limited. Only a handful of organizations and researchers conduct such evaluations at all. Once studies have been conducted, the findings reach practitioners at colleges and universities in a limited and haphazard way, with no explicit dissemination effort by the Education Department. Nor has the federal government coordinated an effort to close these research gaps or to tackle the challenge of raising college completion rates.
Moreover, even when promising interventions have been identified, colleges have few incentives to apply that research. Many of these interventions require significant upfront costs, and the benefits and savings to campuses, which are often realized through increased retention and graduation rates, may take a long time to achieve. Federal policy requires only that colleges avoid extremely high default rates and comply with federal rules and regulations, few of which are based on outcomes. Federal dollars to support improvement and quality assurance are limited. States with performance-based funding models may starve the institutions most in need of improvement of the funding they might need to implement new practices. As a result, practitioners often lack the resources—or incentive—they need to implement such practices.
Still, the time is ripe for a renewal of higher education’s commitment to serving students, particularly those most at risk of dropping out of college. While the public continues to have faith in local colleges that serve their communities, its trust in higher education as a whole is at historic lows.21 Practitioners in the field are searching for new and smarter ways to prove the value of higher education.
Meanwhile, there is growing interest on Capitol Hill and in the states in building improvement and evaluation into policy. In 2016, Sen. Patty Murray (D-WA) and former Rep. Paul Ryan (R-WI) convened the Commission on Evidence-Based Policymaking. The commission’s landmark 2017 report laid out dozens of strategies for federal agencies, some of which were signed into law earlier this year.22 Recent legislation, the Fund for Innovation and Success in Higher Education (FINISH) Act introduced by Sens. Young (R-IN), Bennet (D-CO), and Scott (R-SC), would invest in evidence-based practices and expand the Education Department’s evaluation authority.23 And then-chair of the House education committee Virginia Foxx introduced the PROSPER Act, a comprehensive bill to reauthorize the Higher Education Act that would require the Department to rigorously evaluate projects when it grants regulatory flexibility to colleges,24 eliminate the ban on evaluating TRIO grantees’ efforts and dedicate 10 percent of the programs’ funding to an evidence-based competition, and expand the Department’s authority to conduct “pay-for-success” projects to innovate and test new ideas within its existing programs.25
These lawmakers’ embrace of evidence-based policymaking is encouraging and long overdue. And if they follow through, using research not to cut spending but to reinvest resources in what works, we could see significant advances in achieving greater student success.
Citations
- Kelly Field, “Are the Right Students ‘Upward Bound?’” Chronicle of Higher Education, August 17, 2007, source; and Doug Lederman, “Little Change for Upward Bound,” Inside Higher Ed, September 25, 2006, source.
- Kelly Field, “Are the Right Students ‘Upward Bound?’” Chronicle of Higher Education, August 17, 2007, source.
- Ibid.
- Kelly Field, “Education Department Agrees to End Controversial Upward-Bound Study,” Chronicle of Higher Education, February 25, 2008, source; Kelly Field, “Senate Approves Measure Blocking Evaluation of Upward Bound,” Chronicle of Higher Education, October 19, 2007, source; and Kelly Field, “Are the Right Students ‘Upward Bound?’” Chronicle of Higher Education, August 17, 2007, source.
- Higher Education Opportunity Act, §428H, enacted August 14, 2008.
- Jennifer Ma, Matea Pender, and Meredith Welch, Education Pays 2016: The Benefits of Higher Education for Individuals and Society (New York: The College Board, 2016), source.
- Raj Chetty, John N. Friedman, Emmanuel Saez, Nicholas Turner, and Danny Yagan, “Mobility Report Cards: The Role of Colleges in Intergenerational Mobility,” Opportunity Insights, July 2017, source. The Equality of Opportunity Project is now called Opportunity Insights.
- Ibid.
- “The State of For-Profit Colleges,” Center for Responsible Lending (website), January 29, 2019, source; and U.S. Department of Education, “Fact Sheet: Department of Education Announces Release of New Program-Level Gainful Employment Data,” November 16, 2017, source.
- Lul Tesfai, Kim Dancy, and Mary Alice McCarthy, Paying More and Getting Less: How Nondegree Credentials Reflect Labor Market Inequality Between Men and Women (Washington, DC: New America, September 2018), source.
- Robert Kelchen, “A Look at Pell Grant Recipients’ Graduation Rates,” Brookings (website), October 25, 2017, source; and Jean Johnson, Jon Rochkind, Amber N. Ott, and Samantha DuPont, With Their Whole Lives Ahead of Them: Myths and Realities About Why So Many Students Fail to Finish College (Brooklyn, NY: Public Agenda, December 9, 2009), source.
- Advancing Diversity and Inclusion in Higher Education (Washington, DC: U.S. Department of Education, November 2016), source.
- U.S. Department of Education, National Center for Education Statistics, “Table 362.20: Graduation Rate from First Institution Attended within 150 Percent of Normal Time,” prepared December 2017, source.
- City University of New York (website), “About: What is ASAP?” source.
- Susan Scrivener, Michael J. Weiss, Alyssa Ratledge, Timothy Rudd, Colleen Sommo, and Hannah Fresques, Doubling Graduation Rates: Three-Year Effects of CUNY’s Accelerated Study in Associate Programs (ASAP) for Developmental Education Students (New York: MDRC, February 2015), source.
- MDRC, “Ohio Programs Based on CUNY’s Accelerated Study in Associate Programs (ASAP) More than Double Graduation Rates,” press release, December 2018, source.
- James Kvaal and John Bridgeland, Moneyball for Higher Education: How Federal Leaders Can Use Data and Evidence to Improve Student Outcomes (Washington, DC: Results for America, January 2018), source.
- Reshma Patel, Lashawn Richburg-Hayes, Elijah de la Campa, and Timothy Rudd, “Performance-Based Scholarships: What Have We Learned? Interim Findings from the PBS Demonstration,” MDRC policy brief, August 2013, source.
- Benjamin L. Castleman and Lindsay C. Page, “Freshman Year Financial Aid Nudges: An Experiment to Increase FAFSA Renewal and College Persistence,” Journal of Human Resources 51, no. 2 (June 2014): 389–415, source.
- Eric P. Bettinger and Rachel B. Baker, “The Effects of Student Coaching: An Evaluation of a Randomized Experiment in Student Advising,” Educational Evaluation and Policy Analysis 36, no. 1 (March 2014): 3–19, source.
- Eric Kelderman, “‘Higher Education’ Isn’t So Popular, Poll Finds, but Local Colleges Get Lots of Love,” Chronicle of Higher Education, May 21, 2018, source. For more on the issue of trust in higher education, see Rachel Fishman, Ernest Ezeugo, and Sophie Nguyen, Varying Degrees 2018: New America’s Annual Survey on Higher Education (Washington, DC: New America, May 2018), source.
- The Promise of Evidence-Based Policymaking (Washington, DC: Commission on Evidence-Based Policymaking, September 7, 2017), source; and Foundations for Evidence-Based Policymaking Act of 2017, P.L. 115-435, enacted January 14, 2019.
- The Fund for Innovation and Success in Higher Education (FINISH) Act, S. 1059.
- For more on the need for evaluation in federal experiments, see Clare McCann, Amy Laitinen, and Andrew Feldman, Putting the Experiment Back in the Experimental Sites Initiative (Washington, DC: New America, January 23, 2018), source.
- The Promoting Real Opportunity, Success, and Prosperity through Education Reform (PROSPER) Act, H.R. 4508, introduced in the 115th Congress.