Jan. 23, 2018
Read the new report, Putting the Experiment Back in the Experimental Sites Initiative, by Amy Laitinen and Clare McCann of New America and Andrew Feldman of the Evidence-Based Policymaking Collaborative.
A little-noticed provision in the recently released PROSPER Act--a Higher Education Act reauthorization proposal from House Education and the Workforce Committee Chair Virginia Foxx (R-NC)--attempted to solve a long-running flaw at the Department of Education. For years, the Department has launched, run, and wound down experiments within the federal financial aid program without ever learning whether any of those experiments worked. Today, New America (with Andy Feldman of the Evidence-Based Policymaking Collaborative) is releasing a report that looks at the history of that program and explains how the Administration, Congress, and colleges can all help to fix it.
The Experimental Sites Initiative’s earliest iteration was narrowly focused on verifying the data submitted by financial aid applicants. Known at the time as the Quality Assurance program, it was included by Congress in the 1992 reauthorization to address the still-thorny issue of how to ensure financial aid applicants were submitting accurate data without putting too many obstacles in the way of low-income students. But in the years that followed, the Department used its newfound authority to grant waivers liberally, offering colleges exemptions from rules far beyond verification and requiring little in return.
Congress significantly tightened the reins in response. In 1998, the Congressional Research Service wrote that “there [was] substantial sentiment in the committee for the elimination of experimental site provisions altogether, given the history of the implementation...and given that none of these experiments has yet yielded results.” Instead, lawmakers added an experimental authority separate from verification, narrowed what the Department could waive, required Congressional approval to launch each experiment, and mandated that the Department evaluate and report back on the findings from each one.
Rather than meet that new, higher bar, the Department apparently opted to avoid experiments altogether for a few years. It launched no new experiments for nearly a decade--until finally, in 2008, lawmakers loosened the reins and permitted the Department to launch new experiments without Congressional approval and without mandated evaluations on the back end. The Department had something of a renaissance with its new freedom. In total, the Obama Administration launched 17 new experiments, from permitting colleges to lower loan limits for categories of their students to waiving the ban on Pell Grants going to incarcerated individuals. Of those, just two (providing Pell Grants for students with bachelor’s degrees to attend career education programs and for students to enroll in short-term training programs) are being rigorously evaluated--and though a report is expected next year, small sample sizes will probably limit the findings. A third was designed to be tested with a randomized controlled trial, but the Department has yet to commit funding to its evaluation.
The Trump Administration can make a new start with the Experimental Sites Initiative by reviving its original mission as a strategy to innovate and learn what works in higher education. We offer several recommendations for how the Education Department can responsibly conduct future experiments:
Use the experimental sites authority to test and evaluate new federal financial aid policies;
Identify important policy questions and then develop evaluation plans that can answer them;
Consult a wide range of stakeholders in designing new experiments;
Clarify upfront with institutions what data are needed;
Publish reports on ongoing experiments at least biennially, as required by law;
Ensure ongoing input from colleges; and
Collaborate across White House offices and the Department.
Meanwhile, as lawmakers begin to tackle a reauthorization of the Higher Education Act, Congress must find a workable solution to this persistent problem. To do so, we recommend that Congress:
Require that all experiments be evaluated using an approved methodology;
Provide a dedicated funding stream for evaluation;
Revise the Paperwork Reduction Act to incorporate exceptions for rigorous evaluation; and
Insist on biennial reports and policy recommendations.
Finally, these experiments will be carried out on the ground, with real students at real colleges. Any results from the experiments--evaluations and data analyses that identify areas of promise and possible problems, or even changes to the law and regulations--depend on how faithfully schools conduct those experiments. To that end, we recommend that colleges and universities:
Provide ideas on future potential experiments; and
Contribute to the success of the evaluations.