Are There Truly No Differences in Teacher Preparation Program Quality?

Blog Post
Aug. 31, 2015

NOTE: New America’s recommendations for HEA Title II data collection and reporting have evolved over time, particularly with regard to connecting the data reported to specific consequences. For example, we no longer recommend using the results of HEA Title II data reports to determine a preparation program’s eligibility to offer TEACH grants.

For New America’s most up-to-date recommendations on HEA Title II, please see our newest brief on the topic, or reach out to tooley@newamerica.org.

September is just around the corner, which is when the U.S. Department of Education (ED) is slated to release its final regulations for Title II of the Higher Education Act (HEA) governing teacher preparation. As released for public comment, the proposed regulations included a few highly contentious requirements, as Stephen Sawchuk of Education Week has documented. One of these requirements is that states rate teacher preparation programs (TPPs) and hold them accountable based in part on their graduates' performance in the classroom, which must include a measure of students' learning growth. Recently, new research on TPP quality in Missouri found no substantive differences in the effect of attending a given program on graduates' performance, as measured by graduates' impact on student achievement. In light of the Missouri study, can we expect the new regulations to have any impact on teacher preparation program quality, or will they only lead to wasted time and effort?

At first blush, the Missouri research findings may seem like a cautionary tale: if there are no real differences in teacher preparation program quality, then new regulations could require states and institutions of higher education to go through the work of collecting data and assigning ratings for no good reason. But here are three important caveats to the study's findings:

  1. The study investigated differences only at the university level, not at the level that ED's proposed regulations would require: the specific teacher preparation programs within a university's school of education (e.g., the elementary education program versus the secondary mathematics program). Lumping all of an institution's programs together may obscure differences that actually exist, since a strong program in one area could mask a weak program in another.
  2. The authors (Koedel, Parson, Podgursky, and Ehlert, all in the economics department at the University of Missouri) only looked at “traditional” preparation programs in public institutions in the state of Missouri and at graduates who ended up teaching in elementary schools. In a footnote, the researchers explain how their “focus on traditional programs and on teachers moving into elementary schools reduces within-institution heterogeneity,” but they fail to acknowledge that it could also reduce the performance variation they found between institutions. And, while current Title II regulations only require states to assess the performance of “traditional” preparation programs, ED's proposed rules would expand this to include alternative preparation programs as well (e.g., Teach for America).
  3. This research assessed program “quality” solely via a measure of teacher impact on student achievement, whereas ED's proposed regulations would allow states to determine and use multiple measures for this purpose, although student learning growth must be factored in.

Additionally, as the authors of the Missouri study highlight, the absence of strong federal reporting and accountability requirements to date has given states, districts, and teacher preparation programs “little incentive to innovate and improve,” which could explain why the study found so little differentiation in program quality. In fact, a recent report from the U.S. Government Accountability Office (GAO) found that seven states had no process in place to report on low-performing TPPs, despite the current requirement to do so under Title II of HEA. And while the other states did have assessment processes in place, most were not particularly rigorous. For example, many assessed programs' alignment with their state's teaching standards (primarily via reviews of syllabi and course materials, along with interviews of TPP staff), but fewer than 10 used teacher evaluation or student assessment data.

(As an aside, many teacher preparation programs have expressed outrage at the National Council on Teacher Quality (NCTQ) for rating teacher preparation programs based largely on reviews of syllabi and course materials, a method they have deemed unfair. Sharon Robinson, head of the American Association of Colleges for Teacher Education, described NCTQ's ratings as "little more than a document review—hardly adequate evidence to judge graduates’ readiness to teach." This calls into question why preparation programs are pushing to keep states' preparation program rating systems unchanged at the same time that they protest NCTQ's rating methods. A good guess: states are lenient, and NCTQ is not.)

Given GAO’s findings that states’ TPP quality assessment processes are light and loose, and that ED does not monitor how these processes are carried out, it should come as no surprise that states rarely identify any programs as low-performing: from 2013 through 2014, GAO found that only six states identified at least one program as low-performing, and only 13 identified at least one as at risk of becoming low-performing. Currently, ED can only require states to assess and report quality for entire education schools at institutions of higher education, rather than for the individual preparation programs within them, as the proposed regulations would require. That limitation likely exacerbates the problem, which persists despite the widespread understanding that most new teachers end up feeling woefully underprepared during their first years on the job.


Unfortunately, little evidence exists to suggest that states and institutions of higher education will work to improve the quality of their teacher preparation programs without stronger federal oversight and intervention. Still, tougher federal regulations and oversight alone won’t lead to improvement among teacher preparation programs within and across states. To improve the teacher preparation landscape, state policymakers, districts, and preparation program providers must also play their part to ensure quality in the field. Look for our upcoming post delving into strategies these stakeholders can employ to do just that.