Teacher Prep Says the Profession Is Holding Itself Accountable – But Is It?

Research has shown that many new teachers enter the profession underprepared to succeed in the classroom, despite having invested in formal preparation. As a result, many become frustrated and cut their careers short, while the students they teach fall behind.

Teacher performance assessments—standardized models for assessing pre-service teachers’ readiness to teach—have been touted by some in the educator preparation field as the solution to this problem, with edTPA being the most widely heralded of these. Now the summary report of the 2013 field test of edTPA is available, along with recommendations on how states can use edTPA to set entry standards for the teaching profession. At the official edTPA roll-out event last Friday, representatives of the American Association of Colleges for Teacher Education (AACTE) indicated that the development and implementation of edTPA are a sign that, for the first time, the “profession is holding themselves accountable.” But the recommendations the field has issued for using edTPA scores reveal a hesitancy to truly embrace strong accountability.

Stanford University’s Center for Assessment, Learning, and Equity (SCALE) led the edTPA development process, which was supported by AACTE. edTPA is designed to offer formative feedback to teacher candidates and their preparation programs throughout their training, and to provide a summative assessment of candidates’ readiness to lead a classroom. SCALE’s 2013 field test report claims that edTPA is also designed “to assure the public that preparation programs are accountable for candidate performance.”

edTPA is customized for 27 different education disciplines, from early childhood to technology & engineering, but shares a common architecture focused on candidates’ skills in planning, instruction, and assessment. Trained and certified educators with pedagogical and subject knowledge in each discipline score the summative assessments. For most disciplines, the summative score is based on 15 rubrics, each on a five-point scale (minimum score of 15, maximum score of 75). A level “3” on the scale is intended to represent “the knowledge and skills of a candidate who is ready to teach.”
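The score ranges follow directly from that rubric arithmetic. A minimal sketch, assuming only what the report describes (15 rubrics, each scored 1–5, with “3” marking readiness); the `summative_score` helper and the example score profile are illustrative, not taken from SCALE’s materials:

```python
# Illustrative sketch of edTPA summative scoring as described in the
# field test report: 15 rubrics, each on a 1-5 scale, where a "3"
# represents a candidate who is ready to teach.

NUM_RUBRICS = 15
MIN_RUBRIC, MAX_RUBRIC, READY_RUBRIC = 1, 5, 3

min_summative = NUM_RUBRICS * MIN_RUBRIC      # 15: lowest possible score
max_summative = NUM_RUBRICS * MAX_RUBRIC      # 75: highest possible score
ready_summative = NUM_RUBRICS * READY_RUBRIC  # 45: a "3" on every rubric

def summative_score(rubric_scores):
    """Sum per-rubric scores (hypothetical helper) into a summative score."""
    assert len(rubric_scores) == NUM_RUBRICS
    assert all(MIN_RUBRIC <= s <= MAX_RUBRIC for s in rubric_scores)
    return sum(rubric_scores)

# A hypothetical candidate scoring "3" on 13 rubrics but "2" and "1"
# on the remaining two still totals 42 -- below the 45 that a "3" on
# every rubric would produce.
print(summative_score([3] * 13 + [2, 1]))  # 42
```

This is just the arithmetic behind the numbers discussed below: a candidate can fall short of the “ready to teach” level on some rubrics and still reach a summative score in the low 40s.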

Data from the field test report show a wide range in teacher candidates’ summative scores, with a median score of 43. As field test participation was voluntary and without consequences, these results may not be generalizable to a “fully operational” system. But they do highlight two initial findings:

  1. The summative assessment appears to be effective at differentiating prospective teachers’ level of skill and/or effort, and
  2. If edTPA’s rubrics and summative measure are valid and reliable measures of candidates’ classroom readiness—as represented in SCALE’s report—and a score of at least “3” on each rubric would yield a summative score of 45, then many teachers are indeed graduating from preparation programs insufficiently ready to teach.

SCALE’s report recommends that states use the summative edTPA score to set a standard for performance that all prospective teachers must meet to enter the profession. (Six states—Hawaii, Minnesota, New York, Tennessee, Washington, and Wisconsin—already plan to require edTPA for program completion, state licensure, and/or state program accreditation over the next several years.) But the maximum recommended cut score—a 42—requires only that teachers meet expectations on 13 of 15 rubrics. At Friday’s event, AACTE President and CEO Sharon Robinson explained that 42 was a reasonable maximum because of the assessment’s scoring error margins, but she also suggested that states may consider setting initial cut scores lower than 42 while programs ramp up.

Recommending a maximum score—but not a minimum—and then advocating for an even lower one amounts to saying, “Although we’ve determined that many of our candidates aren’t ready to teach, we should keep sending them out to teach students anyway.” That seems more like an attempt to shrink from accountability than to embrace it. And while implementation of any new initiative takes time to get right, a better solution would be to follow the lead of several states and encourage or require programs to administer the assessment for at least one year before stakes are attached to results.

At this point, the teacher preparation field appears to be trying to meet preparation programs and candidates where they currently are (perhaps because even with a cut score of 40, only two-thirds of field test candidates would have passed), instead of pushing programs and candidates to meet the level of quality that their own field has deemed necessary. So while edTPA holds some promise for better aligning teacher preparation with the skills teachers need to be successful, states must also hold programs accountable in other ways, such as assessing whether programs’ graduates are effective once in the classroom—an area edTPA is unable to measure.


Melissa Tooley is the director of Educator Quality with New America's Education Policy program. She is a member of the PreK-12 team, where she provides research and analysis on PreK-12 policies and practices that impact teaching quality and school leadership.