Inconsistencies in Scores for i3’s Early Learning Winners

Blog Post
Aug. 10, 2010

Of the winners to receive the most money in last week’s Investing in Innovation (i3) awards, three promoted “early learning” as one of their priorities. But an analysis of their scores shows that their stated intentions may not line up with what the U.S. Department of Education was looking for. In fact, the scoring itself raises many questions about the reviewers’ understanding of how to evaluate an early learning plan. 

Our analysis started with two questions: What types of projects did the U.S. Department of Education officials have in mind when they included early learning as a competitive priority for the i3 competition? And, more importantly, did i3 reviewers receive adequate instructions on how to do the scoring -- and if so, did they follow them?

The first question is easy to answer, as the requirements were included in the application.

According to the i3 application, reviewers were supposed to give one “competitive priority” point to applicants “that would implement innovative practices, strategies, or programs that are designed to improve educational outcomes for high-need students who are young children (birth through 3rd grade) by enhancing the quality of early learning programs.” 

The applications had to focus on all three of the following (see slide 20 from the Department of Education's PowerPoint for scale-up reviewers):

  1. improving young children’s school readiness (including social, emotional, and cognitive readiness) so that children are prepared for success in core academic subjects;
  2. improving developmental milestones and standards and aligning them with appropriate outcome measures; and
  3. improving alignment, collaboration and transitions between early learning programs that serve children from birth to age three, in preschools and in kindergarten through third grade.

The two key words in the instructions are "would" and "and." "Would" implies that officials are more interested in plans for the future – what would the applicant do with i3 funding to enhance its early learning initiatives? The word "and," of course, means all three areas must be addressed. It does not say "or."

As Early Ed Watch has reported, 13 of the highest-rated applicants said that they were focusing on "early learning" as one of their priorities. So far, the department has made public the applications of those who won "scale-up" grants – grants of up to $50 million to scale up programs with strong evidence of effectiveness. The three with an early learning focus are the Knowledge is Power Program (KIPP), the Success for All Foundation, and Teach for America. (All three have to secure matching funds by September 8 to actually receive the federal award.)

Yet the highest-rated applicant – Success for All – discussed only what it is currently doing, not what it would do with i3 funds in the area of early learning. KIPP discussed a somewhat early-learning-focused project, but did not address all three of the focus areas. In fact, the only one that talked about future efforts and addressed all three areas was Teach for America. Early Ed Watch reviewed the competitive priority section of each proposal, and here are the scores we would give.

| Required Focus | KIPP | Success for All | Teach for America |
| --- | --- | --- | --- |
| School Readiness | No credit | No credit | Yes |
| Milestones and Standards | No credit | No credit | Yes |
| Alignment, Collaboration and Transitions | Yes | No credit | Yes |
| Final Score | 0 | 0 | 1 |

But just because applicants said they deserved that extra point doesn't mean they should automatically earn it. That's where the reviewers come in. We don't know who they are – the department hasn't released any names – but according to the information about the review process, they are supposed to be experts prepared to judge which applicants adequately met the competitive criteria.

(By the way, not all reviewers scored the entire proposal. Each scale-up proposal had five reviewers in all: three who scored the competitive priorities and two who scored the research and evaluation components. The process was slightly different for the validation and development proposals. For more information on the review process, click here.)

So let's see how the reviewers actually scored the early learning priority in these proposals.

|  | KIPP | Success for All | Teach for America |
| --- | --- | --- | --- |
| Reader 1 | Did not score | Yes | Did not score |
| Reader 2 | No credit | Did not score | No credit |
| Reader 3 | No credit | Did not score | Yes |
| Reader 4 | Did not score | Yes | Did not score |
| Reader 5 | No credit | Yes | No credit |

(Note: The five reviewers were not necessarily the same for each of these proposals.)

Where are the final scores for the competitive point? We could not tally them because we could not find a final breakdown on the Department of Education's website. Michele McNeil, who writes the Politics K-12 blog, has already pointed out the challenges in deciphering the scores. We agree. There's no page – at least none that we could find – showing the synthesis of the readers' scores.

This omission raises the question: did these applicants actually earn that competitive point or not? Until we know, we cannot say for sure whether these early learning projects were deemed acceptable in the final review.

Even the readers' individual scores aren't what Early Ed Watch expected to see.

Actually, the fact that KIPP received no credit from any reviewer does not surprise us. The charter school network’s application said it would use grant funds to support principal development for 35 to 50 new primary schools. So far, according to KIPP’s website, only one of eight new schools in the coming year will include pre-k.

Teach for America (TFA) provided a clear and detailed discussion of what it plans to do to improve and expand its early childhood education (ECE) initiative, which launched in 2006. The application spoke to each of the three requirements under the "early learning" priority. TFA also included its ECE initiative in its evaluation component, the part of the application that describes how it will measure the grant's impact. "The impact analysis will focus on grades pre-k through five for several reasons," the application says. "First, there is limited research on the effectiveness of Teach for America at the pre-k level."

Still, only one of the reviewers awarded TFA the competitive point. This didn't make sense to us, so we checked the reviewers' comments. Here's what reviewer #5 had to say: "The applicant did not adequately address this competitive preference as the focus of the TFA model is to develop K-12 teachers for high needs students." Did this reviewer even read the proposal? On the other hand, reviewer #3 (the one who awarded the point) said this: "Pages e79 to e81 articulate a clear picture of steps TFA has addressed to meet this preference." Reader #3 got it right. Reader #5 was out to lunch.

Then there is the Success for All (SFA) program, which according to its application has documented evidence of effectiveness in elementary schools. And while Success for All does have a preschool program, Curiosity Corner, that has been studied previously (see the What Works Clearinghouse Intervention Report), it is not part of SFA's proposed work. SFA plans to follow students from kindergarten through the early grades to study the literacy program's impact on student achievement in low-achieving schools. Early Ed Watch would like to have seen SFA include pre-k, both to evaluate the impact of SFA's elementary program on children in the early grades and to determine whether the Curiosity Corner program has a positive effect on literacy as children transition from pre-k to kindergarten. Since the proposal didn't include pre-k, however, SFA should not have been awarded the competitive point.

All of this goes to the second question that propelled our analysis: Did the reviewers know how to score the early learning priorities? What instructions were they given?

Were the instructions so ambiguous that they led to the inconsistencies? Did the reviewers suffer from a lack of knowledge about what makes a quality early learning proposal? We just don't know.

We also have no way of knowing whether this one competitive point, if granted, was a deciding factor in whether these proposals made the cut. The applications and scores for the losing applicants are not yet public.

Regardless, the lack of consistency in scoring raises doubts about the fairness of the review process. What else did the judges overlook? Were the reviewers actually content experts with deep knowledge of the competitive priorities? What if proposals that better addressed the early learning priority – or other competitive priorities – were passed over? And, more generally, what about the sections of the proposals that weren't related to early learning: was anything essential missed there? We'd like to hear your concerns too!

The Department of Education has not yet posted the narratives for other applicants with high ratings – those who submitted applications for "validation" or "development" grants. We will be sure to comment on those when they are up. Additionally, Early Ed Watch is interested in reading the narratives, and the reviewers' comments, for the seven proposals we highlighted last month and the other preK-3rd projects that did not receive awards. That information may take a little longer to surface, so keep checking back.