A recent report by UC Berkeley economics professor Christopher Walters uncovers specific factors that might explain many of the disparities between Head Start programs across the country. In the study, Walters used Head Start Impact Study data to determine what exactly is driving these differences in center effectiveness. He analyzed how various inputs, child characteristics, and center practices—over many of which center directors maintain discretion—might explain why some children experience sustained gains and others do not.
It turns out that the length of the program day and year, along with home visiting, are among the strongest factors affecting center effectiveness. Offering full-day services had a particularly significant impact on children's cognitive skills. Full-day pre-K programs offer more time for high-quality interactions between adults and children than half-day programs, and research suggests that these interactions are essential to children's social-emotional and academic development. In fact, NIEER's randomized trial examining pre-K dosage found that "the added hours of preschool education were substantially effective at closing the achievement gap." Thus, it's not surprising that children attending full-day Head Start programs would experience greater cognitive gains than those attending for only half the time.
The benefits of home visiting for families with young children are also well documented. Home visiting, which is rarely used in private or state pre-K programs, is a key component of Head Start’s “whole child” approach. Home visiting fosters relationships between families and teachers, engages parents in their child’s education, and gives teachers insight into children’s home lives. While all Head Start programs must conduct home visits, Walters found that centers offering more than three home visits per year were particularly effective at improving skills such as making friends easily, enjoying learning, being able to concentrate, and using self-control. Unfortunately, only 20 percent of centers in the Head Start Impact Study offered frequent home visits.
Other factors often thought to increase center effectiveness had no measurable impact, at least in Walters' analysis. Past research has suggested that higher levels of teacher education, smaller class sizes, more experienced center directors, and use of the High/Scope curriculum are all associated with pre-K program effectiveness. But none of these significantly affected child outcomes in Head Start according to Walters' analysis. The 2007 Head Start reauthorization raised teacher education requirements in an effort to improve teacher quality, but Walters' findings suggest that more teacher education alone is not the answer to improving the program. His findings also raise questions about whether the well-respected High/Scope curriculum, often believed to be "central to the success of the Perry Preschool Project," is as important as researchers believe.
Walters also found that Head Start had the greatest impact on children with less educated mothers, suggesting that more disadvantaged children benefit more from program participation. Yet even though Walters examined a broad array of inputs, practices, and characteristics of Head Start programs, his analysis explains only about a third of the variation in program effectiveness. That means there is much more research to be done on the other factors Head Start centers can affect—and perhaps on the factors they can't change—to help determine the most essential quality-improvement activities for programs.
Policymakers have a vested interest in ensuring the federally funded Head Start program appropriately serves low-income children and their families. After all, the program's $8 billion price tag and more than 900,000 children served annually are not trivial. It's clear that some Head Start centers have figured out how to provide quality services, and while there likely isn't one guaranteed method that ensures center effectiveness, there are certainly lessons to be learned from these high-quality programs. As researchers continue to dive into the Head Start Impact Study data to determine "what works," policymakers at all levels, not to mention program directors, can and should take these findings into account.