Assessing the Kindergarten Readiness of DLLs
In addition to gaps in tracking the enrollment of DLLs and rating the quality of services for these learners, there is a lack of meaningful assessment data to validly capture the full range of DLLs’ development in ECE.
Age-appropriate testing of students’ proficiencies can serve many purposes in ECE, including formative assessment for instruction, screening for special needs, or program-wide research or evaluations.1 State policy leaders are increasingly focused on student outcomes through more standardized assessment data, collected and aggregated at the systems level, to inform decision-making and the allocation of ECE resources.
Kindergarten readiness assessments (KRAs), in particular, have gained traction as a strategy to provide educators, families, and district and state leaders with more standardized data on the status of children’s abilities when they enter kindergarten—a “snapshot on development,” according to the BUILD Initiative.2 KRAs are intended to both support instruction in the early elementary years and provide information that can help policy leaders support school readiness, not to prevent children from enrolling in school.3
More than 40 states are currently developing or implementing KRAs, up from just seven in 2010.4 The Obama administration promoted states’ adoption of KRAs through the federal Race to the Top – Early Learning Challenge, a discretionary grant program launched in 2011. In part, the grant program encouraged states to measure children’s outcomes in a range of developmental domains in tandem with “implementing comprehensive data systems and using data to improve instruction, practices, services, and policies.”5 At least 25 states now mandate KRA use by state law.6
States are rolling out their KRA systems in a variety of ways. Some use a commercial assessment tool, such as Teaching Strategies GOLD®, while others created their own state tool or participated in one of three interstate consortia supported by federal grants.7 The tools can involve direct assessment (requiring a direct interaction between the test administrator and the child), observation of the child in authentic classroom activities, or a combination of these two approaches.8
KRAs have weathered a fair amount of concern and pushback in their roll-out. Some teachers experienced the testing as an extra burden.9 Researchers cautioned against drawing inferences from a one-time assessment of young children when development is in great flux. As a National Education Goals Panel report asserted, “the younger the child, the more difficult it is to obtain reliable and valid assessment data. It is particularly difficult to assess children’s cognitive abilities accurately before age 6.”10 Due to questions of validity and reliability, policy experts have also stressed that KRAs should not be used punitively as an accountability measure for ECE providers.11
These broader concerns over KRAs have implications for all children—including DLLs. But states also need to specifically consider how to incorporate DLLs in KRAs as a special population. The development of bilingual children looks fundamentally different from that of their monolingual peers, given that DLLs’ knowledge and skills are spread across two languages. For example, research suggests that DLLs have smaller vocabularies in English and their home language when each is measured separately, but their total vocabulary sizes—the sum of what they know in both languages—are similar to those of monolingual peers.12 As such, it is critical that young DLLs have an opportunity to show what they know and can do in their home language.
And yet, at present, almost all state KRAs assess children only in English. “If we want equitable assessments for DLLs, we have to assess across both languages,” said DLL expert Linda Espinosa, who works with states and districts across the country and served on the National Technical Advisory Committee for KRAs.13 She said that nearly all states are failing “to take on the challenge of creating truly equivalent [bilingual] forms of these test items,” a complicated, costly endeavor in the context of tight state budgets.14 According to the Migration Policy Institute, New Jersey, Oregon, and Texas have developed KRAs entirely in Spanish while Illinois and Washington state allow DLLs to be assessed in their home language for some test items.15 Bilingual KRA testing also goes hand-in-hand with a need for more bilingual, bicultural assessors, which requires additional investments.
While most states have not attempted bilingual KRA assessments at all, even those pursuing them tend to use direct translation of the English version, Espinosa said. This method can produce tests that are psychometrically unsound (e.g., not normed or validated on DLLs), irrelevant in content for linguistically and culturally diverse children (e.g., asking a DLL in Alaska to recognize a beach umbrella on a vocabulary test), or both.16 For these reasons, truly bilingual KRA testing for DLLs will likely remain a long-term challenge.
Still, KRA implementation needs major improvements even in English-only testing contexts to collect higher-quality DLL data. The U.S. Department of Education highlighted this reality in a 2016 case study of four states’ initial implementation of KRAs. Teachers reported that they did not fully understand guidelines for assessing DLLs, and a majority were unsure about DLL testing procedures. For example, some states allow certain testing accommodations for DLLs, such as accepting correct answers in non-verbal forms like pointing or gesturing.17 Teachers also voiced a desire for greater support, such as more explicit training on administering KRAs with DLLs and on-site assistance from bilingual staff.18
In addition to serving DLLs more equitably at the stage of assessment, state leaders should consider how to share and disseminate data on DLLs’ KRA results. Decision-making around how to publish KRA results often reflects the different ways states view KRAs and their purpose. “Whether to publicly report [KRA] data and what should be included…is often a complicated discussion, involving many stakeholders,” concluded ECE researcher G.G. Weisenfeld in a 2017 report.19
Some states, like Maryland and Oregon, publish KRA data publicly on state websites.20 Washington state also uses KRA data on its state report card, but officials “recognized the paradox of reporting formative assessment data in a summative presentation, and landed on multiple ways to accurately portray the data…opt[ing] not to use a single, composite ‘readiness’ score.”21 Other states have resisted aggregating or reporting out results. New Jersey, for example, emphasizes that its voluntary KRA is a formative tool to inform instruction and professional development—not to publicize achievement gaps and trends. Michigan similarly states that it finds such summative use “inappropriate.”22
For DLLs in states that do decide to publicly report, state leaders must also decide whether to disaggregate the results by DLL status for public users. In the K-12 context, federal law requires the disaggregation of academic data by English learner (EL) status in grades 3-8 (as well as by race and ethnicity, family income, and disability status). Civil rights groups stress the importance of these mandates for ensuring that disadvantaged students are not ignored or masked in data systems. At the same time, for ELs at lower proficiencies, language barriers will by definition interfere with their academic performance and thereby drive down the subgroup’s results.
Similarly, KRA systems leaders can use subgroups to expose achievement gaps in ECE, but they must also consider whether the data are reliable and valid enough to do so.23 As New America stressed in a 2017 report on K-12 EL data,24 below a certain threshold of English proficiency, it is impossible to make valid claims about academic proficiencies in English. In the case of English-only KRAs, DLLs’ scores in literacy and math development may reflect English proficiencies rather than true knowledge of concepts and skills. If states publicly report KRA data without this context, and rely on “native English speakers as the norm against which all students are compared, the unique characteristics of DLLs are likely to be misinterpreted, or worse, determined to be delays,” according to DLL expert Espinosa. Again, this is why testing young DLLs bilingually is so critical—to capture an accurate, complete picture of their development.
Above: Maryland disaggregates by DLL/EL status in its public reporting of KRA data. The gaps between EL and English proficient kindergarteners are significantly greater in literacy and math than in physical and social domains (which are less language-dependent), perhaps indicating the extent to which language interferes with DLL data’s validity. As the report notes, “Because the KRA is not given in the student’s home language, the knowledge and skills of ELs may not be fully captured.”
Citations
- Lorrie Shepard, Sharon Lynn Kagan, and Emily Wurtz, Principles and Recommendations for Early Childhood Assessments (Washington, DC: National Education Goals Panel, 1998), source.
- “Kindergarten Entry Assessment – KEA,” The BUILD Initiative, accessed May 24, 2018, source. The term “kindergarten entry assessment” (KEA) is often used interchangeably with KRAs. A KRA/KEA is typically administered within a window during the first few months of kindergarten.
- Linda M. Espinosa and Eugene García, Developmental Assessment of Young Dual Language Learners with a Focus on Kindergarten Entry Assessment: Implications for State Policies (Chapel Hill, NC: Center for Early Care and Education Research – Dual Language Learners, Frank Porter Graham Child Development Institute – UNC, Chapel Hill, 2012), 4, source. A 2016 report from the Institute of Education Sciences found that 24 percent of schools in the research study had used KRAs “to support a recommendation that a child delay entry for an additional year.” For more, see Katherine A. Shields, Kyle DeMeo Cook, Sara Greller, How kindergarten entry assessments are used in public schools and how they correlate with spring assessments (Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Northeast & Islands, 2016), source.
- “Pre-K/K Assessment,” The State of the States, The Center on Standards and Assessment Implementation, accessed May 24, 2018, www.csai-online.org/sos?t=early_childhood&m=ki; G.G. Weisenfeld, Assessment Tools Used in Kindergarten Entry Assessments (New Brunswick, NJ: The Center on Enhancing Early Learning Outcomes, 2017), source.
- Race to the Top–Early Learning Challenge (Washington, DC: The Early Learning Challenge Technical Assistance Program, U.S. Department of Education and the U.S. Department of Health and Human Services, 2013), source.
- BUILD Initiative, “Kindergarten Entry Assessment – KEA.”
- Weisenfeld, Assessment Tools Used.
- Center on Standards and Assessment Implementation, “Pre-K/K Assessment.”
- Ovetta Wiggins, “Evaluating Md. kindergartners has become a one-on-one mission,” The Washington Post, October 31, 2014, source; Christina A. Samuels, “Kindergarten Assessments Begin to Shape Instruction,” Education Week, October 10, 2017, source. States have made attempts to respond to teacher concerns. Maryland, for example, reduced the length of the test in response to feedback from its 2014 Version 1.0 administration of its KRA. For more, see The 2017-2018 Kindergarten Readiness Assessment Technical Report (Baltimore, MD: Maryland State Department of Education, 2018), 21, source.
- Shepard, Kagan, and Wurtz, Principles and Recommendations for Early Childhood Assessments.
- For instance, though no longer state policy, Florida used KRA data to place state pre-K programs on probationary status and required them to submit improvement plans. Such accountability measures are inappropriate without contextualizing data with factors of demographic risk or adequate resourcing for ECE. For more on these issues, see Laura Bornfreund and Anna Sillers, “Don’t Use Kindergarten Readiness Assessments for Accountability,” New America, April 3, 2017, source; Elliot Regenstein, Maia Connors, Rio Romero-Jurado, Joyce Weiner, Uses and Misuses of Kindergarten Readiness Assessment Results (Chicago, IL: The Ounce Policy Conversations, The Ounce of Prevention Fund, 2017), source.
- Barbara T. Conboy, “Neuroscience Research: How Experience with One or More Languages Affects the Developing Brain,” in California’s Best Practices for Young Dual Language Learners: Research Overview Papers (Sacramento, CA: California Department of Education, State Advisory Council on Early Learning and Care, 2013), 19, source.
- Linda Espinosa, interview with author, March 12, 2018; Linda Espinosa, Kelly Perez, and Marlene Zepeda, “Child and Program Assessment Considerations for Dual Language Learners in QRIS” (morning breakout session, 2014 QRIS National Meeting, Denver, CO, July 25, 2014), source.
- For more on rigorous approaches to DLL assessment design and implementation, see: Sandra Barrueco and Michael L. López, Assessing Dual Language Learners: Critically Examining Our Measures, source.
- Park, O’Toole, and Katsiaficas, Dual Language Learners: A National Demographic and Policy Profile, 5.
- Espinosa and García, Developmental Assessment of Young Dual Language Learners, 8.
- Park, O’Toole, and Katsiaficas, Dual Language Learners: A National Demographic and Policy Profile, 5.
- Shari Golan, Michelle Woodbridge, Betsy Davies-Mercier, and Carol Pistorino, Case Studies of the Early Implementation of Kindergarten Entry Assessments (Washington, DC: U.S. Department of Education, Office of Planning, Evaluation and Policy Development, 2016), 66, source.
- G.G. Weisenfeld, Implementing a Kindergarten Entry Assessment (KEA) System (New Brunswick, NJ: The Center on Enhancing Early Learning Outcomes, 2017), 8, source.
- “2017-18 Kindergarten Readiness Assessment Report,” Division of Early Childhood Development, Maryland State Department of Education, accessed May 24, 2018, source; “Kindergarten Assessment,” Oregon Department of Education, accessed May 24, 2018, source.
- Weisenfeld, Implementing a Kindergarten Entry Assessment, 4.
- Birth to Grade 3 Indicator Framework: Opportunities to Integrate Early Childhood in ESSA, (Washington, DC: Center on Enhancing Early Learning Outcomes [CEELO] and the Council of Chief State School Officers [CCSSO], 2017), 29, source; Fall 2016 Kindergarten Entry Assessment (KEA) – Everything You Need to Know (Lansing, MI: Michigan Department of Education, 2016), 1, source.
- See CEELO and CCSSO, Birth to Grade 3 Indicator Framework, 29, which notes that KRA “measures may not be sufficiently reliable for…disaggregating by student subgroup.”
- Janie Tankard Carnock, Seeing Clearly: Five Lenses to Bring English Learner Data into Focus (Washington, DC: New America, 2017), source.