Re-starting and Strengthening Accountability for English Learners:

Part One of a Two-part Series
Blog Post
Jan. 21, 2022

Just before the 2021 holidays, the Office of Elementary and Secondary Education (OESE) released a draft FAQ document for public comment, intended to help state and local education agencies (SEAs and LEAs, respectively) think through how to reinstate their school accountability systems. The guidance comes after a two-year hiatus of the accountability requirements set forth in the Every Student Succeeds Act (ESSA), a pause prompted by the interruptions of the pandemic. As can be expected from an FAQ, the guidance raises more questions than it answers about what states will do on accountability for the 2021-22 school year. States can choose to implement their full suite of measures as they stood in the fall of 2019, submit one-year changes to their accountability systems through the COVID-19 state plan addendum, and/or make longer-term changes to their state plan using the regular amendment process outlined in 2019.

Accountability systems implemented as a result of ESSA led to an inconsistent and patchy policy landscape for the five million English learners (ELs) enrolled in K-12 schools. The restart therefore gives SEAs an opportunity to fine-tune their approach to EL education accountability. Part one of this two-part series briefly recaps ESSA’s accountability requirements and highlights key considerations for states and districts as they reinstate and revise accountability systems to ensure ELs’ opportunity to learn is fully represented.

Pre-pandemic Education Accountability 101

Accountability, as defined in ESSA, hinges on the ability to compare school performance statewide using a system of annual meaningful differentiation (AMD). These systems are composed of a suite of indicators, some of which are clearly delineated in law while others are left to the discretion of each SEA. Some indicators apply to all schools, while others apply only to elementary and middle schools or only to high schools. Indicators fall into two categories: academic indicators and measures of school quality or student success (SQSS) (Table 1).

On the whole, academic indicators are required to carry much greater weight, in the aggregate, than SQSS indicators in the methodology used to compare schools, though the weight assigned to each indicator is left to the discretion of each SEA. What is more, this methodology is required to include all indicators for all students and for each subgroup of students, such as English learners. The purpose of AMD is to identify schools in need of additional support to improve student outcomes. ESSA created three levels of school identification, each with its own level of support (Table 2). Once schools are identified, ESSA also ensures they are provided with the financial resources needed to improve the student outcomes that led to their identification in the first place.
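
To make the weighting requirement more concrete, here is a minimal sketch, in Python, of how an AMD methodology might roll indicators into a single school score for all students and for each subgroup. The indicator names, weights, and values are hypothetical illustrations, not drawn from any state’s approved plan.

```python
# Hypothetical weights: academic indicators total 0.75, SQSS totals 0.25,
# reflecting ESSA's requirement that academic indicators carry much greater
# weight in the aggregate. These are illustrative numbers only.
WEIGHTS = {
    "academic_achievement": 0.30,
    "academic_growth": 0.25,
    "english_language_proficiency": 0.20,
    "sqss_chronic_absenteeism": 0.25,
}

def composite_score(indicator_values):
    """Weighted average of normalized (0-100) indicator values."""
    total_weight = sum(WEIGHTS[name] for name in indicator_values)
    weighted_sum = sum(WEIGHTS[name] * value for name, value in indicator_values.items())
    return weighted_sum / total_weight

# The same calculation is required for all students and for each subgroup,
# such as English learners, so subgroup results factor into identification.
school_results = {
    "all_students": {
        "academic_achievement": 62, "academic_growth": 55,
        "english_language_proficiency": 48, "sqss_chronic_absenteeism": 70,
    },
    "english_learners": {
        "academic_achievement": 41, "academic_growth": 50,
        "english_language_proficiency": 48, "sqss_chronic_absenteeism": 58,
    },
}

for group, values in school_results.items():
    print(group, round(composite_score(values), 1))
```

In this toy example, the school’s all-student score masks a substantially lower score for its English learners, which is exactly the kind of gap AMD is meant to surface.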

Minding COVID-19 Data Gaps by Incorporating Opportunity to Learn Indicators

Prior to COVID-19, academic achievement indicators in state ESSA plans were heavily focused on overall performance and year-to-year growth in areas such as reading/language arts and math, as measured by statewide assessments. States were also keenly focused on closing gaps between high- and low-performing groups of students, such as English learners and their English-proficient peers. However, given the wide variability in test administration throughout the pandemic, the data needed to calculate these measures either do not exist or come with significant questions about their validity and reliability. This means that most states will likely need to make at least a few alterations to their indicators and/or the methodologies used to compare school performance as they kick-start accountability this fall.

Luckily, the U.S. Department of Education (ED) appears poised to offer states considerable flexibility in how they reinstate accountability. Possible changes include shifting the timelines for measuring interim progress, modifying the exit criteria for schools identified for additional support in 2022, and adding or removing indicators of academic achievement. Essentially, beyond barring locally derived indicators (e.g., district- or classroom-level assessments) that cannot be used to compare school performance statewide, ED has left the door wide open.

For academic indicators, such as student performance on statewide assessments and graduation rates, states can opt to use one year’s worth of data (i.e., 2021-22), or average data from the current year and earlier school years (i.e., 2018-19 and 2021-22). In terms of measuring student growth, ED recommends that SEAs that rely on prior years’ data in their calculations reexamine the quality of those data to determine whether the indicator will need to be modified or replaced. Things to consider in determining data quality (a rough sketch of these checks in code follows the list) include:

  • Are the data complete, as determined by the participation rate?
  • Are the data sufficiently comparable across years, considering student demographics and test forms and administration methods?
  • Are data quality issues spread unevenly across the state by student subgroups (such as ELs), grade levels, or geographic regions?
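
For readers who work with the underlying data files, a rough sketch of what these screens might look like in practice is below. The column names and the 10-point participation-drop threshold are assumptions made for illustration, not ED requirements.

```python
# A rough sketch of the data-quality screens above, assuming a hypothetical
# table with one row per student per year. Columns and thresholds are
# illustrative assumptions only.
import pandas as pd

def participation_rate(df, year):
    """Share of enrolled students with a valid score in a given year."""
    year_df = df[df["school_year"] == year]
    return year_df["has_valid_score"].mean()

def comparability_flags(df, base_year, current_year):
    """Flag subgroups whose participation dropped sharply between two years,
    a sign that results may not be comparable across years."""
    rates = (
        df[df["school_year"].isin([base_year, current_year])]
        .groupby(["subgroup", "school_year"])["has_valid_score"]
        .mean()
        .unstack("school_year")
    )
    # A 10-percentage-point drop is an arbitrary illustrative cutoff.
    return (rates[base_year] - rates[current_year]) > 0.10

# Example usage with toy data:
df = pd.DataFrame({
    "school_year": ["2018-19"] * 4 + ["2021-22"] * 4,
    "subgroup": ["EL", "EL", "non-EL", "non-EL"] * 2,
    "has_valid_score": [1, 1, 1, 1, 1, 0, 1, 1],
})
print(participation_rate(df, "2021-22"))
print(comparability_flags(df, "2018-19", "2021-22"))
```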

To make up for some of these shortcomings, states can also consider using a cohort-based measure rather than individual student growth, or replace the student growth measure altogether with another valid and reliable statewide indicator, such as an SQSS indicator. For the English language proficiency (ELP) indicator specifically, SEAs have discretion to revise the methodology for calculating progress, which means that results from the 2019-20 school year can be used to determine whether an English learner made progress by the end of the 2021-22 school year.
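
As a concrete illustration of that flexibility, the sketch below compares a student’s 2019-20 ELP level to their 2021-22 level and simply skips the missing 2020-21 data point. The proficiency scale and the progress rule are hypothetical, not any state’s actual criteria.

```python
# A minimal sketch of an ELP progress check under a revised timeline, assuming
# a hypothetical record of proficiency levels by school year.
from typing import Optional

def made_progress(levels_by_year: dict,
                  baseline_year: str = "2019-20",
                  current_year: str = "2021-22") -> Optional[bool]:
    """True if the student's ELP level rose between the baseline and current
    year; None if either year is missing (e.g., a pandemic data gap)."""
    baseline = levels_by_year.get(baseline_year)
    current = levels_by_year.get(current_year)
    if baseline is None or current is None:
        return None
    return current > baseline

# An English learner with no 2020-21 score can still be counted as making
# progress, because the comparison spans 2019-20 to 2021-22.
student_levels = {"2019-20": 3.2, "2021-22": 3.9}
print(made_progress(student_levels))  # True
```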

More importantly, instead of holding onto measures that may not capture the impact COVID-19 has had on schools and students (such as growth), SEAs have the opportunity to incorporate more opportunity-to-learn indicators, which advocates have long argued shift the focus from individual students to systemic barriers. These indicators can be used as the ‘other’ academic indicator required for elementary and middle schools, or as part of the suite of indicators SEAs use to measure SQSS. And as the FAQ notes, SEAs should strongly consider which indicators accurately reflect their context, priorities, and needs as they determine which measures best capture student need.

As such, SEAs should consider incorporating indicators that can better gauge whether ELs have an equitable opportunity to learn (before, during, and after COVID-19) by measuring things like:

  • Access to qualified educators who are certified, effective, and prepared to support ELs’ linguistic and academic needs;
  • EL chronic absenteeism rates;
  • Number of long-term English learners;
  • Time spent in various modes of instruction (e.g., remote/in-person and asynchronous/synchronous time ratios) and its impact on lost instructional time;
  • Participation in and successful completion of coursework;
  • Access to technological infrastructure (e.g., internet access and personal learning devices at home), measured by surveying family need;
  • An indicator that captures the number of English learners who have attained proficiency within their personalized timeline to proficiency;
  • College and career readiness including earning a seal of biliteracy (if offered), ability to skip remedial work, advanced course participation and completion, and post-secondary enrollment rates; and
  • School discipline data such as in-school and out-of-school suspension (including multiple suspensions and length of suspensions), and expulsion rates.

As previously mentioned, states are required to incorporate individual subgroups in the methodology they use to compare schools. Yet prior to COVID, only eight states fully incorporated ELs’ academic performance in their systems of AMD. This means that 44 states adopted accountability systems, and had those systems approved by ED, without fully accounting for English learners. Considering that ELs have been among the students most significantly affected by the pandemic and school closures, it is critical that SEAs take this opportunity to ensure that whatever indicators they choose to move forward with are disaggregated for each group of students, including ELs. This will help ensure that the set of schools identified as in need of improvement accurately reflects student needs.
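
For those working with the data, the short sketch below shows what disaggregating a single indicator by subgroup can look like, using made-up school-level figures.

```python
# A brief sketch of disaggregating one indicator by student group. A school's
# "all students" result can mask much weaker outcomes for ELs, which is why
# each indicator should be reported for every subgroup before schools are
# identified for support. Figures are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "school": ["A", "A", "B", "B"],
    "subgroup": ["all_students", "english_learners"] * 2,
    "chronic_absenteeism_rate": [0.12, 0.31, 0.15, 0.17],
})

# Pivot so each school's all-student and EL results sit side by side.
by_group = results.pivot(index="school", columns="subgroup",
                         values="chronic_absenteeism_rate")
by_group["el_gap"] = by_group["english_learners"] - by_group["all_students"]
print(by_group)
```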

Stay tuned for the second half of this series, which will delve into data collection improvements states can implement to ensure accountability systems reflect the heterogeneity of English learners.

