Public Schools, Private Eyes: How EdTech Monitoring Is Reshaping Public Education

A Lawsuit Seeks Transparency in Texas Schools
Aug. 25, 2025

As K-12 public schools in the United States adopt a growing range of educational technologies, many are also implementing digital surveillance tools that monitor students, often without their full knowledge or consent. A 2023 survey on education technology (edtech) surveillance suggests that nearly 82 percent of K-12 students report being subject to some form of monitoring in the classroom, and 38 percent of teachers report that monitoring continues outside school hours. Despite a growing body of research flagging the dangers of edtech surveillance, there is a persistent lack of transparency around the efficacy, use, and misuse of this technology.

Numerous schools have contracts with surveillance-focused edtech companies, the largest of which include Gaggle, GoGuardian, iboss, Bark, and Securly. These surveillance systems use artificial intelligence (AI) to analyze students’ web activity, correspondence, and keyword searches for signals about their state of mind, whether they are being bullied, or even their potential to commit an act of violence. This monitoring occurs through three primary pathways: school-issued devices, school-managed internet connections, or school-managed accounts (e.g., learning management systems or email accounts). Research indicates, however, that the edtech companies spearheading classroom surveillance often extend monitoring well beyond classroom hours, make unsubstantiated marketing claims, and engage in opaque data collection and sharing practices.
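To make the mechanism concrete, here is a minimal sketch of the keyword-based scanning these systems build on (commercial products layer proprietary machine-learning classifiers on top). Every name, phrase, and category below is an invented illustration, not any vendor’s actual code.

```python
from dataclasses import dataclass

# Illustrative watchlist only; real vendors use large proprietary lists
# combined with machine-learning classifiers, none of which are public.
WATCHLIST = {
    "self_harm": ["hurt myself", "end it all"],
    "violence": ["bring a gun", "shoot up"],
}

@dataclass
class Flag:
    student_id: str
    category: str
    phrase: str
    context: str  # snippet forwarded to a reviewer (or, sometimes, police)

def scan_activity(student_id: str, text: str) -> list[Flag]:
    """Scan one piece of student activity captured via a school-issued
    device, school-managed network, or school-managed account."""
    lowered = text.lower()
    return [
        Flag(student_id, category, phrase, text[:80])
        for category, phrases in WATCHLIST.items()
        for phrase in phrases
        if phrase in lowered
    ]

# The scanner matches strings, not intent: an essay about gun policy
# still trips a "violence" flag.
print(scan_activity("s123", "My essay argues we should never bring a gun to school."))
```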

In March, the Knight First Amendment Institute at Columbia University filed a public records lawsuit to uncover how AI-powered monitoring technologies are being used on school-issued devices in North Texas’s Grapevine-Colleyville Independent School District. The lawsuit was filed after the school district denied a Texas Public Information Act request from the Knight Institute to release records on its use of edtech surveillance, claiming that releasing the records could pose risks to network security. Notably, it remains unclear how the Knight Institute’s request would jeopardize the security of the district’s network. The Knight Institute nonetheless narrowed its request to information such as flagged content types and emails referencing the district’s edtech surveillance vendors. Despite the narrowed scope and the legally tenuous nature of the network security claim, the district continues to deny access. The Knight Institute’s lawsuit seeks to compel the release of these records to promote transparency and enable a public evaluation of whether the district’s digital monitoring practices are appropriate and whether they have disparate impacts on students. Without access to this information, the scope and nature of the district’s monitoring practices remain opaque, and those most directly affected by this technology (students, teachers, and parents) remain inadequately informed about how their data is being used, despite their rightful expectation of transparency, clarity, and accountability.

How Are Current State and Federal Policies Driving School Surveillance?

Texas has more school districts with contracts for edtech surveillance platforms than any other state, with more than 200 districts involved. Over the last decade, per-student spending on edtech surveillance technology in Texas rose 66 percent, compared to only a 28 percent increase in spending on social services. The Knight Institute v. Grapevine-Colleyville Independent School District lawsuit highlights concerns about the lack of transparency and accountability in the use of these technologies.

In October 2022, the Biden administration released its Blueprint for an AI Bill of Rights, which warned that “continuous surveillance and monitoring should not be used in education…where the use of such surveillance technologies is likely to limit rights, opportunities, or access.” Building on this foundation, President Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence in 2023. President Trump has since rescinded that order and replaced it with a new executive order on AI that takes a deregulatory, private-sector-driven approach, with significantly fewer references to bias, accountability, and data privacy.

As AI expands in education, bolstered by nearly 70 companies signing the Trump administration’s Investing in AI Education pledge, the offices that once provided guidance, such as the now-defunct Office of Educational Technology, are gone. Given the White House’s push for AI implementation that deemphasizes civil rights auditing, edtech surveillance tools could enter education more rapidly than in previous years, with fewer brakes and guardrails, particularly for those who are already most vulnerable. Legislation increasing data tracking of undocumented students, warnings about the use of surveillance technology to enforce anti-trans laws, and schools monitoring reproductive health-related searches in states where abortion is criminalized are all urgent signs that pressure on states and school districts to expand surveillance and data collection on already marginalized communities is well underway. For these students, participating fully in the classroom can carry risks to their safety. A 2025 study by the Center for Democracy and Technology (CDT) found that a small but concerning number of students flagged by school monitoring software have been contacted directly by immigration enforcement. Similarly, increasingly stringent mandates requiring educators to report changes in a student’s gender identity risk being enabled by invasive monitoring of trans students.

The Children’s Internet Protection Act of 2000 (CIPA) stipulates that schools “filter obscene content” by “monitoring the online activities of minors.” To protect minors online, CIPA requires schools and libraries that receive federal funding for internet access to use filters and monitoring tools to block pornography and other content deemed harmful or obscene. In 2000, youth internet access in schools was rising, making teachers among those best situated to prevent access to harmful content through simple filtering software. Today, critics argue that CIPA is frequently used to justify constant student surveillance that goes well beyond a reasonable interpretation of the law.

This surveillance has a greater impact on students in higher-poverty districts. The E-Rate program, which provides affordable broadband access and other services for schools and libraries, is more commonly used by schools with a higher percentage of students eligible for the National School Lunch Program. E-Rate recipients must comply with CIPA, often through web filtering, to keep explicit content from children. Because students in low-income and rural communities depend more heavily on school-issued devices, research shows that students in those districts are subject to more pervasive monitoring than students in wealthier districts who have access to personal devices. In the past decade, the Federal Communications Commission, CIPA’s administering body, has done little to clarify the boundaries of CIPA’s monitoring requirements, issuing no recent amendments and announcing no plans to update the law.

Regulations like the Family Educational Rights and Privacy Act (FERPA) and the Children’s Online Privacy Protection Rule (COPPA) are designed to safeguard student privacy and safety, but they have also been used to justify restricting access to certain educational platforms and to enable extractive data use. For example, 2011 amendments to FERPA broadened the scope of who could be granted access to student records by redefining “authorized representative” to include nongovernmental actors who may represent schools. This change allows edtech surveillance companies to operate as educational partners, permitting schools to disclose student data to them in the same capacity as school officials. This loophole is even more problematic because there are no financial penalties for edtech companies that violate FERPA; enforcement focuses on schools, through the possible loss of federal funding, rather than holding the companies accountable.


Why Does EdTech Surveillance Matter for Students?

The edtech surveillance industry, now generating record-breaking profits, is dominated by figures with limited ties to pedagogy or the realities of student learning; as a UCLA report highlights, the founders of popular surveillance companies are professional entrepreneurs, cybersecurity officials, and law enforcement professionals. This disconnect between who builds these technologies and what students actually need in the classroom helps explain why edtech surveillance often fails to support, and can even undermine, meaningful learning. The ability to learn and explore without a constant fear of monitoring, criminalization, and retaliation is critical for students in the public education system. Investing in edtech surveillance platforms of unproven effectiveness diverts money and resources from what demonstrably promotes school safety and meaningful learning, such as evidence-backed mental health support.

While surveillance raises concerns for all students, its harms are not distributed evenly. There is clear evidence that edtech surveillance often reinforces existing biases, leading to disproportionate discipline and scrutiny for already marginalized groups. Because edtech surveillance systems operate by flagging behaviors deemed “anomalous,” ACLU research on edtech surveillance suggests that “disabled students are more likely to be flagged as potentially suspicious…simply because of the ways disabled people already exist.” These platforms are built to flag behavior that deviates from a perceived norm, so natural differences in how students with disabilities communicate or interact with technology can be misinterpreted as suspicious or threatening. Students with disabilities who use assistive technologies or interact differently in chat platforms and digital tools are more likely to be disciplined or flagged for cheating, even when they have school accommodations.
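A toy example shows how quickly “anomalous” collapses into “merely different.” The sketch below, using invented numbers, flags any session whose typing rhythm deviates from the majority baseline, precisely the kind of statistical rule that would single out a student using assistive technology:

```python
import statistics

# Invented feature: average seconds between keystrokes per session.
# Students using assistive technologies often have atypical timing.
baseline_sessions = [0.18, 0.21, 0.19, 0.22, 0.20, 0.17, 0.23]  # majority of students
assistive_session = 0.65  # e.g., a student using a switch-access device

mean = statistics.mean(baseline_sessions)
stdev = statistics.stdev(baseline_sessions)

def is_anomalous(value: float, threshold: float = 3.0) -> bool:
    """Flag any session more than `threshold` standard deviations from
    the majority baseline; 'anomalous' here just means 'unlike most'."""
    return abs(value - mean) / stdev > threshold

print(is_anomalous(assistive_session))  # True: difference, not misconduct, is flagged
```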

Monitoring also disproportionately flags BIPOC students for inappropriate behavior, further perpetuating and expanding the school-to-prison pipeline through technology. For example, a monitoring system that flags students’ online messages for potential cyberbullying could rely on natural language processing algorithms, many of which struggle to accurately interpret African American Vernacular English. Thus, it’s possible that some content from Black students is disproportionately flagged, making them more likely to be referred for disciplinary action. In other cases, flagged students are not first approached by school administrators but are instead interrogated by police, even for relatively benign infractions.
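A stripped-down sketch, with invented training examples, shows how this failure can arise: when annotators mislabel innocuous dialect as “toxic,” even a simple word-frequency model learns the dialect markers themselves as toxicity signals.

```python
from collections import Counter

# Invented toy data reproducing a documented annotation bias: labelers
# rate innocuous AAVE text as "toxic" more often, so the model learns
# dialect markers as toxicity signals.
training = [
    ("you are terrible and i hate you", 1),
    ("i will hurt you", 1),
    ("wassup bro we chillin", 1),   # innocuous, but mislabeled by a biased annotator
    ("have a great day everyone", 0),
    ("see you at practice later", 0),
]

toxic_words, clean_words = Counter(), Counter()
for text, label in training:
    (toxic_words if label else clean_words).update(text.split())

def toxicity_score(text: str) -> float:
    """Crude word-frequency ratio: words seen only in 'toxic' training
    examples inherit that label regardless of their actual meaning."""
    toxic = sum(toxic_words[w] for w in text.split())
    clean = sum(clean_words[w] for w in text.split())
    return toxic / max(toxic + clean, 1)

print(toxicity_score("have a great day"))             # 0.0
print(toxicity_score("wassup bro have a great day"))  # ~0.33: dialect inflates the score
```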

Queer and trans students also face misuse of surveillance technologies inside and outside the classroom. Monitoring algorithms can flag students simply for searching terms like “gay” or “lesbian.” While these systems aim to prevent cyberbullying, they can end up creating an environment where no space, online or offline, feels private or safe. This kind of surveillance discourages free exploration and learning about sexuality and gender, and it can wrongly flag these students as disciplinary concerns simply because the content they engage with references sex or sexual identity. Similarly, sharing LGBTQ+ students’ data with companies can infringe on their privacy in unwelcome ways. In 2023, CDT online surveys of nationally representative samples of 6th to 12th grade students found that nearly 30 percent of students reported being outed as a result of their school’s digital activity monitoring.

Is It Possible to Have Student Safety Without Privacy and Transparency?

A central justification for promoting surveillance technologies in educational settings is their purported ability to enhance student safety. While safeguards are needed to protect K–12 students at school and in the digital sphere, evidence suggests that current edtech surveillance practices miss the mark. These tools label students as potential threats, play fast and loose with student data, and create punitive learning environments; rather than fostering security, they can undermine it.

With even fewer restrictions than in years past, edtech surveillance companies may be poised to accelerate the transformation of schools into engines of data extraction and control, deepening a trend that is already well underway. The lack of transparency surrounding Grapevine-Colleyville’s use of AI-powered monitoring technologies reflects a national pattern of limited public oversight of school surveillance practices. The ongoing lawsuit seeks public records that would clarify how these technologies are being used, underscoring the importance of transparency in evaluating their impact. Without access to such information, it remains difficult for researchers, policymakers, and communities to make informed judgments about the legitimacy and impact of these technologies. As school districts deepen their reliance on these tools, they risk normalizing invasive surveillance without adequate information. If this trend goes unchallenged, students will learn under conditions of surveillance, sorting, and suspicion rather than trust, care, and agency.