
Edtech surveillance tools raise concerns over student privacy and effectiveness


Education Daily Wire | Oct 14, 2025

Rebecca Koenig, Interim Senior Editorial Director | EdSurge Research

Last year, journalism students at Lawrence High School in Kansas persuaded their school district to exempt them from its digital monitoring service. The district had spent over $162,000 on a contract with Gaggle, aiming to improve student mental health and crisis management. The decision followed discussions in which students argued that the surveillance conflicted with their First Amendment rights.

Gaggle is one of several companies providing surveillance technology to schools, alongside GoGuardian and Bark. These services use artificial intelligence to scan student messages and search histories for signs of bullying or self-harm and to block access to unapproved websites. Schools often turn to these tools out of concern about teen mental health and because counseling staff are in short supply.

However, some students report that these technologies interfere with learning by blocking educational resources. There are also privacy concerns; the Electronic Frontier Foundation gave Gaggle an “F” rating for student privacy, noting the AI’s limitations in understanding context when flagging content.

Jim Siegl of the Future of Privacy Forum points out that worries about digital surveillance in schools have existed for years. He compares this trend to other safety measures schools have adopted, but notes ongoing questions about their effectiveness and potential trade-offs.

Research published in the Journal of Medical Internet Research found about a dozen companies specializing in school surveillance, most operating around the clock. Devices provided by schools tend to be monitored more heavily than personal devices, raising equity concerns: students from low-income families are more likely to depend on school-issued devices and therefore have less privacy.

William Owen from the Surveillance Technology Oversight Project argues that reliance on biased algorithms has normalized constant monitoring of students. He says these systems disproportionately flag students with disabilities, neurodivergent individuals, and LGBTQ youth.

The same study indicates that while most companies use AI for monitoring, fewer than half employ human reviewers as part of their process. Owen adds that marketing by surveillance firms can make it difficult for parents and administrators to fully understand the possible harms.

Some companies previously signed edtech’s voluntary “privacy pledge,” which was retired earlier this year as privacy concerns shifted toward artificial intelligence. John Verdi of the Future of Privacy Forum told EdSurge that the landscape has changed significantly due to advances in AI.

Companies argue their services have saved lives, citing internal data on alerts about self-harm or violence among students. However, researchers such as Jessica Paige at RAND have questioned whether there is sufficient evidence that these tools effectively identify suicidal students, and note that the tools create privacy risks and are difficult for parents to opt out of.

A 2022 Senate investigation highlighted similar issues—finding little effort among major providers to address bias or adequately inform parents and schools about data misuse risks. In response, companies provided anecdotes supporting the value of their products.

In 2023, after criticism regarding discrimination against LGBTQ youth, Gaggle stopped flagging certain terms such as “gay” and “lesbian,” citing greater acceptance of LGBTQ identities as a reason for the change.

Students interviewed by EdSurge described how AI-driven filters sometimes blocked academic resources like JSTOR or support sites such as the Trevor Project—used by LGBTQ youth—leading to confusion over what content would be restricted.

Critics worry these systems discourage students from expressing themselves freely and, if districts do not review contracts carefully, could increase interactions between students and law enforcement, a concern Siegl raised based on his experience working with public schools near Washington, D.C.

Siegl advises districts to develop clear policies for handling student data responsibly while considering bias issues and reviewing contracts thoroughly. He suggests parents and students should ask districts what they hope to achieve with such tools and how those goals will be supported safely.

Some advocates call for avoiding or banning these technologies altogether because of their potential harms, especially the increased risk that flagged reports will bring marginalized students into contact with police. For example, New York prohibits facial recognition technology in schools but allows other biometric methods, such as fingerprint scanners for lunch lines.

Owen believes bias is inherent in many current algorithms: “There's no correcting the algorithm when these technologies are so biased to begin with, and students [and] educators need to understand the degree of that bias and that danger that is posed.”
