
Elizabeth “Betsy” Corcoran, Co-founder and CEO | EdSurge Research
For Christianna Thomas, a senior at Heights High School in Texas, artificial intelligence has become both a tool and an obstacle in her education. Thomas, who is enrolled in the International Baccalaureate program, says her school runs student work through AI-powered plagiarism detectors. “We use AI to check for other types of AI,” she says.
AI also shapes what information students can reach. While researching the education system of Communist Cuba during the Cold War for a history project, Thomas was blocked by her school’s web filter, both on her school-issued computer and on her personal laptop while on campus. Schools commonly deploy AI-powered web filters to keep students away from unsafe material, but students say the systems sometimes cut them off from legitimate resources: sites run by organizations like The Trevor Project and databases such as JSTOR and the Internet Archive can be inadvertently blocked because of keywords in their content or features they offer.
As a result of these restrictions, Thomas had to change the focus of her assignment. She is not alone in her concerns about how AI shapes student learning. While educators’ worries about AI have received significant attention, far less attention has gone to students’ own perspectives on issues such as surveillance and accusations of cheating.
State policies that provide guidance on school use of AI have largely overlooked civil rights concerns related to surveillance, according to some advocates. With increased monitoring through AI tools, there are worries about more frequent interactions between students and law enforcement. Some students fear that enhanced surveillance could affect immigrant communities or those expressing political opinions.
Schools sometimes rely on AI systems to monitor students’ online activity and assess risk, flagging certain behaviors for adult intervention. Studies indicate that nearly all educational technology companies monitor students both inside and outside of school. How that collected data is used, however, is often unclear.
Earlier this year, the Knight First Amendment Institute at Columbia University sued Grapevine-Colleyville Independent School District in Texas after the district declined to release information about how it uses student surveillance data collected from school-issued devices. In another incident, a 13-year-old Tennessee student was arrested and strip-searched after an automated scan by Gaggle, the monitoring service her school used, misinterpreted a joke she made in a private chat linked to her school email account.
Student journalists in Kansas have also filed lawsuits claiming that such monitoring violates constitutional rights.
Some students are pushing back against these practices. Thomas works with Students Engaged in Advancing Texas (SEAT), a nonprofit that brings students into policymaking by training them in public speaking and organizing around issues like book bans and immigration enforcement in schools. She helps other students organize around web filtering, a practice she finds troubling because it is unclear how much human oversight is involved.
When Thomas asked IT staff at a neighboring district for their list of banned websites, she was told that producing it was “physically impossible” because the list would be too long, leaving her no way to verify whether humans review these blocking decisions.
SEAT has lobbied for Texas House Bill 1773, which would create nonvoting student trustee positions on school boards. The group has also challenged legislation restricting social media access, arguing that it limits free speech, and is currently advocating for a Student Bill of Rights aimed at protecting freedom of expression and student agency.
Other youth-led organizations are working toward similar goals across the country.
Deeksha Vaidyanathan led the California chapter of Encode, a student-run advocacy group founded by Sneha Revanur that addresses ethical issues around AI in schools. After observing that teachers at Dublin High School took widely varying approaches to which AI uses were permissible and how much to rely on surveillance platforms like Bark, Gaggle and GoGuardian, Vaidyanathan spent six months working with the school’s chief technology officer to develop consistent guidelines for classroom use of AI.
The resulting policy was, according to Vaidyanathan, one of the first frameworks regulating classroom use of AI within 100 miles, and it has since inspired similar efforts elsewhere, including in Indiana, Philadelphia and Texas.
Vaidyanathan points to deepfakes as one area where existing policies fall short, especially apps that need only a photograph to generate fake explicit images, which are frequently used to target young girls and can leave victims traumatized. Surveys cited by Encode suggest the practice is widespread among students nationally.
A recent review by the Center for Democracy & Technology found that most states offer little guidance on addressing deepfakes or other emerging threats posed by generative AI in schools (https://cdt.org/insights/state-guidance-on-student-monitoring-technology-remains-limited-and-focused-on-compliance/). Even effective guidelines from states like California and Oregon remain unofficial recommendations rather than enforceable rules.
Suchir Paruchuri, who leads Encode’s Texas chapter, emphasizes limiting access to student data and bringing affected voices into policy decisions, particularly around legislation on non-consensual sexual deepfakes now under consideration in the state.
Paruchuri says the goal is “AI safety.” To him, that means making sure AI is used in a way that protects people’s rights, respects their dignity and avoids unintended harm, especially to vulnerable groups.
Students involved with these organizations continue advocating for ethical frameworks ensuring that artificial intelligence supports rather than controls their educational experience.