
As schools continue to explore the use of artificial intelligence in classrooms, questions remain about how best to implement education-focused AI tools. A recent report from Common Sense Media examines whether these tools, designed specifically for educational settings, can effectively support teachers and students.
Common Sense Media, a nonprofit that assists parents with technology and media issues, released its risk assessment of “AI Teacher Assistants” earlier this month. Unlike general-purpose chatbots such as ChatGPT, these assistants, including products like MagicSchool and Khanmigo, are tailored for classroom use and aim to help teachers save time while supporting student learning.
“As we see adoption of these tools continue to skyrocket, districts are really asking questions,” says Robbie Torney, senior director of AI programs at Common Sense Media. “It’s looking at, ‘Are they safe? Are they trustworthy? Do they use data responsibly?’ We’re trying to be comprehensive into how they fit into school as a whole.”
The report emphasized the pedagogical applications of AI tools—such as generating discussion questions based on curriculum readings—over administrative uses like building syllabi.
Torney advises that institutions should establish clear guidelines early when adopting these technologies. “My main takeaway is that this is not a go-it-alone technology,” he says. “If you’re a school leader and you as a staff haven’t had a conversation on how to use these things and what they’re good at and not good at, that’s where you get into these potential dangers.”
Paul Shovlin, an AI faculty fellow at the Center for Teaching and Learning at Ohio University, observes that K-12 schools appear to have embraced new AI platforms more quickly than higher education institutions. He notes concerns about their rapid adoption: “I think they are becoming more prevalent,” he says. “This is just a feeling, but I feel K-12 has picked up on platforms sooner than higher ed; and there are some concerns related to them.”
One significant concern highlighted in the Common Sense Media report is bias within AI systems. The study found evidence of what it calls “invisible influence”: when researchers tested prompts using names coded as white or Black, responses to white-coded female names were more supportive, while Black-coded names received shorter, less helpful answers.
“I’m always surprised how difficult it is to see bias; sometimes it’s obvious, sometimes it’s invisible and hard to detect,” Torney says. “If you are just generating outputs on a one-off basis, you may not be able to see the differences in outputs based on one student versus another. It could be truly invisible and you may only see them at the aggregate level.”
Shovlin points out that commercial interests can introduce further bias into these platforms: “There are affordances and limitations with any technology and I don’t want to completely discount these platforms, but I’m highly skeptical because they are commercial products and there is that imperative built into how they create these things and market them,” he says. “This industry that has created these tools also has embedded bias as a result of who is doing the coding originally. If it’s dominated by one identity, it will be baked into the algorithms.”
Emma Braaten, director of digital learning at the Friday Institute for Educational Innovation at North Carolina State University, urges educators to scrutinize company terms regarding data privacy rather than relying solely on previous experiences with trusted brands.
“There are educators who trust this program or platform because we’ve used it before,” Braaten says. She encourages critical evaluation: “How do we review and revisit that [tool] as they incorporate AI? Do we give a blanket of trust or start to review and think critically about those?”
Braaten also emphasizes keeping humans central in educational processes involving AI, a concept she calls having a “human in the loop.” She stresses maintaining both teacher and student involvement alongside technological integration: “That piece both for students and educators is a huge focus to think about; making sure all these groups stay in the loop and not just give it all away to the tool,” she says. “When we have a teaching assistant in the classroom space, it’s looking at … do we have guidance to make lessons to include both technology and the human connection in that space?”
Experts interviewed by EdSurge agree that if used properly, AI teaching assistants offer advantages for teachers despite possible drawbacks. The report recommends integrating such tools within existing lesson plans instead of letting them generate standalone content.
“The [AI] model is not as good as the curriculum you’re teaching from,” Torney says. “If you’re teaching from an adopted curriculum, the output will be so much better than getting a randomly generated lesson about fractions.”
As schools increasingly adopt these technologies across various products, experts advise ongoing critical assessment during implementation.
“You can’t just block AI with one sweeping wave of your hand; at this point it’s embedded into so many things,” Braaten says. “There’s looking at that integration into the products themselves, but also how you’re part of that system and how you incorporate it into your application [are what] we have to be critical thinkers about.”