
Stanford University recently hosted a symposium to explore how generative artificial intelligence (AI) could improve educational experiences and equity for students with disabilities. The event, organized by the Stanford Accelerator for Learning, brought together education researchers, technologists, teachers, and students—including those with disabilities—to collaborate on AI-driven solutions.
The symposium featured a hackathon where participants worked together to develop product prototypes. Ideas generated included using AI for early identification of learning disabilities and co-designing tools directly with students who would use them. These outcomes were summarized in a white paper released by Stanford.
Isabelle Hau, executive director of the Stanford Accelerator for Learning, spoke about the importance of designing technology that addresses the needs of students often overlooked by traditional systems. "There is a long history of people with disabilities having innovated essentially at the margins for specific topics and issues that they're facing, but then those innovations benefiting everyone," Hau said. She pointed to text-to-speech as an example of technology originally developed for people with disabilities but now widely used.
Hau referenced Angela Glover Blackwell’s concept of the “curb-cut effect,” which describes how accommodations designed for one group can benefit many others: "Angela Glover Blackwell invented this term called 'curb-cut effect,' where if you have these roads where you have a curb for people with wheelchairs, then it also benefits people who may have a cart or who may have a stroller."
Rather than feeling overwhelmed by the challenges discussed, Hau described an atmosphere of inspiration and empowerment among participants. "Everyone was asked to participate and contribute, and everyone had great contributions, coming at it from different perspectives or levels of expertise," she said.
One personal story highlighted at the event was that of David Chalk, who struggled with dyslexia throughout his life and only learned to read at age 62. Chalk is now working on an AI tool aimed at addressing some challenges he experienced in school. Hau explained how earlier identification through AI-powered assessment tools could help children like Chalk receive support sooner: "If you're identified with dyslexia much earlier than age 62...you can have then specialized supports and avoid what a lot of kids and families are currently going through."
Stanford has been developing such tools—one example is ROAR (Rapid Online Assessment of Reading), which aims to identify reading difficulties early in children's education. Another tool highlighted in their report is Kai, designed to provide reading support interventions in classrooms.
The symposium also examined how AI might assist educators in developing Individualized Education Programs (IEPs). One hackathon prototype used AI to provide parents with translations and plain-language explanations of IEPs, a process that can be difficult to navigate without assistance. Another tool offered tailored recommendations for teachers managing diverse student needs within a single classroom.
A key difference between new generative AI technologies and previous edtech solutions lies in their ability to analyze larger datasets more effectively and generate tailored recommendations based on individual student contexts—something older adaptive technologies could not do as dynamically.
However, privacy concerns remain central when deploying AI tools that analyze sensitive student data. Hau acknowledged these risks: "Privacy, security is a huge one...the number one concern here is privacy and security of those data." She noted improvements in secure environments over recent years but emphasized ongoing vigilance due to the sensitivity surrounding information about students with learning differences.
Bias within AI systems was another major topic discussed at Stanford’s event. Research has shown certain demographic groups are overrepresented in special education assessments—a trend known as disproportionality—which could be perpetuated if historical biases are encoded into new algorithms. According to Hau: "[Bias] is an issue with learning differences that has been well documented by research...I think this is an area where tech developers are actually eager to do better."
To address bias directly, participants identified co-designing products alongside individuals with lived experience as crucial: "It's actually the entire benefit of this effort that we led is a concept of co-designing with and for learners with learning differences," Hau said.
Another promising development is the multimodal capability of newer AI systems, which can incorporate video and audio alongside text, aligning more closely with universal design for learning principles for supporting diverse learners.
Hau concluded on an optimistic note regarding future possibilities: "This technology could actually bring some new possibilities for a broader set of learners, which is very hopeful."