Speech-language pathologist Marisha Speights is developing an AI system that could make early diagnosis of speech disorders more inclusive, after finding that traditional screening tools failed to identify children from underrepresented backgrounds. Her work is centered at the PedzSTAR Lab at Northwestern University, where researchers are collecting and analyzing acoustic speech biomarkers from children across varied geographic, cultural, and socioeconomic groups.
“We have many children that are not represented [in current tools],” said Speights in an interview with EdSurge. So far, her team has compiled samples from over 400 children, with a goal of 2,000. These recordings are used to train machine learning systems to predict potential disorders more reliably across diverse populations.
The AI-driven approach comes at a time when demand for speech-language pathologists is outpacing supply. According to the American Speech-Language-Hearing Association’s (ASHA) 2024 school survey, 27% of professionals reported considering leaving the field due to burnout, citing increased caseloads and administrative burdens.
Researchers like Venu Govindaraju at the University at Buffalo, who is leading a $20 million NSF-backed AI project, emphasize early detection. “The sooner you detect, the easier [treatment] will be,” he told EdSurge.
AI’s role will remain assistive, not diagnostic, experts stress. “The technology won’t replace speech pathologists,” said ASHA’s Lauren Arner, who supports AI use to streamline paperwork and improve access in rural areas.
To protect children's privacy, Speights says, the team stores recordings on secure internal servers and excludes personally identifiable information from the data set.
ASHA plans to release official AI guidance for clinicians this summer.