AI undercounts 'at-risk' students, causes funding cuts in US schools
Nevada has ignited a controversy by using an artificial intelligence (AI) system to flag students who may struggle academically. The state had previously labeled all low-income students as "at risk" of academic and social problems. But the new AI algorithm from Infinite Campus takes into account several factors beyond income to assess whether a student could fall behind in school. This has drastically cut the number of at-risk students, from over 270,000 in 2022 to fewer than 65,000 now.
AI's impact on school funding and programs
The AI system's reclassification of at-risk students has also affected school funding in Nevada. Many schools have seen state funds they depended on disappear, forcing districts to cut programs and revise budgets. The development has alarmed school leaders, who say the number of children needing extra support has increased, not decreased, since the COVID-19 pandemic began. The situation raises questions about the use of AI in school administration and about who should be considered an at-risk child.
Nevada's approach to identifying at-risk students
Nevada is the only state that currently depends entirely on this machine-learning system to identify at-risk children and distribute funds for them. The program, offered by Infinite Campus, combs through student data such as grade-point average, unexcused absences, and disciplinary incidents. It also looks at factors like guardian engagement with school portals, family structure, and home language. The system then produces a "grad score" for each student, with lower scores indicating a higher risk of failing to complete school.
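Infinite Campus has not published its model or weights, so the Python sketch below is purely illustrative: it shows how a "grad score" of this general kind could, in principle, be computed from the factors named above using a logistic-regression-style weighting. Every field name, weight, and the scoring formula here is a hypothetical assumption, not the company's actual method.

```python
import math
from dataclasses import dataclass


@dataclass
class StudentRecord:
    # Hypothetical input fields mirroring the factors described in the article.
    gpa: float                      # grade-point average, 0.0-4.0
    unexcused_absences: int         # count for the school year
    disciplinary_incidents: int     # count for the school year
    portal_logins_per_month: float  # guardian engagement with the school portal
    home_language_english: bool
    lives_with_both_guardians: bool


# Hypothetical weights -- Infinite Campus keeps its actual weighting confidential.
WEIGHTS = {
    "gpa": 1.2,
    "unexcused_absences": -0.15,
    "disciplinary_incidents": -0.4,
    "portal_logins_per_month": 0.1,
    "home_language_english": 0.3,
    "lives_with_both_guardians": 0.3,
}
BIAS = -1.0


def grad_score(s: StudentRecord) -> float:
    """Return a 0-100 score; lower means higher risk of not finishing school."""
    z = (
        BIAS
        + WEIGHTS["gpa"] * s.gpa
        + WEIGHTS["unexcused_absences"] * s.unexcused_absences
        + WEIGHTS["disciplinary_incidents"] * s.disciplinary_incidents
        + WEIGHTS["portal_logins_per_month"] * s.portal_logins_per_month
        + WEIGHTS["home_language_english"] * s.home_language_english
        + WEIGHTS["lives_with_both_guardians"] * s.lives_with_both_guardians
    )
    # Squash to (0, 1) with a logistic function, then scale to 0-100.
    return 100.0 / (1.0 + math.exp(-z))


if __name__ == "__main__":
    student = StudentRecord(
        gpa=2.1,
        unexcused_absences=12,
        disciplinary_incidents=2,
        portal_logins_per_month=0.5,
        home_language_english=False,
        lives_with_both_guardians=False,
    )
    print(f"grad score: {grad_score(student):.1f}")
```

In a sketch like this, attendance and discipline pull the score down while GPA and engagement pull it up; the real system's factor weights, as noted below, are confidential.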
AI system's grad score influences state funding
The AI system classifies students with scores in the bottom 20% as medium to high risk. Nevada used this category to allocate funds, cutting more than 200,000 students from the state's tally. The exact weight given to each factor in producing the score is kept confidential by Infinite Campus. This method has sparked debate over transparency and fairness in using AI for decisions that affect student welfare and education funding.
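As a rough illustration of the bottom-20% cutoff described above (not Infinite Campus's actual procedure), the sketch below flags the students whose scores fall in the lowest fifth of a score list. The threshold, labels, and sample data are assumptions for demonstration only.

```python
def flag_bottom_quintile(scores: dict[str, float]) -> set[str]:
    """Return the IDs of students whose grad scores fall in the bottom 20%.

    Hypothetical illustration: the article says bottom-20% scores are
    treated as medium-to-high risk, but the real cutoff logic is not public.
    """
    ranked = sorted(scores, key=scores.get)           # lowest scores first
    cutoff_count = max(1, round(len(ranked) * 0.20))  # size of the bottom quintile
    return set(ranked[:cutoff_count])


# Example with made-up scores: only the lowest-scoring student is flagged.
scores = {"s1": 82.0, "s2": 41.5, "s3": 67.3, "s4": 90.1, "s5": 55.0}
print(flag_bottom_quintile(scores))  # {'s2'}
```

Because such a cutoff is relative rather than absolute, the flagged group shrinks or grows with the statewide score distribution, which is one reason the count of at-risk students can change so sharply when the scoring method changes.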