Schools Deploy AI to Monitor Student Online Activity, Raising Privacy Concerns
To prevent self-harm among students, schools across the United States are increasingly turning to artificial intelligence-powered software that monitors students' online activity. Installed on school-issued devices, these programs scan the text students type and flag potential threats based on specific keywords and phrases.
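Reporting on these tools does not describe their internal workings, but the general approach described above, matching typed text against a watchlist of phrases, can be illustrated with a short sketch. The phrase list, function name, and threshold-free scoring below are hypothetical examples, not GoGuardian's actual logic.

```python
# Illustrative sketch of keyword-and-phrase flagging; the watchlist and
# behavior here are hypothetical, not any vendor's actual implementation.
FLAGGED_PHRASES = [
    "hurt myself",
    "end my life",
    "want to die",
]

def flag_text(text: str) -> list[str]:
    """Return any watchlist phrases found in a piece of student-typed text."""
    lowered = text.lower()
    return [phrase for phrase in FLAGGED_PHRASES if phrase in lowered]

if __name__ == "__main__":
    sample = "An old poem: some days I want to die, but the sun still rises."
    matches = flag_text(sample)
    if matches:
        # A real system would route an alert to school staff for review
        # rather than acting on the raw match alone.
        print(f"Alert: matched {matches}")
```

Even this toy version shows why false positives occur: a poem quoting such a phrase matches in exactly the same way a genuine cry for help does.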
The technology has not been without controversy, however. Misinterpretations have led to unnecessary and distressing interventions, as in a recent case in Neosho, Missouri, where a poem a 17-year-old student had written years earlier triggered an alert from the monitoring software GoGuardian Beacon and prompted an unexpected police visit. The student’s mother described the incident as “one of the worst experiences” for her child.
The use of these monitoring systems saw a significant uptick during the COVID-19 pandemic, as schools sought ways to keep tabs on students’ well-being remotely. While the software aims to prevent self-harm by identifying concerning language, there is a lack of transparency regarding its effectiveness.
Some schools report successful interventions, but the invasive nature of the technology has raised serious privacy concerns. Civil rights groups argue against involving law enforcement in these situations, citing potential trauma and violation of student privacy.
The debate surrounding these systems is further complicated by the pressing issue of youth suicide, which remains the second leading cause of death for Americans aged five to 24. Law enforcement officials acknowledge the high rate of false alerts but emphasize the potential to save lives.
Critics, including Baltimore City Councilman Ryan Dorsey, question the appropriateness of police involvement and the lack of concrete data on outcomes. As schools continue to grapple with the balance between student safety and privacy, the role of AI in student welfare remains a contentious issue.