
Schools Let AI Spy on Kids Who May Be Considering Suicide. But at What Cost?

Paige is a social scientist at RAND whose research focuses on racial inequality in the U.S. Boudreaux is a policy researcher at RAND who studies the ethics of technology. Ayer is a clinical psychologist and senior behavioral scientist at RAND who conducts research on youth suicide.

Suicide is now the second leading cause of death among American youth between the ages of 10 and 14. The problem of youth suicide has only grown worse in recent years, in part because of a nationwide shortage of mental health professionals, particularly in schools, where an on-staff psychologist, counselor, or social worker, if available, can help identify at-risk youth and take steps toward an appropriate intervention.

As a remedy, school administrators, faced with daunting funding and staffing shortages, have increasingly looked to technology to help them manage the youth suicide crisis. Companies such as Bark, Gaggle, GoGuardian, and Securly have developed AI-based student monitoring software that tracks students’ computer use to identify those facing mental health challenges. The software generally runs in the background of students’ school-issued devices and accounts and flags activity that may indicate a risk of self-harm.

This tracking software is being used nationwide on millions of students, yet many parents and community members remain unaware of its existence. Students may have some sense that their school devices are being monitored, but they likely have a limited understanding of how the collected information is used. And even though identifying suicide risk might be a worthwhile objective, AI surveillance may feel like a significant breach of privacy, while also posing other unanticipated harms.

As researchers whose work focuses on inequality, mental health, and technology policy, we interviewed school staff and others to better understand the benefits and risks of this software. One superintendent told us that monitoring software can identify at-risk students who may not already be on the radar of school staff, providing an opportunity to intervene before the situation gets worse.

We are researchers, but we are also parents, and an added layer of suicide risk detection can feel, at first, like a no-brainer. The idea of losing a child is terrifying, and so it is completely understandable that schools would reach for a seemingly low-cost tool that can “catch” the private, sensitive, suicide-related thoughts that students might not disclose to anyone outside their Google search bar.

But the problem is that, apart from anecdotes, there is little hard evidence supporting the accuracy of this software, and there are numerous examples throughout history of well-meaning approaches to mental health intervention causing unintended harms. It is also increasingly clear that emerging technology can have a range of harmful collateral effects on youth mental health.


Through a careful review of the existing evidence and interviews with dozens of school staff, parents, and others, we found that AI-based monitoring, far from being a solution to the persistent and growing problem of youth suicide, may well create more problems than it solves.

First, the use of AI-based monitoring threatens student privacy. Because the software runs while students use their school-issued devices and accounts, it can collect large amounts of data about their lives. While some companies have made voluntary pledges to safeguard student data, there is no national regulation governing much of what data are collected, how they are stored, and whether they are shared.

Adding to this privacy risk, families may find it difficult to opt out of the software. We found that, across many school districts, families are required to consent to AI-based monitoring as a condition of using school-issued devices in the first place. If families opt out of monitoring, they must provide their own computer for school use, which is not an affordable option for many families.

Second, our research shows that many parents and researchers have concerns that using AI-based algorithms to identify at-risk students could exacerbate inequalities. For example, there have been reports that internet searches of LGBTQ+ students have been flagged at disproportionate rates by AI software. Their activities may then be brought to the attention of school officials, involuntarily “outing” these students.

The potential for suicide risk prediction algorithms to be biased against minoritized groups has been well documented in other studies. And while many have claimed that these algorithms can be corrected for bias, there is little transparency about how and when AI alerts are generated, which makes it difficult to audit the software to determine whether it is, in fact, biased. A 2023 study raised further concerns about the alerts generated by AI-based student monitoring software, documenting that the programs consistently flag content related to race, gender, and sexual orientation, including searches on topics such as Malcolm X and the Gay Men’s Chorus of Washington.

Lastly, while the AI software flags students, it is up to schools to decide how to respond to the alerts they receive. Throughout our interviews, we heard stories of alerts generated by AI-based monitoring being used to discipline students. For example, one teacher told us about a student experiencing a mental health challenge who was suspended from school rather than referred to a counselor or other mental health professional.

Worse still, AI-based monitoring might lead to increased encounters between students and law enforcement. We found that, on weekends and school holidays, when they do not have staff on hand to review alerts, many schools automatically route AI-generated suicide risk alerts to local law enforcement. From the school’s point of view, this is often the best way to ensure that a student experiencing a mental health crisis receives immediate help. But law enforcement may not be best positioned to support students in crisis and might even exacerbate problems, as we have already seen when police have been called in to assist with mental health emergencies. The risk of violent interactions with law enforcement is real, especially for youth of color, and must be considered in weighing the pros and cons of using these tools.

Some people we interviewed also pointed out that this software has the potential to deepen existing inequalities in school discipline. Students of color already face disproportionately high rates of disciplinary action, such as suspension and expulsion, which are connected to the school-to-prison pipeline. Alerts created by AI software could widen these disparities by increasing the likelihood of law enforcement contact.


Ultimately, it remains unclear whether these tools can accurately detect suicide risk in students. So far, no studies have followed up with the students these programs flagged as “at risk” for suicide to see whether they actually were at risk (“true positives”) or not (“false positives”), nor have studies examined the extent to which students at risk for suicide were not flagged by the programs (“false negatives”). School and law enforcement responses to these alerts, and ultimate student outcomes (whether a student receives medical attention or mental health care, or whether a flagged student has a violent encounter with law enforcement), are also not documented. This lack of evidence means it is not clear that the benefits of the software outweigh the risks we found in our research.

Parents, students, school staff, and health professionals must carefully weigh the potential benefits and challenges of AI-based monitoring. While it may serve as an important resource for schools amidst a growing youth mental health crisis, the actual, realized benefits and harms of this technology—including whether it can accurately detect suicide risk—are unknown.

In the meantime, as school districts spend their budgets on AI-based tools for suicide risk detection, it is important to recognize the known problems. The software raises significant privacy concerns and could perpetuate existing inequalities. AI companies and schools must therefore ensure that families have comprehensive information about how the software is used, and families should be able to opt out of monitoring without penalty. In addition, more regulation is needed at the federal, state, and local levels to ensure safeguards are in place to protect students, so that this software, which is, after all, designed to improve students’ mental health outcomes, does not end up doing more harm than good.


