
The Police Are Using Computer Algorithms to Tell If You’re a Threat


Andrew Guthrie Ferguson is a professor of law at the University of the District of Columbia School of Law and the author of The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement.

Can a computer predict violence? In Chicago, Illinois, an algorithm rates every person arrested with a numerical threat score from 1 to 500-plus. The process has been going on for four years, and almost 400,000 Chicago citizens now have an official police risk score. This algorithm, still secret and publicly unaccountable, shapes policing strategy and the use of force, and threatens to alter suspicion on the streets. It is also the future of big data policing in America and, depending on how you see it, either an innovative approach to violence reduction or a terrifying example of data-driven social control.

In practical effect, the personalized threat score automatically displays on police computer dashboards so an officer can know the relative risk of the suspect being stopped. The predictive score also shapes who gets targeted for proactive police intervention. These interventions can range from a home visit by police officers, to additional police surveillance, to an invitation to a community meeting — all of which convey the same clear message: law enforcement is watching. And while Chicago is in the vanguard of predictive policing, it is not alone; other cities like New York and Los Angeles are considering how to use big data policing to target at-risk individuals.

Person-based predictive policing began in 2009 as an attempt to apply a public health approach to violence. Just as epidemiological patterns reveal environmental toxins that increase health risks (like getting cancer), criminal patterns reveal factors that increase life risks (like getting shot). The key is to identify the predictive risk factors and try to remedy the underlying environmental causes. Researchers at the Illinois Institute of Technology developed an algorithm for the Chicago police to prioritize those most at risk by analyzing: past arrests for violent crime, weapons offenses or narcotics; age at the most recent arrest (the younger the age, the higher the score); incidents where the individual was a victim of a shooting or assault; and the trend line of criminal activity (whether the rate is increasing or decreasing). A computer then crunches the variables and produces a relative threat score estimating the likelihood that a person will either shoot someone or be shot. This is the risk score that places someone on the Strategic Subjects List (colloquially known as “the heat list”).
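The actual model, its inputs and its weights have never been released, but a minimal, purely illustrative sketch of this style of risk scoring might look like the following. Every field name, coefficient and threshold here is a hypothetical assumption made for explanation, not the CPD/IIT algorithm.

```python
from dataclasses import dataclass

# Hypothetical illustration only: the real Strategic Subjects List model is
# secret. The variables mirror the publicly reported risk factors; the
# weights and scaling below are invented.

@dataclass
class ArrestRecord:
    violent_arrests: int          # prior arrests for violent crime
    weapons_arrests: int          # prior weapons offenses
    narcotics_arrests: int        # prior narcotics offenses
    age_at_last_arrest: int       # younger age -> higher score in this sketch
    times_shot_or_assaulted: int  # incidents as a victim of a shooting or assault
    activity_trend: float         # > 0 if arrests are growing more frequent, < 0 if declining


def threat_score(r: ArrestRecord) -> int:
    """Return a relative risk score on a 1-to-500-plus style scale (illustrative)."""
    raw = (
        40 * r.violent_arrests
        + 30 * r.weapons_arrests
        + 10 * r.narcotics_arrests
        + 25 * r.times_shot_or_assaulted
        + 50 * max(r.activity_trend, 0.0)
        + 2 * max(0, 30 - r.age_at_last_arrest)  # youth adds risk in this sketch
    )
    # Clamp to the 1..500 band described in reporting on the list.
    return max(1, min(500, round(raw)))


if __name__ == "__main__":
    example = ArrestRecord(
        violent_arrests=2, weapons_arrests=1, narcotics_arrests=0,
        age_at_last_arrest=19, times_shot_or_assaulted=1, activity_trend=0.5,
    )
    print(threat_score(example))  # 182 on this invented scale
```

Even this toy version makes the core design choice visible: the score is driven almost entirely by prior police contact and victimization history, which is why the questions of fairness and biased inputs discussed below matter so much.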

Police claim the targeting works, pointing to the high percentage of shooting victims who were accurately flagged in advance by the heat list. Critics counter that the targeting is overbroad and ineffective, noting that the list includes tens of thousands of people with high scores but no prior arrest for a violent crime.

But whether or not it works, this use of a predictive risk score for everyone raises three bigger unanswered questions: Is it fair? Is it biased? And is it the future we want for policing?

Fairness involves both how people get scored and how police treat those with threat scores. Relying on a black-box computer algorithm to rank threats in a society obviously creates its own risks. Society needs to be able to trust the data, to approve the inputs and to evaluate the outputs. Currently, there is no public oversight of the police data, inputs or outputs, so communities are left in the dark, unable to audit or challenge any individual threat score.

Equally troubling, the threat scores impact the fairness of how police interact with people on the streets. High-risk scores guide violence-interruption strategies, influencing who police contact and who falls under their heightened surveillance. But the threat scores also distort the day-to-day police decisions about use of force and reasonable suspicion. After all, once police have information that a person has a high threat score, this knowledge will color criminal suspicion and increase perceived danger, resulting in more frequent and more aggressive interactions with people the algorithm deems “high risk.”

Bias can also creep into the system. As described in the 2017 Department of Justice Civil Rights Division’s investigation of the Chicago Police Department, patterns of racial discrimination remain a real problem. While one could hope that algorithmic justice would avoid human bias, the reality is that those inputs (especially arrests) are impacted by the all-too-human discretionary decisions of police officers as they patrol or investigate suspected crime. So while the big data math may be “objective,” the inputs are not free from human biases, distorting the ultimate outcomes.

The biggest question remains whether we want police ranking every arrestee as a possible high- or low-risk threat. The original predictive targeting system in Chicago focused only on the top 400 or so highest-risk individuals. But expanding the program citywide, to everyone arrested or even to people merely connected to a victim of violence, means that many people without significant criminal justice involvement are now identified as a risk.

The answers are not easy. Law-abiding citizens frustrated with being racially profiled may welcome an objective metric to reduce police contact. After all, the threat scores provide an alternative to suspicion based on stereotyping race, age or neighborhood. Threat scores could also reduce police suspicion for many low- or no-risk individuals.

At the same time, the inability to control, audit or question the scoring process raises real legitimacy concerns. The temptation to intentionally or unintentionally manipulate the data to justify targeting certain people creates its own threat. And most fundamentally, the ease of quantifying risk masks the difficulties in deciding whether society should quantify humans as societal risks. These are the questions that big data policing raises as it spreads across the nation, and that local communities must debate before they enact new predictive policing strategies.


