During five years working in product management for facial-recognition companies in Silicon Valley in the mid-2000s, John Honovich noticed how surveillance companies often misrepresented or exaggerated their products' capabilities to customers, including claims about their ability to capture high-quality and unbiased information. That led him to found a niche trade publication in 2008 called the Internet Protocol Video Market (IPVM), which aimed to provide reliable reporting on the limits of security technology.

Some 15 years later, IPVM has grown from an industry-focused outlet read mostly by security technicians into a leading source of information on the harms of facial-recognition technology, and has in turn influenced U.S. policy. Honovich, 47, now leads a team of 30: journalists and researchers specializing in surveillance, who report for IPVM's website by digging through company filings and financial documents, and engineers who test surveillance cameras and other security equipment in a Pennsylvania security lab.

The team is firmly committed to uncovering the problems of video surveillance, especially as governments continue to use the technology for documentation and, in some instances, persecution. “There are clearly deep ethical and moral problems there, but even pragmatically, I question how you can distinguish ethnic groups just by looking at them,” Honovich says. “Take Europeans, for example: Can you detect someone who’s French? What is a ‘French person’? So there are all sorts of massive issues there.”

In 2020, IPVM unearthed critical documents on Beijing’s use of surveillance apparatus to persecute Uighurs, an overwhelmingly Muslim ethnic minority in northwestern China. It partnered with multiple news outlets to deliver scoops: a December report by the Washington Post detailed how the Chinese tech giant Huawei tested a face-scanning system that could trigger a “Uighur alarm,” while another report by the L.A. Times revealed how the Chinese firm Dahua claimed to identify Uighurs through camera technology that would automatically alert authorities. The U.S. Congressional-Executive Commission on China and the State Department acknowledged IPVM’s work in official hearings.

As AI makes surveillance technology increasingly sophisticated, IPVM’s work has become even more important. At the same time, AI-powered facial recognition is prone to error. One worrying prospect, Honovich says, is that the technology is less accurate at detecting people of certain ethnicities, which can lead to misidentification, for example, in criminal cases. This is already happening. In Detroit, a Black woman filed a lawsuit in August against the police department for falsely arresting her in February, while she was eight months pregnant, after facial-recognition software allegedly misidentified her as a carjacker and robber. The criminal charges were later dismissed, but the landmark federal lawsuit seeks to recover punitive and other damages, noting “the troubling implications of facial recognition technology in this case.”


Write to Astha Rajvanshi at
