As Tech CEOs Are Grilled Over Child Safety Online, AI Is Complicating the Issue

The CEOs of five social media companies including Meta, TikTok and X (formerly Twitter) were grilled by Senators on Wednesday about how they are preventing online child sexual exploitation.

The Senate Judiciary Committee called the hearing to hold the CEOs to account for what lawmakers said was a failure to prevent the abuse of minors, and to ask whether they would support the bills that members of the Committee had proposed to address the problem.

It is an issue that is getting worse, according to the National Center for Missing and Exploited Children, which says reports of child sexual abuse material (CSAM) reached a record high of more than 36 million last year, as reported by the Washington Post. The organization's CyberTipline, the centralized U.S. system for reporting online CSAM, was alerted to more than 88 million files in 2022, with almost 90% of reports coming from outside the country.

Mark Zuckerberg of Meta, Shou Chew of TikTok, and Linda Yaccarino of X appeared alongside Evan Spiegel of Snap and Jason Citron of Discord to answer questions from the Senate Judiciary Committee. While Zuckerberg and Chew appeared voluntarily, the Committee had to serve Spiegel, Citron, and Yaccarino with subpoenas.

Senator Richard Durbin, a Democrat from Illinois and the Committee chair, opened the hearing with a video showing victims of online child sexual exploitation, including families of children who had died by suicide after being targeted by predators online.

Senator Lindsey Graham, a Republican from South Carolina and the ranking member, told attendees about Gavin Guffey, the 17-year-old son of South Carolina state House Rep. Brandon Guffey, who died by suicide after he was sexually extorted on Instagram. "You have blood on your hands," Graham told the CEOs, singling out Zuckerberg in particular.

Many of the assembled lawmakers expressed frustration with what they said were insufficient efforts by the social media companies to tackle the problem, and affirmed their own eagerness to act. In the past year, in addition to holding several hearings, the Judiciary Committee has reported a number of bills aimed at protecting children online to the Senate floor, including the EARN IT Act, which would remove tech companies' immunity from civil and criminal liability under child sexual abuse material laws.

In their testimonies, the CEOs all laid out the measures they were taking to prevent online harms against children. However, when pressed on whether they would support the bills reported by the Judiciary Committee, many demurred.

At one point, Senator Josh Hawley, a Missouri Republican, asked Zuckerberg whether he would like to apologize to the parents of children affected by online CSAM that were present at the hearing. "I’m sorry for everything you’ve all gone through," Zuckerberg said. "It’s terrible. No one should have to go through the things that your families have suffered."

On multiple occasions, the CEOs highlighted their companies' use of artificial intelligence to address the issue of online CSAM. In his testimony, Citron pointed to Discord's acquisition of Sentropy, a company that developed AI-based content moderation tools. Zuckerberg said that 99% of the content Meta removes is automatically detected by AI tools. However, the lawmakers and tech bosses did not discuss the role AI is playing in the proliferation of CSAM.

AI-generated child abuse images

The advent of generative artificial intelligence is adding to concerns about harms to children online. Law enforcers around the world have been scrambling to deal with an onslaught of cases involving AI-generated child sexual abuse material, a phenomenon many courts are confronting for the first time.

On Jan. 10, 17-year-old Marvel actress Xochitl Gomez, who portrayed teen superhero America Chavez in the 2022 film Doctor Strange in the Multiverse of Madness, spoke about how difficult it was to scrub X of AI-generated pornographic images of her.

Speaking on a podcast with actor Taylor Lautner and his wife, Gomez said her mother and her team have been trying, without success, to get the images taken down. "She had emails upon emails, but like, there's been a lot and she's dealt with it all," she said. "For me, it wasn't like something that was mind boggling, but just more like, 'Why is it so hard to take down?'"

Authorities are faced with the complicated task of stopping the spread of AI-generated CSAM as the technology evolves at a rapid pace and the ease of access to tools such as so-called nudify apps increases, even among children themselves.

As AI models improve and become more accessible, they will also become harder to police should they be used for illegal purposes, like creating CSAM, according to Dan Sexton, the chief technology officer of the U.K.-based Internet Watch Foundation (IWF). He says the world needs to agree on a solution fast: “The longer we wait to come up with a solution for each of these potential issues that might happen tomorrow, the greatest chance is that it will have already happened and then we are chasing behind, and you're trying to undo harms that already happened.” 

A growing problem

For the most part, creating any sort of CSAM, including with AI, is already widely criminalized. In its latest report, the International Centre for Missing & Exploited Children found that 182 of 196 countries have legislation that specifically addresses CSAM or is sufficient to combat it. For instance, U.S. federal law defines CSAM as any visual depiction of sexually explicit conduct involving a minor, which may include "digital or computer generated images indistinguishable from an actual minor" as well as "images created, adapted, or modified, but appear to depict an identifiable, actual minor." In Ireland, the law is stricter: CSAM is illegal whether simulated or not.

Some offenders have already been convicted under such laws. In September, a South Korean court sentenced a man in his 40s to 2.5 years in prison for using AI to generate hundreds of lifelike pornographic images of children. Last April, a Quebec judge sentenced a 61-year-old man to three years in prison for using deepfake technology to create synthetic child sexual abuse videos. That same month, a 22-year-old man in New York was sentenced to six months in prison and 10 years of probation as a sex offender after pleading guilty to several charges related to generating and disseminating sexually explicit images of more than a dozen underage women.

Read More: Taylor Swift Deepfakes Highlight the Need for New Legal Protections

But the resolution of these cases is not always straightforward. In September, Spain was rocked by a case in which AI-generated nude images of more than 20 girls, aged 11 to 17, circulated online. Yet it took time for law enforcement authorities to determine the criminal liability of the alleged culprits, who are also believed to be minors. Manuel Cancio, professor of criminal law at the Autonomous University of Madrid, told TIME that "if it was a clear case where everybody would know where to locate it in which section of the [Spanish] criminal code, then the charges would have been put forward already." David Wright, director of the U.K. Safer Internet Centre, tells TIME that the child-protection organization has also received reports of school children creating and spreading AI-generated naked images of their peers.

AI technology today can use an unsuspecting child's likeness, or create an image that isn't based on a real child, to generate sex abuse material in just a few clicks, even though many developers prohibit such use. The Stanford Internet Observatory found that some AI models were trained on datasets containing at least 3,000 images of known CSAM, sourced from mainstream platforms like X and Reddit, even though those platforms' policies ban the posting of such content. Sexton says the IWF has also received reports of images of past child victims being reused to create new CSAM of them.

X did not respond to requests for comment. A Reddit spokesperson said in a statement to TIME after publication that the platform uses both automated tools and human intelligence to detect and prevent the spread of CSAM, and that it reports users who distribute such material. "We actively maintain policies and procedures that don't just follow the law, but go above and beyond it," the spokesperson said, adding that the models referenced in the report were trained on data culled by unauthorized third-party scrapers, meaning the data were taken without Reddit's consent and were not held to the company's safety standards.

David Thiel, chief technologist at the Stanford Internet Observatory, says AI-generated CSAM has outpaced the solutions used to track and take down the content. “It’s just a constant flow of new material instead of this recirculation of known material which makes the visual fingerprinting part really difficult,” Thiel said.

How can the spread of AI CSAM be stopped?

AI model developers say their tools have guardrails that prohibit abuse. OpenAI prohibits the use of its image generator DALL-E for sexual imagery, while Midjourney requires that content be PG-13. Stability AI has updated its software to make creating adult content harder. But according to internet safety organization ActiveFence, some users have found ways to jailbreak these models. OpenAI's leadership has called on policymakers to step in and set parameters for the use of AI models.

Purging all existing abuse material from the internet would require enormous amounts of computing power and time, so tech companies and organizations like Thorn have developed machine-learning technologies that detect, remove, and report CSAM. One technique is hash-matching, a process that fingerprints files so platforms can flag copies of, and images visually similar to, known abuse material. Another is the use of classifiers, machine-learning tools that estimate the likelihood that a piece of content is CSAM.
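To illustrate the hash-matching idea, here is a minimal Python sketch using a simple "average hash": an image is shrunk, converted to grayscale, and reduced to a short fingerprint of bits that survives small edits like resizing or recompression, so near-duplicates of known material can be flagged. This is only an illustrative sketch, not the systems these companies actually use; the file name, threshold, and hash database below are hypothetical, and production tools rely on far more robust, proprietary perceptual hashes (such as Microsoft's PhotoDNA) matched against vetted databases of known CSAM.

```python
# Illustrative sketch of perceptual hash-matching, not a production system.
# File names and the matching threshold are hypothetical.
from PIL import Image


def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink to hash_size x hash_size grayscale, then set one bit per pixel
    depending on whether it is brighter than the image's mean brightness."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance means the images look similar."""
    return bin(a ^ b).count("1")


# In practice, a platform compares each uploaded file's hash against a vetted
# database of hashes of known abuse material (hypothetical file shown here).
known_hashes = {average_hash("known_flagged_image.png")}


def is_probable_match(path: str, threshold: int = 5) -> bool:
    """Flag an upload if its hash is within `threshold` bits of a known hash."""
    h = average_hash(path)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)
```

Hash-matching of this kind can only catch material that has already been identified and fingerprinted, which is why classifiers, which score previously unseen images, are used alongside it.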

Another solution being studied is limiting access to the technology. Advocates for open-source AI models argue that open sourcing fosters collaboration among users, promotes transparency by loosening the grip of the companies that run the models, and democratizes access. Sexton says that while access for all may sound favorable in principle, there are risks. "In reality, the effect we see of putting really powerful technology like this in the hands of everyone means you're putting it in the hands of child sexual offenders, you're putting it in the hands of pedophiles and perpetrators and organized crime. And they will and are creating harmful content from that."

Rebecca Portnoff, head of data science at Thorn, however, says the debate over access has created a false binary between open-source and closed models, and she suggests that the biggest opportunity to stop the spread of AI-generated CSAM lies with developers. She says developers should focus on building models that are "safety by design" to mitigate harm to children, rather than relying on prevention measures that merely react to existing threats.

Portnoff emphasizes that time is of the essence. "It's not going to slow down," she says. "That takes me back to this concept of using every tool that we have at hand in order to properly address it. And those tools include both the ones that we actually build, they include collaboration with regulatory bodies, they include collaboration with tech companies."

Correction, Feb. 2, 2024

The original version of this story mischaracterized Thorn. The organization primarily builds technology to defend children from sexual abuse; it is not an anti-trafficking organization.

Write to Will Henshall at will.henshall@time.com