Twitter Inc. will inform users who may have seen posts covertly crafted by Russians during the 2016 presidential campaign, the company’s director of U.S. public policy told a Senate committee Wednesday.
The company is “working to identify and inform individually the users” who could have come across accounts linked to the Internet Research Agency, which aided Russian efforts to meddle in the race, Carlos Monje told the Commerce, Science, and Transportation Committee.
Alongside Google’s YouTube and Facebook Inc., Twitter appeared at a committee hearing on social media companies’ efforts to combat terrorism, the latest round of scrutiny from lawmakers on both sides of the aisle, who have grilled the companies over issues including the manipulation of their platforms by Russians.
Monje, who said details of the plan would be available shortly, was responding to questioning by Senator Richard Blumenthal. The Connecticut Democrat, who had requested such alerts, thanked Facebook for similar efforts to notify those who may have seen false or inflammatory content from Russian accounts, while saying he was “disappointed” by Google’s response.
‘Blew Off’
“It essentially blew off my concerns by saying the nature of the platform made it difficult to know who has viewed its content,” Blumenthal said.
Democrats on the committee urged the social media platforms to harden themselves against foreign and domestic interference ahead of the midterm elections in November.
“Based on results, you’re not where you need to be for us to be reassured you’re securing our democracy,” said Senator Brian Schatz of Hawaii. “How can we know that you’re going to get this right — and before the midterms?”
Monje told Schatz that Twitter is working to prevent such interference.
“We’re continuing to improve our tools and we’re going to get better,” he said.
Orlando Shooting
The companies sought to assure skeptical lawmakers that they’ve tweaked their methods in the past year to more effectively identify and take down terrorist-related content.
While the social-media companies ticked off statistics on their progress, committee members cited their past failures.
“The killer responsible for the Orlando nightclub shooting — in which 49 innocent people were murdered, and 53 were injured — was reportedly inspired by digital material that was readily available on social media,” Committee Chairman John Thune, a Republican from South Dakota, told the executives.
The technology companies have long concurred with lawmakers and law enforcement on the need to curb terrorist-related content. They are more at odds with demands that they make judgments to root out “fake news” and bogus messages. Since the disclosure that Russian agents used social media to stoke discord during the 2016 campaign, some members of Congress have urged legislation to impose regulations on political advertising on social networks like those that apply to television.
Citing Statistics
Much of the companies’ testimony on terrorist messaging was information they have offered in the past, through blog posts and at other congressional hearings, to assure the government they are working on the issue.
“Since June, YouTube has removed over 160,000 violent extremist videos and has terminated approximately 30,000 channels for violation of our policies against terrorist content,” said Juniper Downs, YouTube’s public policy director, in prepared remarks for the hearing. “We achieved these results through tougher policies, enhanced enforcement by machines and people, and collaboration with outside experts.”
Today, 98 percent of the videos removed from YouTube for violent extremism are caught by algorithms, a speedier approach than human flagging, which accounted for half of the takedowns as recently as June, Downs said. Still, Google has 10,000 people monitoring content on its sites.
Twitter suspended more than half a million terrorist accounts in 2017 alone, three-quarters of them before they had even begun tweeting, because the company’s algorithms flagged them when they were created, Monje said.
Monika Bickert, Facebook’s head of product policy and counterterrorism, said 99 percent of the ISIS and al-Qaeda-related terror content Facebook removes is detected before anyone reports it, largely by automated systems. She also said the company intends to double the number of people working on safety and security from 10,000 to 20,000 by the end of 2018.
The next big threat could be foreign governments adopting the tactics of terrorist groups by using social media to encourage people in the U.S. to conduct attacks at home, Clint Watts, a fellow at the Foreign Policy Research Institute, told the committee.
He cited the example of Edgar Welch, who was arrested in 2016 after he took an assault rifle to a Washington pizza parlor on the basis of an online conspiracy theory about the restaurant. If fake news online can prompt such incidents, foreign governments could exploit the same tactics to sow chaos in the U.S., Watts said.