
A French Muslim Council Is Suing Facebook and YouTube Over the New Zealand Attack Video


A French Muslim group is suing Facebook and YouTube after the internet giants allowed a livestream of the March 15 New Zealand mosque shootings to be broadcast on their platforms, Agence France-Presse reports.

The French Council of the Muslim Faith (CFCM) said it is taking action against the companies for “broadcasting a message with violent content abetting terrorism, or of a nature likely to seriously violate human dignity and liable to be seen by a minor,” according to AFP, which received a copy of the complaint.

In France, these offenses are punishable by up to three years’ imprisonment and a €75,000 ($85,000) fine.

In a statement published five days after the shootings, Facebook said it removed the original video “within minutes” of being alerted by New Zealand police. No users flagged the 17-minute clip, which was viewed around 4,000 times before being taken down, the company said.

But the footage was copied and extensively reposted after the Christchurch attacks, which killed 50 people, sending Facebook, YouTube and other platforms scrambling to block and delete it.

Facebook said that in the first 24 hours alone it removed 1.5 million videos of the attack, more than 1.2 million of which were blocked at upload.

In the wake of the shooting, tech giants have come under heavy criticism for their inability to stop the circulation of content portraying violence and acts of terror.

During a parliamentary address on March 18, New Zealand Prime Minister Jacinda Ardern spoke out about the role of tech companies in spreading extremist content.

“We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published,” Ardern said.

“They are the publisher, not just the postman,” she added. “It cannot be a case of all profit, no responsibility.”

The U.S. House Committee on Homeland Security also published an open letter to the CEOs of Facebook, YouTube, Twitter and Microsoft, calling on the companies to “ensure that the notoriety garnered by a viral video on your platforms does not inspire the next act of violence.”

Facebook said that in the New Zealand case it was fighting a “core community of bad actors” that continually re-uploaded edited versions of the video in various formats.

The Menlo Park-based company added that it is working to improve its artificial intelligence systems to combat the spread of hateful material.


Write to Hillary Leung at hillary.leung@time.com