Meta Platforms Inc.’s independent Oversight Board agreed with the company’s recent decision to leave up a misleading video of US President Joe Biden, but criticized its policies on content generated by artificial intelligence as “incoherent” and too narrow.
The board, set up by the company in 2020 to independently review some of its most significant content moderation decisions, on Monday urged Meta to update its policies quickly ahead of the 2024 US general election.
“The Board is concerned about the manipulated media policy in its current form, finding it to be incoherent, lacking in persuasive justification and inappropriately focused on how content has been created, rather than on which specific harms it aims to prevent, such as disrupting electoral processes,” the organization said in a statement.
The criticism from the board came after reviewing Meta’s decision to leave up a manipulated video of Biden, which was edited to make it look like he was inappropriately touching his adult granddaughter’s chest. The video included a caption that referred to Biden as a “pedophile.”
The Oversight Board agreed that the video didn’t violate Meta’s manipulated media policies, but said those policies are far too narrow. It also recommended that Meta stop removing altered media that doesn’t violate any other policy against harmful content, and instead label it and leave it up.
“We are reviewing the Oversight Board’s guidance and will respond publicly to their recommendations within 60 days in accordance with the bylaws,” a Meta spokesperson said. The company isn’t obligated to take the board’s recommendations, and at times has ignored its advice.
In January, an influx of deepfakes of high-profile individuals showed the public and lawmakers how quickly manipulated content can spread online with the help of new AI tools. Explicit images of Taylor Swift inundated the social platform X, formerly known as Twitter, while robocalls faking Biden’s voice were sent to voters ahead of the New Hampshire primary, urging them not to vote.
Meta’s current policy blocks only fake videos of people saying things they didn’t say. It doesn’t cover portrayals of people doing things they didn’t do, like the Biden post, and it applies only to videos created with artificial intelligence tools. Content edited without AI, such as a looped or reversed video clip, could mislead the average user but isn’t prohibited.
The board recommended that Meta expand its policies to cover manipulated media that portrays an action a person didn’t actually take, and to address fake audio. It also urged the company to make clear which harms it aims to prevent, such as interference with the voting process, and to focus on those harms rather than on whether content was created using AI tools.
“As it stands, the policy makes little sense,” Michael McConnell, the board’s co-chair, said in a statement. “It bans altered videos that show people saying things they do not say, but does not prohibit posts depicting an individual doing something they did not do. It only applies to video created through AI, but lets other fake content off the hook.”
Meta, along with peers like X and ByteDance Ltd.’s TikTok, will find itself dealing with more and more manipulated media as AI technology improves and the election inches closer. Meta first rolled out its policy ahead of the 2020 election, part of a broader effort by the company to crack down on election-related misinformation after the 2016 US campaign exposed how social networks could be gamed.
In many ways, this type of situation is exactly why the Oversight Board was created. The independent body, which is made up of academics and others with public policy backgrounds, is funded by Meta, but is meant to serve as a check on the company’s power over content. In the past, the board has reviewed issues across Facebook and Instagram regarding election misinformation and drug-related posts. It upheld Meta’s suspension of Donald Trump following the Capitol riot on Jan. 6, 2021, while simultaneously criticizing the company’s implementation of the suspension.