Experts Warn Congress of Dangers AI Poses to Journalism

AI poses a grave threat to journalism, experts warned Congress at a hearing on Wednesday.

Media executives and academic experts testified before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law about how AI is contributing to the big tech-fueled decline of journalism. They also talked about intellectual property issues arising from AI models being trained on the work of journalists, and raised alarms about the increasing dangers of AI-powered misinformation.

“The rise of big tech has been directly responsible for the decline in local news,” said Senator Richard Blumenthal, a Connecticut Democrat and chair of the subcommittee. “First, Meta, Google and OpenAI are using the hard work of newspapers and authors to train their AI models without compensation or credit. Adding insult to injury, those models are then used to compete with newspapers and broadcasters, cannibalizing readership and revenue from the journalistic institutions that generate the content in the first place.”

Big tech, AI, and the decline of local news

Tech companies and the news industry have been in conflict since the rise of digital platforms over a decade ago, a shift that has seen tech platforms profit while many news organizations have gone out of business. Researchers at the Medill School of Journalism, Media, Integrated Marketing Communications at Northwestern University found that the U.S. has lost almost a third of its newspapers and almost two-thirds of its newspaper journalists since 2005.

Countries around the world are starting to take action to force big tech to support local journalism. In June 2023, Canada passed a law requiring tech companies to pay news outlets for content featured on their platforms; Australia passed a similar law in 2021. In the U.S., comparable legislation has been proposed by Senators Amy Klobuchar, a Democrat from Minnesota, and John Kennedy, a Republican from Louisiana, both of whom are members of the Subcommittee on Privacy, Technology, and the Law.

“Over the last several years, there have been countless studies, investigations, and litigation by the DOJ and the FTC in the past two administrations that have found anti-competitive conduct by the monopoly distributors of news content,” Danielle Coffey, president and CEO of trade association News Media Alliance, said at the hearing. “This marketplace imbalance will only be increased by [generative AI].”

Coming copyright battles

Generative AI systems—those capable of generating text, images, or other media—must be trained on vast amounts of data. To secure access to high-quality text data, prominent AI developer OpenAI has partnered with the Associated Press, a U.S.-based nonprofit news agency, gaining access to part of AP’s archive in exchange for use of OpenAI’s products. OpenAI has a similar partnership with Axel Springer, a German multinational media company, under which ChatGPT will summarize articles from Axel Springer-owned news outlets and provide links and attribution.

But not all news outlets have come to similar deals. On Dec. 27, 2023, the New York Times sued OpenAI and its major investor and partner, Microsoft. The lawsuit argues that OpenAI’s models were trained on the New York Times’ content and now offer a competing product, causing “billions of dollars in statutory and actual damages.” OpenAI responded with a blog post on Jan. 8, 2024, in which it contested the Times’ legal claims and noted the various actions it has taken to support a healthy news ecosystem.

The New York Times lawsuit is the highest-profile of the many copyright cases launched against AI developers. In July 2023, comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey sued OpenAI and Meta for training AI models on their writing without permission. And in January 2023, artists Kelly McKernan, Sarah Andersen, and Karla Ortiz sued Midjourney, Stability AI, and DeviantArt—companies that develop image-generating AI models—likewise for training models on the artists’ work. In October, U.S. District Judge William Orrick dismissed parts of that lawsuit, and the plaintiffs amended and refiled it in November.

Generative AI tools have been built with “stolen goods,” argued Roger Lynch, CEO of Condé Nast, the media company that owns publications including the New Yorker, Wired, and GQ. Lynch called at the hearing for “congressional intervention” to ensure that AI developers pay publishers for their content. “The amount of time it would take to litigate, appeal, go back to the courts, appeal, maybe ultimately make it to the Supreme Court to settle; between now and then there’ll be many, many media companies that would go out of business,” he said.

However, Curtis LeGeyt, president and CEO of the National Association of Broadcasters, a trade association, said talk of legislation was “premature,” contending that current copyright protections should apply. “If we have clarity that current law applies to generative AI, let’s let marketplace work,” he said.

Misinformation concerns

LeGeyt also warned the senators about the dangers that AI-generated misinformation poses to journalism. “The use of AI to doctor, manipulate, or misappropriate the likeness of trusted radio or television personalities risks spreading misinformation, or even perpetuating fraud,” he said.

He also cautioned that newsrooms face an increased burden in vetting content to determine whether it is genuine and accurate. “Following the recent Oct. 7 terrorist attacks on Israel, fake photos and videos reached an unprecedented level on social media in a matter of minutes,” he said. “Of the thousands of videos that one broadcast network sifted through to report on the attacks, only 10% of them were authentic and usable.”

Write to Will Henshall at will.henshall@time.com