Modern Science Has a Publish-or-Perish Problem

Got a spare $14,800? If so, you can be first co-author on a new research paper about cancer. Want to add a friend? That’ll be $26,300.

Those are–or were–the going rates for bylines from a Chinese publishing outfit offering to make life easier for academics in need of a quick career boost. “The heavy labor can be left to us,” promised the sales document. “Our service can help you make progress in your academic path!”

The scam was exposed by the journal Science in a 2013 sting, but no one pretended the exposé meant the end of scientific fraud. As competition grows for tenured positions at universities and plum jobs at prestige hospitals, the temptation to fudge results, tweak data and invent studies wholesale has pushed some scientists to the academic dark side.

On Aug. 18, Springer, a major academic publishing company, announced that it was retracting 64 papers because of irregularities in the peer-review process. That followed a similar retraction of 43 papers by one Springer imprint late last year. In the early 2000s, an average of 30 research papers were withdrawn per year; in 2011 alone, the figure was 400.

The website Retraction Watch–the very existence of which says a lot–keeps an eye on such things. The site includes a leaderboard listing the 30 scientists worldwide with the most retractions to their names. The winner: Yoshitaka Fujii, a Japanese expert in postoperative nausea, who has a whopping 183 my-bads. That is obviously bad news for Fujii, but it also has implications for the rest of us, who rely on solid research in medicine, agriculture and chemistry, for instance, to improve–and even save–lives.

The problems that led to the Springer retractions center on weak spots in the peer-review process. Legitimate journals do not publish research unless experts in the field (peers) have read the work and signed off on it (review). But how to find the experts? With an estimated 1.8 million papers published each year in more than 28,000 journals–Springer alone publishes 2,000 titles–it can be hard to round up experts willing to spend hours or days reviewing someone else’s work. Sometimes the authors recommend reviewers, who may be perfectly good scientists but who also may be their colleagues, grad students or friends. In one scam cited in the journal Nature, the author of a paper recommended herself as a reviewer–under her maiden name.

The new batch of retracted papers came to light because the names of the scientists who reviewed the work were real but their email addresses were fake, something the journals say they did not notice at the time. The fakery may have entered when third-party companies were hired to find reviewers; plenty of legitimate for-profit firms offer editing and counseling services to authors, especially those abroad who may need help with the language or with recommendations for reviewers. Some speculate that this outsourced corner of the reviewing process is exactly where the recent fraud crept in.

“We are encouraging institutions to provide guidance to authors on legitimate third-party services,” said William Curtis, a Springer executive vice president, in an email. But it is beyond the scope of the publisher “to investigate what happened in the case of each article,” he added.

Better policing could catch more fraud before it reaches publication, but the problem won’t truly be solved until the industry as a whole finds ways to recognize success through something other than a tally of how many papers a researcher publishes. In the U.S., no more than 20% of scientists have a peer-reviewed paper to their name.

The pressure to join that elite quintile, with all of its career-boosting cachet, is intense. The journals only make it worse by publicizing what’s known as the “impact factor,” a score based on how often a journal’s papers are cited in later work. The system, says Dr. Charlotte Haug, vice chair of the Committee on Publication Ethics, an industry watchdog group, has a troubling whiff of social media about it.

“We see the same thing with likes and tweets and retweets,” she says. “It’s the same kind of system. It makes for very strange incentives.” That, in turn, makes for very bad science.
