COVID-19 changed the way public policy experts, the healthcare industry and journalists covering the pandemic weighed the value of clinical studies that had not yet been peer-reviewed, a new analysis found.
The peer-review process used by major publications like the New England Journal of Medicine and the Journal of the American Medical Association takes “a median time of 186 days from preprint to publication,” according to a study published today in JAMA Network that examines the reliability of preprint studies.
That's more than half a year. For a COVID-19 variant, six months is a lifetime, or at least long enough for a dominant variant to be usurped by another iteration.
Many peer-reviewed studies about Delta came out when Omicron began taking over. Then came the sub-subvariants of Omicron, a list that keeps growing.
In the JAMA Network study, Canadian researchers from various academic institutions conducted a meta-epidemiological analysis of 152 preprint studies that gathered data from randomized clinical trials focusing on COVID-19. As of October 1, 2022, 119 of 152 preprints (78.3%) had been published in medical journals.
“Of the 119 published preprints, there were differences in terms of outcomes, analyses, results or conclusions in 65 studies (54.6%),” the JAMA Network study said. “The main conclusion in the preprint contradicted the conclusion in the journal article for two studies.”
Eric Rubin, M.D., the editor-in-chief of the NEJM, said he couldn't definitively comment on the JAMA Network study because he had seen only its conclusion statement.
However, Rubin told Fierce Healthcare that “it looks as if the [peer-review] process actually worked—higher quality studies were more likely to be published than lower quality studies, meaning that peer review actually offered value. And I’d add that, among the thousands of studies that were performed during COVID, not all that many were practice-changing and really needed to be rushed. In my own experience, the most important studies, meaning those that actually affected patient care, got out there very quickly even with peer review. I know that was true at our journal and get the sense that it was largely true at other major journals.”
In the JAMA Network study, researchers found that preprints with smaller sample sizes and a high risk of bias were less likely to make it into a peer-reviewed journal. They located the studies—published between Jan. 1 and Dec. 31, 2021—through the World Health Organization COVID-19 database and Embase.
The researchers said that they used randomized clinical trials because they provide “high-quality evidence for knowledge synthesis,” and all the studies had been posted on the medRxiv website, a well-known repository for preprint studies.
The study found a "substantial time lag" between when the studies were posted on medRxiv and when they appeared in scientific journals. It concluded that "preprints make evidence publicly available much earlier than published journal articles. However, preprints may have a higher risk of bias, and their results may change when they are eventually published in a journal. Therefore, it is important to critically appraise preprints before applying the trial results."
That's the same advice given by Harlan Krumholz, M.D., a cardiologist and research scientist at Yale University and Yale New Haven Hospital. Krumholz is also one of the co-founders of medRxiv.
“In the pandemic, and going forward, medRxiv and others are playing a key role in the sharing of early science,” Krumholz told Fierce Healthcare. “As with peer review, people need not just accept what [is] posted (or published) but need to read the pieces critically—take into account the track record of the group—and recognize that science is progressive and self-correcting (a phrase I first heard from Francis Collins).”
The JAMA Network study noted that “the preprint of the Randomized Evaluation of COVID-19 Therapy (RECOVERY) trial on the benefits of dexamethasone led to implementation of this agent as the standard of care for COVID-19 even before the official journal publication.”
Krumholz said that because of his familiarity with the “stellar” group of scientists in charge of RECOVERY, “when they posted a preprint, I knew I needed to read it critically” but also “had a strong belief that I could rely on their work. For other studies, less well-designed and by less well-known groups, I might not be as confident. It is not that I defer to reputation—but track record does matter.”
Kevin Kavanagh, M.D., is the founder and president of the patient advocacy organization Health Watch USA. He’s also been keeping a close watch on COVID-19 throughout the pandemic, often using information from preprint studies. He was one of the first experts to sound the alarm about Delta’s lethality, and early on noted that COVID-19 doesn’t just affect the respiratory system, but other organs in the body as well.
Kavanagh argued that “by the time journal articles are formally reviewed and printed often the information will be outdated or lives lost in the delay.” However, he echoed Krumholz in saying that preprint studies need to be handled with care and that readers should be able to spot shoddy research.
“Both the history of the authors and institution is very important and of course you have to read the whole article, plus any comments which are posted on the preprint’s webpage,” Kavanagh told Fierce Healthcare. “Some journals have an open review process where anyone can make and post comments during the formal review process of the article. These steps should also be performed for articles that have been formally published, since peer review is largely a volunteer process and not an ironclad guarantee of quality.”
Krumholz said that one of the main purposes of medRxiv is to enhance scientific dialogue and perhaps even collaborative research.
“It turns out that science posted by credible groups that goes on to be published in peer review happens to have conclusions with high concordance with the preprint,” said Krumholz. “I am not surprised by that finding. Importantly, many preprints do not progress—and that is fine too. On the server, there is a wide spectrum of scientific quality and merit. In fact, a function of the server is to enable people to get feedback that might lead them not to progress the science further.”