Friday, August 25, 2006

Peer review in the sciences

In the September issue of Wired magazine, Adam Rogers has an interesting (and amusing) piece about how peer review is evolving -- Get Wiki With It: Peer review – the unsung hero and convenient villain of science – gets an online makeover. Rogers notes:

"Almost every journal does it, from marquee pubs like Nature to highly specialized periodicals like International Journal of Chemical Reactor Engineering. (No offense to IJCRE – you guys are a helluva read.) When it works, it's genius – quality control that ensures the best papers get into the appropriate pages, lubricating communication and debate. It's the quiet soul of the scientific method: After forming hypotheses, collecting data, and crunching numbers, you report the results to learned colleagues and ask, "What do you folks think?"

But science is done by humans, and humans occasionally screw up. They plagiarize, fake data, take incorrect readings. And when they do? Oy! Somebody always blames peer review. The process is lousy at policing research. Bad papers get published, and work that's merely competent (boring) or wildly speculative (maverick) often gets rejected, enforcing a plodding conservatism. It seems silly to say this about a system that's been in development since the mid-1700s, but the whole thing seems kind of antiquated. "Peer review was brilliant when distribution was a problem and you had to be selective about what you could publish," says Chris Surridge, managing editor of the online interdisciplinary journal PLoS ONE."


The piece mentions new peer review processes being tested as part of Nature's new peer review debate and by the new journal from the Public Library of Science, PLoS ONE.

Related posts:
JAMA editorial: "The influence of money on medical science"
Recent media commentary on peer review
Image editing in the medical literature

2 Comments:

At 9:47 AM, Blogger Rachel said...

I feel like Surridge gets it wrong by suggesting that, because peer review isn't perfect, we should just scrap it. Peer review should be better, and more appropriate. I think he also has a bias toward publishing more just for the sake of it, given that he seems to frame peer review primarily as a process for preventing too many things from being published. You still have to be selective about what you publish, regardless of whether you have all the server space in the world, because some things rejected for publication are simply poorly done, irrelevant, or wrong.

At 10:53 AM, Blogger Becky said...

An excellent point -- when something isn't working perfectly, you fix it; you don't discard it entirely until you've exhausted other options.

I think I might have skewed the piece a little by ending the quote where I did. Surridge is a managing editor for the new PLoS venture, PLoS ONE, which, from what I've read, really does try to adapt the model of peer review rather than discarding it completely.

Their scope note: "PLoS ONE features reports of primary research from all disciplines within science and medicine. By not excluding papers on the basis of subject area, PLoS ONE facilitates the discovery of the connections between papers whether within or between disciplines."

Their peer review process seems to include a combination of initial editorial review that focuses on technical execution, and subsequent open peer review by the scientific community to examine the merit of the article's approach, results, and conclusions.

I think it's an interesting model and it will be exciting to see how it develops--the range of scientific inquiries included in the journal, the type and quality of commentary from the scientific community.

One of my concerns about the model is that it is perhaps better suited to the basic and physical sciences than to clinical items -- if one of the main barriers to the practice of EBM is that individual clinicians may not have the skills and almost certainly do not have the time to assess the rigor of each article independently, how would an article undergoing this kind of peer review be useful to them? I wonder if, as they develop these efforts, the editors might try some kind of periodic summarization of the key critiques, to aid individuals in judging the quality of each work and its conclusions.

