George Siemens sent me a link to a post by Cameron Neylon that attempts to pound yet another nail in the coffin of peer review. As an editor of a peer-reviewed journal (IRRODL), I was naturally both curious and a bit defensive about the charges.

Neylon argues that for peer review, “Whatever value it might have we largely throw away. Few journals make referee’s reports available, virtually none track the changes made in response to referee’s comments enabling a reader to make their own judgment as to whether a paper was improved or made worse. Referees get no public credit for good work, and no public opprobrium for poor or even malicious work. And in most cases a paper rejected from one journal starts completely afresh when submitted to a new journal, the work of the previous referees simply thrown out of the window.”

Let me respond by describing some IRRODL practices. We don’t make referees’ reports public, mostly because of the work that would be involved in cleaning them up and getting permissions, and because I question the value of these reports. Do readers really care if Reviewer A doesn’t like the wording of the abstract, Reviewer B thinks the author has missed a major reference, or Reviewer C says the statistical analysis of the data is weak? We return all these comments to the author and ask them (using tracked changes and a point-form response) how they responded (or why they chose not to respond) to the concerns and suggestions of the reviewers. Of course, this applies only if we want to see the article again for further review or publication. As Neylon suggests, if we decline the offer to publish, the process may start anew with a second journal, but the work of the reviewers is not “simply thrown out the window.” In some cases the author gives up (often appropriately so, as the reviewers found fatal flaws, or ones that demand more work than the author is willing to give to the paper). The author also gains the lesson learned and the experience (hard though it may feel) of how NOT to write a scholarly article. But in other cases, a smart author will take into account the comments of the reviewers and improve the paper, thus increasing its chances of successful publication elsewhere. This demonstrates the ongoing value of the reviewers’ work, whether or not the article is eventually published elsewhere.

Reviewers at IRRODL do get public credit for their reviewing work on the IRRODL web site, and if they are academics, their reviewing contributions usually appear in annual reports and applications for promotion and tenure.

Neylon’s solution to what he perceives as the unsupportable costs and inherent bias of peer review is to “radically cut the number of peer reviewed papers probably by 90-95% leaving the rest to be published as either pure data or pre-prints.” I think this solution misses the real value of peer review: it is an effective tool both for expert filtering and for actively improving the content.

Since Alvin Toffler popularized the term in Future Shock, the hysteria around, and popularity of, disparaging information overload has continued to grow. More recently, Clay Shirky argued that information overload is just a symptom of filter failure. Peer review serves to filter out at least 50% of the articles submitted to IRRODL, thereby assuring readers that what is published has been judged to be accurate, interesting, and a contribution to the distance education literature. I hardly have time to read all of the blogs, articles, newspapers, and email list postings in this area, so the filtering by expert peers is much appreciated. But rather than just filter, the reviewers add value, because their comments influence the work that I finally read. I can honestly say that in my 7 years as editor of IRRODL, not a single article has been published without at least minor improvements contributed by peer reviewers. I appreciate the informal filtering of friends, who email or tweet ideas or postings they know I will read, and the more formal aggregation postings from people like Stephen Downes or the Commonwealth of Learning. But these annotations and references are all from a look in the rear-view mirror and very rarely result in improvements to the work itself.

It is true that the number of articles submitted to journals is increasing, likely in response to increases in the number of scholars, but also to increases in the number of journals. So the cost of peer review remains an issue. It might be addressed by offering remuneration to reviewers, but this would only increase the cost of publication and put further stress on open access publications. Scholars learn by undertaking reviews, and I consider peer review to be a component of my job, one that is rewarded through recognition and a salary check at the end of each month.

I have also seen the efforts by JIME to make public their reviewers’ names, their reviews, and the responses by authors. They attempted to publish articles “with links to a commentaries area, which may include elements from the article’s original review debate. Readers are invited to make use of this resource, and to add their own commentaries.” But comments were sparse, and they seem to have discontinued the heroic effort. Scholars just don’t seem to want to engage in spontaneous debate on a publisher’s web site, saving that for sporadic blog fights and full paper rebuttals.

So peer review is certainly not perfect, and it does consume scholarly resources, but it serves both to filter and to improve the materials we use to build knowledge within our disciplines and (at least in open access journals) to expose these ideas to everyone. Neylon’s idea of just relying on happenstance and popularity ratings of posts seems to be a recipe for compounding, rather than solving, the challenges of finding relevant, interesting, and important works.