George Siemens sent me a link to a post by Cameron Neylon that attempts to pound yet another nail into the coffin of peer review. As an editor of a peer-reviewed journal (IRRODL), I was naturally both curious and a bit defensive about the charges.
Neylon argues that for peer review, “Whatever value it might have we largely throw away. Few journals make referee’s reports available, virtually none track the changes made in response to referee’s comments enabling a reader to make their own judgment as to whether a paper was improved or made worse. Referees get no public credit for good work, and no public opprobrium for poor or even malicious work. And in most cases a paper rejected from one journal starts completely afresh when submitted to a new journal, the work of the previous referees simply thrown out of the window.”
Let me respond by describing some IRRODL practices. We don’t make referees’ reports public, mostly because of the work that would be involved in cleaning them up and getting permissions, and because I question the value of these reports. Do readers really care if Reviewer A doesn’t like the wording of the abstract, Reviewer B thinks the author has missed a major reference, or Reviewer C says the statistical analysis of the data is weak? We return all these comments to the author and ask them (using tracked changes and a point-form response) how they responded (or why they chose not to respond) to the concerns and suggestions of the reviewers. Of course, this applies only if we want to see the article again for further review or publication. As Neylon suggests, if we decline the offer to publish, the process may start anew with a second journal, but the work of the reviewers is not “simply thrown out the window”. In some cases the author gives up (often appropriately so, as the reviewers found fatal flaws or ones that demand more work than the author is willing to give to the paper). The author also gains the lesson learned and experience (hard though it may feel) of how NOT to write a scholarly article. But in other cases, a smart author will take into account the comments of the reviewers and improve the paper, thus increasing its chances of successful publication elsewhere. This demonstrates the ongoing value of the reviewers’ work, whether or not the article is eventually published elsewhere.
Reviewers at IRRODL do get public credit for their reviewing work on the IRRODL website, and if they are academics, their reviewing contributions usually appear in annual reports and applications for promotion and tenure.
Neylon’s solution to what he perceives as the unsupportable costs and inherent bias of peer review is to “… radically cut the number of peer reviewed papers probably by 90-95% leaving the rest to be published as either pure data or pre-prints.” I think this solution misses the real value of peer review: it serves as an effective tool both for expert filtering and for actively improving the content.
Since Alvin Toffler popularized the term in the 1970s, the hysteria and popularity of disparaging information overload have continued to grow. More recently, Clay Shirky argued that information overload is just a symptom of filter failure. Peer review serves to filter out at least 50% of the articles submitted to IRRODL, thereby assuring readers that what is published has been judged to be accurate, interesting, and a contribution to the distance education literature. I hardly have time to read all of the blogs, articles, newspapers, and email list postings in this area, so the filtering by expert peers is much appreciated. But rather than just filter, the reviewers add value, because their comments influence the work that I finally read. I can honestly say that in my 7 years as editor of IRRODL, not a single article has been printed without at least minor improvements contributed by peer reviewers. I appreciate the informal filtering of friends, who email or tweet ideas or postings they know I will read, and the more formal aggregation postings from people like Stephen Downes or the Commonwealth of Learning. But these annotations and references are all from a look in the rear view mirror and very rarely result in improvements to the work.
It is true that the number of articles submitted to journals is increasing – likely in response to increases in the number of scholars, but also to increases in the number of journals. So the cost of peer review remains an issue. It might be addressed by offering remuneration to reviewers, but this would only increase the cost of publication and put further stress on open access publications. Scholars learn by undertaking reviews, and I consider peer review a component of my job – one that is rewarded through recognition and a salary check at the end of each month.
I have also seen the efforts by JIME to make public their reviewers’ names, their reviews, and the responses by authors. They attempted to publish articles “with links to a commentaries area, which may include elements from the article’s original review debate. Readers are invited to make use of this resource, and to add their own commentaries.” But comments were few and far between, and they seem to have discontinued the heroic effort. Scholars just don’t seem to want to engage in spontaneous debate on publishers’ websites, saving that for sporadic blog fights and full-paper rebuttals.
So peer review is certainly not perfect, and it does consume scholarly resources, but it serves both to filter and to improve the materials we use to build knowledge within our disciplines and (at least in open access journals) to expose these ideas to everyone. Neylon’s idea of just relying on happenstance and popularity ratings of posts seems to be a recipe for compounding, rather than solving, the challenges of finding and using relevant, interesting, and important works.
Interesting commentary (I would webcam comment but we have the TV on playing the Olympics in the background).
I want to focus on this: “these annotations and references are all from a look in the rear view mirror and very rarely result in improvements to the work.”
There are two stances you can take with respect to commentary (not necessarily mutually exclusive, but the difference creates the divide between peer review and my own style of formal aggregation postings):
– you can intend the comment to create an improvement in the current work, or
– you can intend the comment to create an improvement in a future work
This is a difference in attitude toward knowledge and writing, and it has wider ramifications in scholarship as a whole.
First, the attitude. When you are focused on improving a single work, you are approaching knowledge and scholarship as something that can be pinned down and definitively recorded.
Now by that I don’t mean you can achieve perfection, or any such thing. But this approach can be contrasted with one that takes knowledge and scholarship to be more like a stream of (social) consciousness, with each article an artifact of the moment.
I don’t think of knowledge and scholarship as static; I think of them as fluid, and therefore to me it seems counterintuitive to attempt to capture a paper and fix it as a definitive statement of fact (or knowledge, or findings, or however you want to represent it).
Second, the wider ramifications. “The author also gains the lesson learned and experience (hard though it may feel) of how NOT to write a scholarly article.” So there is an educational value to reviewer comments, and this forms a significant part of the value of the review.
But it is very inefficient to address these comments to a single recipient. One of the comments I make in one of my posts about an article can reach thousands of people. If I offer suggestions about how to write a scholarly article (as I sometimes do), then this lesson is learned by many more people at a time.
But there is another, even more important, factor at work here. These reviewers do not merely educate people in what counts as scholarly work, they *define* what counts as scholarly. This definition is part and parcel of the science or discipline in question. And yet, if the reviews are never seen publicly, this most important part of a science – defining what counts as knowledge – takes place in secret.
And because it can be, it is abused. Rivals reject papers for no good reasons. Reviewers suggest that certain citations are needed – namely, of their own work (or if they are more clever, work that in turn cites their own work). Standards of evidence may be looser if it supports a certain perspective (usually, the need for more research) and an entire discipline (like, say, education) may see its standards slip.
The difference between me and an academic reviewer is that I am held accountable for every harsh word, every appeal to an academic standard, every suggestion of a missing reference, every appeal to theoretical support. I can’t secretly lobby for a certain theory, undercut an opponent or competitor, bias the evidential basis for a proposition, or do any of the many other things that can and do happen in peer review.
For me, these three major factors are a conclusive argument against the practice of peer review. The post-publication review represents a more appropriate epistemological stance, is pedagogically more efficient, and is academically more honest.
Well Stephen, as usual you have added value to this post and done it publicly – making your point. But a couple of comments.
I too see knowledge development as part of a process, but the stream is made of individual and collective pieces and the ways they are woven together. For example, the gardener’s attention to an individual plant does not lessen the value of the whole garden; rather, it increases the whole through attention to its individual components. This acknowledgment of the flow is the essence of the literature review that is usually a component of scholarly reports. Showing how and where this one piece fits – or does not fit – and how it contributes to deeper, more complete, or radically different understandings of previous work is a trademark of quality scholarship.
You then go on to write “Standards of evidence may be looser if it supports a certain perspective (usually, the need for more research) and an entire discipline (like, say, education) may see its standards slip.” Now here is where peer review would be useful. I have read this sentence about 8 times, and I don’t understand what it says! Now this may be related to my lack of understanding or brain power, but a reviewer would ask the author to make it a bit clearer!!
I do agree with the value of the public nature of comments, but again I point to the challenges that JIME had with that model, and knowing how hard it is to get good reviews, I would hate to add further challenges. Maybe there just are not enough Stephen Downeses with the time, energy, commitment, and brains to make it a viable model.
It needs to be more open at the very least. And the only time it really matters is when an article is rejected. That’s where people have problems with the process, for obvious reasons.
At the very least have a public list of all reviewers (and their qualifications, hopefully) and a list of standards for review.
Alec started a new journal and asked for article proposals for a special issue. He got 50. Anonymous reviewers accepted 30 article proposals. None of these numbers were shared, including the fact that only 5 articles were intended to be published in the special issue. For some extremely top tier or extremely unknown journals, an issue is only 3-5 articles, but for most, especially online ones where print costs are not a factor, it is more like 10-20.
So 30 authors or groups of authors, including myself, went through writing a full journal article based on an accepted proposal, and then 25 of us were rejected – again by completely anonymous and unknown reviewers with unknown qualifications, and without being told facts such as that only 5 articles were going to be published after 30 article proposals WERE accepted.
I don’t mind being rejected, but if I had known only 5 articles were going to be accepted, I wouldn’t have wasted my time, and instead would have tailored my article for a different journal.
The other huge elephant in the room is that most journal articles are not read by people. More people read news and blogs online, where they might run across (better or worse) summaries of research. And if I post to a blog, 100-200 people a day might read it. If I publish in a journal or book, after waiting months or over a year for it to appear, it may be read by some. By then I’m many studies and papers down the line.
I don’t always agree with Stephen Downes’ ideas, but in this case I do lean his way toward supporting public post-publication review as an alternative:
http://halfanhour.blogspot.com/2010/02/on-peer-review.html
Perhaps, ideally, a little balance would be nice. There have been some good proposals suggested elsewhere – both on blogs and in journals like First Monday.
Hi Doug
Thanks for your reply to my post. I can appreciate the ill feelings re the special issue you described. When we accept a proposal for a paper, it puts us all in a bit of a bind. We think the paper is worth developing and likely worth publishing (else we wouldn’t say yes to the proposal). But we also want to make sure it is a quality piece and have it subjected to peer review. Thus, we have to make a guess at how many proposals to accept, given two unknowns:
1. how many of the accepted proposals will turn out to be quality papers
2. how many authors will not come through with a paper after having their proposal accepted. This is especially frustrating when you have said no to a promising proposal because you accepted a similar one – that may never show up!
Having noted the uncertainty, I would NEVER accept 30 proposals for 5 papers to publish. The last time we did this I think we accepted around 14 of 30 proposals, of which 11 were returned as papers and 8 were published. For our next round of special issues, I am encouraging special issue editors to call for full papers (bypassing the proposal route), which speeds things up for all of us and reduces the problems you noted.
I also don’t agree that “most journal articles are not read by people”. Journal articles are read (and studied) by students and scholars years after they were published. Blogs and news articles are lucky to be read a month after production. Of course, with powerful search engines there are exceptions to this rule, but tools like Google Scholar ensure that good journal articles keep rising to our consciousness.
As an aside, another reason to support open access and online journals is that it is usually the publishers who restrict the size of an issue. We can and have published up to 20 articles in an issue if we have the quantity of quality work – and the reviewers to ensure it!!
Terry
Hi Terry,
I must take issue with your statement “Blogs and news articles are lucky to be read a month after production.”
My little blog, with its limited readership, still gets hits on some very old articles – because (I hope) they are still useful to people. That’s the beauty of search engines and open publishing.
Similarly, there are posts on Virtual Canuck that I refer back to, such as:
http://terrya.edublogs.org/2005/12/12/collaborative-learning-activities-using-social-software-tools/
and
http://terrya.edublogs.org/2006/03/09/blogging-as-academic-publication/
I have served as founding editor of three journals. The first journal, On the Horizon, began as a subscription-based print journal. I formed an editorial board to review papers but also posted submitted papers to a mailing list (horizon list) and encouraged participants to comment on the papers. No one ever commented, that I recall. Assigned reviewers did comment, and their comments were useful to me and to authors. When publishing and editing On the Horizon began to take too much of my time, I was able, through the University, to transfer journal ownership to Jossey Bass, who, in turn, provided funds to pay for a graduate assistant. We then put the journal on the Web (for subscribers only; we shared subscription fees) and Jossey Bass distributed the print copies. Subscribers to the online version were able to see reviewer critiques and consequent drafts of each paper prior to publication. They were also able to comment on these drafts. No reader commented on the drafts. Occasionally, one reviewer would comment on another review, but not often.
The second journal I founded was The Technology Source, an open access online journal. This journal was initially sponsored by Microsoft and was published on the Microsoft web site. I was not allowed to appoint referees. However, after a year, Jim Ptaszynski, the Microsoft champion of TS, was assigned other duties; his replacement thought TS to be a bit too academic for the Microsoft page. We worked out an arrangement to transfer ownership to UNC, at which time I formed an editorial board and TS became a peer-reviewed journal.
In my opinion, the difference in quality between the non-reviewed (except by me) articles and the peer-reviewed ones was significant. This is why, when we started Innovate, it was peer-reviewed from the outset.
The saying, “Many heads are better than one (or two)” is true in my experience.
Best.
Jim
Editor-in-Chief Emeritus of On the Horizon, The Technology Source, and Innovate
Professor Emeritus of Educational Leadership
UNC-Chapel Hill
Terry:
I don’t want to know or see how every pizza I order is made, that a car part had to be replaced during the assembly of my new vehicle because it was defective, or the comments editors make on the tenth draft of a manuscript of a book. I just want to dine on a tasty pizza, drive a well-functioning car, and read a good book.
My role as an editor is not to function as a gatekeeper of ideas or an idea filter. As I recently wrote to George Siemens, my role as an editor is to ensure that the articles are based on a sound argument or research design and that the content is accurate and readable. As you and others know, I have reviewed an infinite number of articles – spending three to nine hours to review and comment on each article. In one recent article, a biased sample of participants was selected; thus, the conclusions were tainted. In another, the author indicated that the African Virtual University had expired (which it hasn’t). And in another, it took me an hour to figure out what the author was trying to say in one paragraph. So I rewrote the paragraph and suggested that the author consider my revisions. It was up to the author to decide whether my suggestions were worthy. Peer reviewers are quality gatekeepers, not idea filters.
There is a place for unfettered, raw comments – blogs, listservs, open forums, and tweets, for example. But there is also a place for peer-reviewed journals on the Web. When I pick up a peer-reviewed journal article, I may disagree with the ideas or concepts presented, but I want to use my time wisely as I prefer not to read articles which have faulty research design, inaccurate information, unrecognizable structures, or poor writing.
Clayton R Wright
> I want to use my time wisely as I prefer not to read articles which have faulty research design, inaccurate information, unrecognizable structures, or poor writing.
Ironically, in this effort, you have by your own admission as a reviewer read an infinite number of articles which had faulty research design, inaccurate information, and the rest.
Once in the land of academia and peer review, I am now back in the private sector. Yet I am still focussed on developing accurate, relevant, understandable content – and on using “peer” review. As manager of learning design, I employ a design and development team and use subject matter experts to build learning material. I also have a peer review process, involving senior instructional designers and select subject matter experts who act as my gatekeepers (quality assurance). The designers concern themselves with the quality assessment of the instructional design and delivery; the subject matter experts contribute to the quality assessment of the content. Without their input, we could not even pretend to be developing something that an employee would be able to use in the performance of their job.
We also conduct three levels of evaluation of learning with the participants: the smile sheet, an exit survey, and, two months later, an evaluation of how the learning is applied in the workplace environment. We also have participant forums where students can advise and comment on course content and delivery, and suggest new ways to learn or revised content. Ultimately, we have a cycle of assessment involving those who decide on content, those who develop delivery, and those who experience the learning. All of it, hopefully, contributes to “doing better” the next time out.