The debates among journal editors and other academics about
the merits or otherwise of particular forms of peer review continue. A growing
number of academics in the STEM subjects seem to think that post-publication
peer review provides for higher quality control than the traditional
post-submission pre-publication review. Their logic is that mistakes in
manuscripts are less likely to be found during a process involving only a
handful of reviewers. They argue, furthermore, that post-publication peer review
subjects journal articles to scrutiny by a much larger number of readers.
Methodological, statistical and other mistakes are far more likely to be caught,
simply because many more experts are reviewing (ie reading, analysing and
commenting on) manuscripts after publication. It is difficult to argue with this
contention, except to note that post-publication peer review sites today
suggest that only some of the manuscripts there receive such desirable academic
community review. What, then, does this proposal mean for papers that are ignored
altogether by peers? Will they still count as some kind of peer-reviewed
publication, or are they just blog-equivalents? Who counts as a reviewer? Would
the authors’ best friends be viable options? University administrators and
appointments committees will want to know.
Of course, traditional peer review doesn’t preclude post-publication
review. It does happen that manuscripts that passed pre-publication review are later
found to be seriously methodologically flawed. Many of those are withdrawn, but
reluctant publishers and editors have also gained notoriety for leaving flawed
manuscripts in the public domain without errata. Post-publication review
depends to some extent on easy access to the manuscripts in question (Open
Access being a bonus here), as well as the existence of sophisticated web-based
moderated platforms permitting reader-peers to leave comments. It goes without
saying that relentlessly profit-driven publishers of academic journals will be
reluctant to invest in the staff necessary to manage this process.
I think, for humanities manuscripts, for the time being,
pre-publication anonymous peer review remains the way to go. Anonymous peer
review is the best guarantee of honest assessments. Open review, whereby the
reviewers and authors are disclosed to each other, will often prevent honest reviews,
simply because reviewers inclined to be critical will be reluctant to burn their
bridges by speaking frankly about the quality of a submission from a close
colleague or friend. Anonymous
reviews are not in their own right a guarantee of quality. That’s where journal
editors step in. They need to evaluate reviewers’ comments and decide what to
do if reviewers’ verdicts vary significantly. You might be surprised to learn
that this does not happen all that frequently. Reviewers almost always reach similar
verdicts.
The challenge today is, of course, to find knowledgeable
reviewers. I have lamented this problem here before. Senior colleagues are
often reluctant to undertake this vital work. In fact, a few of them refuse to
undertake reviews outright. That does not stop them from complaining bitterly
if their own manuscripts languish, in their view, in the review process for an
extraordinary amount of time. Perhaps we ought to institute a policy whereby, as
editors, we would be well within our rights to refuse to evaluate submissions
from colleagues routinely unwilling to accept review requests.
Another issue arises when it comes to sourcing true peers to
review particular content. Having now been an Editor of this journal and its
companion journal for the last 15 years, I still struggle on occasion to find a
suitable, competent reviewer for a particular manuscript. It takes time,
especially if the subject matter of a particular manuscript is highly
specialised, to find the right peer reviewer. At this journal as well as its companion
journal our standard operating procedure is that we as Editors have to find
appropriate reviewers. Only in the rarest of exceptions do we ask authors to
suggest possible reviewers to us. It turns out that choosing our own reviewers
is an even better idea than we thought. Journals that ask authors for
reviewer suggestions have been hit in fairly significant numbers by fake
reviews, written under a pseudonym by the submitting authors themselves, or by commercial
outfits in the business of drafting fake reviews.[i]
Post-publication peer review outlets should take note. You never stop
learning when it comes to these matters, and sadly, nothing much surprises me
any longer. The publish-or-perish culture in today’s universities has clearly
led to unreasonable pressures on academics, leading quite a few of us to stray
from the right path.
Courtesy of the rise of open access on-line ‘journals’, we
have reason to be wary of claims that the papers published in many such venues
have been peer reviewed. As I write this, several incidents have been reported in which
bogus papers were accepted for publication by such outlets, for a
processing fee, of course, delivered to the ‘editor’ via PayPal. One incident is bound to become a
classic: the authors conjured up a fake paper with the
title ‘Get me off your fucking mailing list’, directed at a spam Open Access
outlet inviting contributions. Their paper was accepted, ostensibly after peer
review.[ii]
As any academic with a university affiliation can testify these
days, the same outfits do not discriminate, as editors of more discerning
journals would, on the basis of competence to review particular academic outputs. A
journalist at an Ottawa-based newspaper reports that his submission of
a fake article to a dodgy Open Access outfit has left him
inundated with requests to review manuscripts he is utterly unqualified to
review for said ‘publisher’.[iii]
It is fair to say that on the odd occasion every editor will call on the wrong
reviewer for a particular submission, but such mistakes are usually caught by the second
or third reviewer, or the wrongly invited reviewer simply declines. Apparently many
pay-for-play Open Access publishing operations are primarily concerned with extracting
processing fees from submitting authors, and that they can only
achieve after they have accepted the submitted content.
Conflict of interest declarations apparently remain an issue.
There are all sorts of standards deployed by all sorts of publishing outfits,
grant-giving bodies and so on. Let me just say that I think reviewers would be
well advised to err on the side of caution when they declare conflicts of
interest. One good yardstick is to ask yourself whether, if you were the editor
at the receiving end of your review, you would want to be advised
of the particular information you are considering disclosing. Conflicts of
interest could include knowledge of the authors’ identity or financial conflicts,
but also the fact that you yourself are harshly criticised or praised in the
manuscript that you are reviewing. None of this would disqualify you per se
from reviewing, but knowing about these potential conflicts would help editors
to assess your comments more competently.
It is pretty obvious to anyone who has been in
the business of publishing as an author or editor – or both – that anonymous
peer review is far from perfect, and it is conceivable that new publishing
platforms will eventually lead to the rise of better peer review processes. I
for one am looking forward to those.
[i] Ferguson, C. 2014. It’s happened again: Journal cannot rule out possibility that author did his own peer review. Retraction Watch, November 10. http://retractionwatch.com/2014/11/10/it-happened-again-journal-cannot-rule-out-possibility-author-did-his-own-peer-review/ [accessed November 25, 2014]