Friday, May 6, 2011

Peer review

Maggie Koerth-Baker writes about peer review at one of my favourite places on the web, Boing Boing. The article does a good job of describing peer review for a general audience, especially with regard to why it's a good thing and what problems it has. As a working scientist, I have a slightly different point of view on peer review.

Firstly, most reviewers genuinely intend to make the paper better. This is something many new authors don't grasp: they may even see the reviewers' comments as an attack on their competence. But that's seldom the case, at least not in my experience. Publishing bad papers, or papers with unreliable results, benefits no one. Whenever I review a paper, the first thing I ask myself is, "What can I suggest that would improve this paper?"

Secondly, reviewers have a lot of power. It seems to be the norm now that editors won't challenge anything a reviewer writes, even when it is egregiously wrong. If a reviewer criticizes an author on completely spurious grounds (for example, because the reviewer is out of date in the paper's field, or simply doesn't know as much as they think they do), most editors will pass the review along anyway. This places the onus of disproving the reviewer on the author, who must then spend time arguing with the reviewer (via the editor) instead of actually improving the paper.

Thirdly, reviewers do not have a lot of time. I've reviewed a lot of papers, more than a hundred at last count, and to be honest, most of them sat on my desk for a few weeks before I managed to read them. Lately the time allowed for a review has been shrinking, leaving even less room to do a quality review.

Computational intelligence papers have certain aspects that make them a challenge to review. Firstly, most contain a fair amount of mathematics, yet the number of authors who still don't define the variables in their equations is disappointingly large. Secondly, most papers include empirical work that tests the proposed algorithms, but a large proportion still fail to produce statistically valid results: repeated independent trials, varied parameters, independent test sets, and statistical tests for significant differences between the results (see the sketch below for what I mean). I have noticed some progress on this in the last few years, but nowhere near as much as I would like.
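To make that concrete, here is a minimal sketch of the kind of experimental protocol I have in mind, written in Python. Everything in it is a stand-in: algo_a and algo_b are hypothetical placeholders for whatever algorithms are being compared, the scores are simulated rather than real experimental results, and the Mann-Whitney U test is just one reasonable choice of non-parametric significance test. The point is the shape of the protocol, not the specific functions.

```python
import numpy as np
from scipy.stats import mannwhitneyu

N_TRIALS = 30  # number of repeated independent trials per algorithm

def algo_a(rng):
    # Hypothetical baseline: stands in for a real run that would train
    # on one data split and report accuracy on an independent test set.
    return rng.normal(loc=0.80, scale=0.05)

def algo_b(rng):
    # Hypothetical proposed method being compared against the baseline.
    return rng.normal(loc=0.83, scale=0.05)

# One fresh RNG seed per trial, so the trials are independent repetitions.
scores_a = [algo_a(np.random.default_rng(seed)) for seed in range(N_TRIALS)]
scores_b = [algo_b(np.random.default_rng(seed + N_TRIALS)) for seed in range(N_TRIALS)]

# Mann-Whitney U is non-parametric, so we need not assume the scores
# are normally distributed.
stat, p = mannwhitneyu(scores_a, scores_b, alternative="two-sided")

print(f"A: mean={np.mean(scores_a):.3f}, B: mean={np.mean(scores_b):.3f}, p={p:.4f}")
if p < 0.05:
    print("The difference between A and B is significant at the 5% level.")
else:
    print("No statistically significant difference detected.")
```

The essential parts are the repeated independent trials and the reported p-value alongside the means; a single run of each algorithm, which is what too many papers present, tells you almost nothing about whether an observed difference is real.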

I think I'll have to write this up in a future post...
