These were the weeks of peer review. Sophie Lewis wrote her farewell to peer reviewing. Climate Feedback is making it easy for scientists to review journalistic articles with nifty new annotation technology. And Carbon Brief showed that while there is a grey area, it is pretty easy to distinguish between science and nonsense in the climate “debate”, which is one of the functions of peer review. John Christy and Richard McNider managed to get an article published that, as a reviewer, I would have advised rejecting. And a little longer ago we had the open review of the Hansen sea level rise paper, where the publicity circus resulted in unscientific elements spraying their graffiti on the journal wall.
Sophie Lewis writes about two recent reviews she was asked to write. In one case the reviewers were negative, but a volunteer editor published the article anyway; in the other the reviewers were quite positive, but a salaried editor rejected the manuscript.
I have had similar experiences. As a reviewer you invest your time and heart in a manuscript and root for the ones you like to make it into print. The final decision is naturally the editor's task, but it is very annoying as a reviewer to feel that your review was ignored; there are many interesting things you could have done in that time. At least nowadays you more often get to see the other reviews and hear the final decision, which is motivating.
The European Geosciences Union has a range of journals with open review, where you can see the first round of reviews and anyone can contribute reviews. This kind of open review could benefit from the annotation system used by Climate Feedback to review journalistic articles: it makes reviewing easier, and the reader can immediately see the text a comment refers to. The open annotation system allows you to add comments to any webpage, PDF article or manuscript. You can see it as an extra layer on top of the web.
The reviewer can select a part of the text and add comments, including figures and links to references. Here is an annotated article in the New York Times that Climate Feedback found to be scientifically very credible, where you can see the annotation system in action. You can click on the text with a yellow background to see the corresponding comment, or click on the small symbol at the top right to see all comments. (Examples of articles with low scientific credibility are somehow mostly pay-walled; one would think that the dark money behind these articles would want them to be read widely.)
I got to know annotation via Climate Feedback. There we use the annotation system of Hypothes.is, which was actually developed not for annotating journalistic articles, but for reviewing scientific ones.
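To give an idea of how programmable this annotation layer is, here is a small sketch that reads public annotations via what I understand to be the Hypothes.is search API. The endpoint and JSON field names (such as `TextQuoteSelector`) reflect my reading of the public API documentation and should be treated as assumptions, not a guaranteed interface:

```python
import json
from urllib.request import urlopen
from urllib.parse import urlencode

# Public Hypothes.is search endpoint (assumption based on the public API docs).
API = "https://api.hypothes.is/api/search"

def fetch_annotations(page_url, limit=20):
    """Fetch public annotations on a given page (requires network access)."""
    query = urlencode({"uri": page_url, "limit": limit})
    with urlopen(f"{API}?{query}") as response:
        return json.load(response)["rows"]

def summarize(rows):
    """Reduce raw annotation records to (quoted text, reviewer comment) pairs."""
    pairs = []
    for row in rows:
        quote = ""
        for target in row.get("target", []):
            for selector in target.get("selector", []):
                # TextQuoteSelector holds the exact highlighted passage.
                if selector.get("type") == "TextQuoteSelector":
                    quote = selector.get("exact", "")
        pairs.append((quote, row.get("text", "")))
    return pairs

# Usage (network): summarize(fetch_annotations("https://example.com/article"))
```

The point is not the few lines of code, but that the review layer is machine-readable: anyone can harvest, display or archive the reviews attached to an article.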
The annotation system makes a review easier to write for the reviewer and easier to read for everyone else. The difference between making some notes on an article for yourself and writing a peer review becomes gradual this way. It cannot take away having to read the manuscript and trying to understand it; that takes the most time, but it is also the fun part. Reducing the time spent on the tedious part makes reviewing more attractive.
Publishing and peer review
Is there a better way to review and publish? The difficult part is no longer the publishing itself. What remains central is the reader's trust in a source.
It is becoming ironic that the owners of the scientific journals are called “scientific publishers”, because the main task of a publisher is nowadays no longer the publishing. Everyone can do that with a (free) word processor and a (free) web page. The publishers and their journals are mostly brands now: the journal is a trusted name. Trust is slow to build up (and easy to lose), producing huge barriers to entry and leading to near-monopoly profits of 30 to 40% for scientific publishing houses. That is tax-payer money that is not spent on science, and it sustains organizations that prefer to keep science unused behind pay-walls.
Peer review performs various functions. It gives a manuscript the initial credibility that makes people trust it, that makes them willing to invest time in studying its ideas. If the scientific literature were as abominable as the mitigation-skeptical blog Watts Up With That (WUWT), scientific progress would slow down enormously. At WUWT the unqualified readers are supposed to find out for themselves whether they are being conned. Even if they could: having every reader do a thorough review is wasteful; it is much more efficient to ask a few experts to vet manuscripts first.
Without peer review it would be harder for new people to get others to read their work, especially if they made a spectacular claim or used unfamiliar methods. My colleagues will likely be happy to read my homogenization papers without peer review, Gavin Schmidt's colleagues his climate modelling papers, and Michael Mann's colleagues his papers on climate reconstructions. But for new people it would be harder to be heard, for me it would be harder to be heard if I published something on another topic, and for outsiders it would be harder to judge who is credible. The latter becomes more important the more interdisciplinary science becomes.
Improving peer review
When I was dreaming of a future review system in which all scientific articles were in one global database, I used to think of a system without journals or editors. The readers would simply judge the articles and comments, like on Ars Technica or Slashdot. The very active open science movement in Spain has implemented such a peer review system for institutional repositories, where the manuscripts and reviews are judged and reputation metrics are estimated. Let me try to explain why I changed my mind and how important editors and journals are for science.
One of my main worries about a flat database is that many manuscripts would never get any review. In the current system the editor makes sure that every reasonable manuscript gets one. Without an editor explicitly asking a scientist to write a review, I would expect many articles to go unreviewed. Personal relations are important.
Science is not a democracy, but a meritocracy. Just voting an article up or down does not do the job; it is important that this decision is made carefully. You could try to determine statistically which readers are good at predicting the quality of an article, where quality could be measured by later votes or citations. This would be difficult, however, because it is important that the assessment is made by people with the right expertise, often by people from multiple backgrounds; we have seen how much even something as basic as the scientific consensus on climate change depends on expertise. Try determining expertise algorithmically. The editor knows the reviewers.
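As a sketch of why this is hard, here is roughly what such a statistical weighting could look like; all scales, scoring rules and numbers are hypothetical. A reader's vote is weighted by how well their past votes agreed with a later quality signal such as citations. Note what is missing: nothing in this code knows whether a voter has the right expertise for this particular article.

```python
def reviewer_weight(past_votes, past_quality):
    """Weight a reader by agreement between their past votes and later
    quality signals (e.g. citation-based), both on a 0..1 scale.
    Hypothetical scoring rule, for illustration only."""
    if not past_votes:
        return 0.5  # no track record yet: neutral weight
    errors = [abs(v, ) if False else abs(v - q) for v, q in zip(past_votes, past_quality)]
    return 1.0 - sum(errors) / len(errors)

def weighted_score(votes):
    """Combine (vote, weight) pairs into one article score."""
    total_weight = sum(w for _, w in votes)
    if total_weight == 0:
        return 0.0
    return sum(v * w for v, w in votes) / total_weight
```

Even this toy version shows the circularity: the quality signal used to calibrate the voters is itself produced by voting, which is exactly where an editor's personal knowledge of the reviewers does better than any metric.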
While it is not a democracy, the scientific enterprise should naturally be open. Everyone is welcome to submit manuscripts, but editors and reviewers need to be trusted and level-headed individuals.
More openness in publishing could in future come from everyone being able to start a “journal” by becoming an editor (or better, by organizing a group of editors) and trying to convince their colleagues that they do a good job. The fun thing about the annotation system is that you can demonstrate that you do a good job using existing articles and manuscripts.
This could provide real value for the reader. Not only would the reviews be visible, but it would also be possible to explain why an article was accepted: was it speculative but really interesting if true (something for experts), or simply solid (something for outsiders)? Which parts do the experts debate? The debate would also continue after acceptance.
The code and the data of every “journal” should be open, so that everyone can start a new “journal” with reviewed articles. Then, when Heartland offers me a nice amount of dark money to start accepting WUWT-quality articles, a group of colleagues can start a new journal, fix my dark-money “mistakes”, and otherwise have a complete portfolio from the beginning. If they had to start from scratch, that would be a large barrier to entry, which, like the traditional system, encourages sloppy work, corruption and power abuse.
Peer review is not just for selecting articles, but also for helping to make them better. Theoretically the author could ask colleagues to do this as well, but in practice reviewers are better at finding errors. Maybe because the colleagues who will put in the most effort are your friends, who have the same blind spots? These improvements of the manuscript would also be missing in a pure voting system of “finished” articles. Having a manuscript phase is helpful.
Finally, an editor makes anonymous reviews a lot less problematic, because the editor can delete comments where the anonymity seduced people into inappropriate behavior. Anonymity could be abused to make false attacks with impunity; on the other hand, it can also provide protection in case of real problems when there are large power differences.
An advantage of internet publishing is that there is no need for an editor to reject technically correct manuscripts. If the contribution to science is small, or if the result is very speculative and quite likely to be found wrong in future, the manuscript can still be accepted and simply be given a corresponding grade.
This points to a main disadvantage of the current dead-tree-inspired system: you get either a yes or a no. There is a bit more information in the journal the author chooses, but that is about it. A digital system can communicate much more subtly with a prospective reader. A speculative article is interesting for experts, but may be best avoided by outsiders until the issues are better understood. Some articles mainly review the state of the art, others provide original research. Some articles have a specific audience: for example, the users of a specific dataset or model. Some articles are expected to be more important for scientific progress than others, or discuss issues that are more urgent. All of this can be communicated to the reader.
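As an illustration only, the richer signal such a digital system could attach to an accepted article might look like the structure below; the fields, grade scales and wording are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class ArticleAssessment:
    """Hypothetical metadata a grassroots journal could publish with each
    accepted article, instead of a bare accept/reject decision."""
    solidity: int            # 1 (speculative) .. 5 (solid); made-up scale
    novelty: int             # 1 (reviews known work) .. 5 (very original)
    audience: str            # e.g. "experts", "outsiders", "users of dataset X"
    urgency: str = "normal"
    open_questions: list = field(default_factory=list)

    def advice(self):
        """One-line reading advice derived from the grades."""
        if self.solidity <= 2:
            return "Speculative: interesting for experts, outsiders should wait."
        return f"Solid result, suitable for {self.audience}."
```

The exact fields matter less than the principle: acceptance becomes a structured, searchable judgment rather than a single bit.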
The nice thing about the open annotation system is that we can begin reviewing articles before authors start submitting them. We can simply review existing articles as well as manuscripts, such as the ones uploaded to ArXiv. The editors could reject articles that should not have been published in the traditional journals and accept manuscripts from archives. I would trust this assessment by a knowledgeable editor (team) more than acceptance by a traditional journal.
In this way we can produce collections of existing articles. If the new system provides a better reviewing service to science, authors can at some point stop submitting their manuscripts to traditional journals and submit them directly to the editors of a collection. Then we have real grassroots scientific journals that serve science.
For colleagues in the respective communities it would be clear which of these collections have credibility. For outsiders, however, we would also need some system that communicates this, which traditionally has been the role of publishing houses and their high barriers to entry. Credibility could be assessed where collections overlap, preferably again by humans and not by algorithms. For some articles there may be legitimate reasons for differences (hard to assess, outside the topic of a collection); for others, an editor not having noticed problems may be a sign of bad editorship. This problem is likely not too hard: in a recent analysis of Twitter discussions on climate change there was a very clear distinction between science and nonsense.
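The overlap idea can be made concrete with a small, hypothetical sketch: where two collections grade the same articles, their agreement can be computed, and large disagreements flagged for exactly the human judgment described above. The 0..1 grading scale is an assumption:

```python
def editor_agreement(grades_a, grades_b):
    """Agreement of two collections' grades on the articles they share.
    grades_*: dict mapping article id -> grade on a 0..1 scale (hypothetical).
    Returns None when the collections share no articles."""
    shared = grades_a.keys() & grades_b.keys()
    if not shared:
        return None  # no overlap: credibility cannot be compared this way
    mean_error = sum(abs(grades_a[k] - grades_b[k]) for k in shared) / len(shared)
    return 1.0 - mean_error
```

A number like this would only be a starting point for editors and readers, not a verdict, since legitimate differences in scope can also produce low agreement.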
There is still a lot to do, but with the ease of modern publishing and the open annotation system a lot of software is already there. Larger improvements would be tools for editors to moderate review comments (or at least to collapse less valuable ones); Hypothes.is is working on this. A grassroots journal would need a grading system, standardized where possible. More practical tools would include help in tracking the manuscripts under review and in sending reminders, and the editors of one collection should be able to communicate with each other. The grassroots journal should remain visible even if the editor team stops; that will need collaboration with libraries or scientific societies.
If we get this working,
- we can say goodbye to frustrated reviewers (well, mostly),
- goodbye to pay-walled journals in which publicly financed research is hidden from the public and many scientists alike, and
- goodbye to wasting limited research money on the monopolistic profits of publishing houses, while
- we welcome better review and selection and
- build a system that inherently allows for post-publication peer review.
What do you think?
There is now an “arXiv overlay journal”, Discrete Analysis: articles are published and hosted by ArXiv, but the peer review is otherwise traditional. The announcement mentions several software initiatives that make starting a digital journal easy: ScienceOpen, Scholastica, Episciences.org and Open Journal Systems.
Brian A. Nosek and Yoav Bar-Anan describe a scientific utopia: Scientific Utopia: I. Opening scientific communication. I hope the ideas in the above post make this transition possible.
7 Crazy Realities of Scientific Publishing (The Director’s Cut!)
I would trust most scientists to use annotation responsibly, but it can also be used to harass vulnerable voices on the web: Genius Web Annotator vs. One Young Woman With a Blog. Hypothes.is is discussing how to handle such situations.
Nature Chemistry blog: Post-publication peer review is a reality, so what should the rules be?
The report from the Knowledge Exchange event Pathways to open scholarship gives an overview of the different initiatives to make science more open.
Magnificent BBC Reith lecture: A question of trust