Where to publish our next paper? Letter to a group member

This post was originally published in JUNQ, the Journal of Unsolved Questions.  I thank the editor David Huesmann for his feedback on an earlier version of the manuscript and for the authorization to reproduce it here.


Hi X

Thank you for sending your draft. Really nice work! I will give you more detailed feedback in the next couple of days, but I want to answer your question now about where we should submit our paper.

In the last couple of years, partly because of my involvement in the stripy controversy (more below), I have thought a lot about publishing… and concluded (along with many other people) that the system is absurd, worse, toxic. Public funds are paid to commercial publishers to put publicly-funded research behind paywalls. The (unpaid) hard work of reviewers (which may or may not have led to improvements in the article) remains confidential and does not benefit the community. Publicly-funded researchers waste their time reviewing articles which have already been reviewed several times by other researchers for other journals. Researchers are evaluated on the impact factor of the journals in which they publish, even though this is not at all a measure of the quality of an article. There is a serious reproducibility crisis but no incentive to reproduce or criticise published work. These flaws and their consequences can be illustrated by briefly looking at two recent controversies.

It took us three years to publish “Stripy Nanoparticles Revisited”. The numerous (and still unfolding) events that followed this publication opened a window into our dysfunctional scientific system, highlighting the failure of journals and institutions to promote correction of the scientific record. The stripy controversy also shows the role that (open) post-publication peer review and social media can play in enabling discussions that are almost impossible to have through the traditional journals. A positive example of these new dynamics is the case of Brian Pauw, who came across the controversy via Twitter, made interesting contributions on his blog and in the online discussion (PubPeer) of the arXiv pre-print of our follow-up paper, and eventually became an author of the revised version.

Announced as a major discovery with two publications in Nature and massive media coverage, the generation of stem cells through an acid bath (STAP) rapidly turned into a scientific and human disaster which culminated in the suicide of one author [see tribute]. It is hard to overestimate the impact that this disaster had on Japanese science and on stem cell science more generally. Yet severe flaws in these articles had been identified before publication by reviewers at Science (where the work had been initially submitted) and by reviewers at Nature. All of this could have been avoided if Nature had decided to reject the article, or if the work had been published alongside the reviews that cast serious doubts on its validity, leaving it to the readers to make up their minds or wait for replications (which never came in spite of attempts).

The system is so severely flawed that it threatens scientific progress and the fabric of science. Not all those problems are due to the publishing model, but it certainly plays a key role.

We need to change the ways we share scientific progress, and we have the opportunity to do so: innovative publishing platforms can transform the way scientists share, discuss and evaluate their findings. I believe that this is the future and that embracing it will be beneficial to young researchers’ careers, but I know that this is a gamble, because many colleagues and institutions still evaluate researchers through the impact factor of the journals in which they publish. In our own institute, at a recent research strategy event, colleagues one after the other argued for the excellence of their research groups on the basis of the number of articles published in high impact factor journals. I do not underestimate the gamble, and since it is one with your own career, it is not one I can make on your behalf. If you are happy to try one of these platforms, I’ll be delighted. If you prefer to go for a more traditional venue, I’ll help you as much as I can and we will pay the fees to make the article open access (all journals offer to make your articles open access, though this hybrid model further fills the pockets of publishers and does not seem to help the transition to full open access; see the paragraph entitled Get value for money in this post by Stephen Curry).

The ideal system would be a high-quality platform combining these three features: #1 not-for-profit, #2 open access (and reasonably priced), and #3 articles published immediately, followed by open peer review. There are a lot of experiments in publishing at the moment and I list below just a few that are relevant to our area of research.

All the best,

Raphaël

Twitter @raphavisses 

Journals / publication platforms to consider, assessed against the three criteria above (#1 not-for-profit, #2 open access, #3 immediate publication followed by open peer review):

ScienceOpen

F1000 Research

Beilstein Journal of Nanotechnology (and free!)

PLoS One

Royal Society Open Science

Chemical Science (free in 2015-16)

Update 1: For a more biology-oriented manuscript, we could also consider Biology Open and eLife, which fit criteria #1 and #2 (HT @clathrin, @christlet)

Update 2: The Winnower fits #2 and #3

Update 3: a useful tool to navigate the journal jungle here via @sharmanedit

Comments

    1. Thanks Robert for the suggestion. PeerJ is certainly an interesting venue contributing to changes both in the peer review system and in the publishing economic model. However, strictly speaking, it only satisfies one of my three criteria. That’s why I have not included it in the list.


  1. 1. I think you have to seriously question calling ACS (or many such societies) not for profit, given the compensation of the executives. They’re really not even science societies, but publishing companies.

    2. I do think there is value in hard copy (paper) versions for archival purposes. Electronic stuff is so ephemeral.


  2. Very useful post in which (some of) the problems of academic publishing are nicely pointed out. I agree 100% on how stupid a publication system is in which the writers and reviewers pay to read their own work. Even worse, as you said, public funds are wasted paying private companies for whom we (mostly public workers) work for free. However, I think the most important problem regarding the lack of scientific value in publications (lack of reproducibility, overstatements, etc.) is not due to that perverse system, but to the pressure researchers are under to publish, and to a lack of ethics (maybe both are linked…).
    First, you have to be the first to publish something if you want some impact. No matter how good and reproducible your work is, it counts for little if someone has published something similar before (probably because they didn’t care about reproducibility). Here, the problem is your personal ethics. No reviewer could ever check whether the results you are presenting occurred just once because it was raining that day.
    Then, you have to have a number of publications if you want to get the next job. Positions normally last 2-3 years, so you can imagine: you want to publish 5-6 papers. You wouldn’t mind repeating the same work and changing some words to make it slightly different. Again, I don’t think open access would solve this lack of ethics. Neither would open reviews. If I have to publish 5-6 papers in 2 years, I would not spend my time reviewing 15-18 papers (I assume at least 3 reviews per paper would be needed, and that everyone would like to publish 6 papers). Quality should come before quantity, but who thinks this is happening? Not me. And I don’t think this would be solved with open access/review either. It could get even worse.
    So what? I totally support open access (in fact, the main publication of my PhD is in PLoS One), but I think the scientific problems would still be the same if we don’t fight for a better evaluation system and, mainly, if we are not conscious of our responsibility not to publish shit.


  3. It would be so easy to place links on every article (even on specific sections of it) to specific criticisms made by non-anonymous reviewers, so readers could see the whole picture. I also thought that asking for more thorough reproducibility experiments would help, but I think in the end it would only encourage people to forge results.
    In my opinion the main problem is the evaluation methods. If we fixed that, then maybe the publication model would follow.

