How do I know if an article is good? An #ACSBoston tale

This week I attended the fall meeting of the American Chemical Society in Boston. A little meeting of about 14,000 attendees. I was speaking in a symposium with an impossibly long title, but one which turned out to be good fun and interesting; big thanks to the organisers: Kimberly Hamad-Schifferli, Clemens Burda and Wolfgang Parak.

I spent most of my time in that symposium but also went to a few other sessions which tackled questions surrounding the ways we do and communicate science. I learnt a bit more about the activities of the Center for Open Science and the platform they offer to researchers to organise, plan, record and share their work (and I was even offered a T-shirt). Probably the best lecture I heard – and certainly the most entertaining – was “The poisoner’s guide to communicating chemistry” by Deborah Blum, in a science communication session (now I really need to read her book).

I joined a session with the promising title of “Scientific Integrity: Can We Rely on the Published Scientific Literature?”. Judith Currano (Head of the Chemistry Library, University of Pennsylvania) discussed how to help students evaluate the quality of scientific articles; I reproduce the abstract below (italics and bold mine):

This paper, by a chemistry librarian and a professor who edits an online journal, frames the challenges facing scientists at all levels as a result of the highly variable quality of the scientific literature resulting from the introduction of a deluge of new open-access online journals, many from previously unknown publishers with highly variable standards of peer review. The problems are so pervasive that even papers submitted to well-established, legitimate journals may include citations to questionable or even frankly plagiarized sources. The authors will suggest ways in which science librarians can work with students and researchers to increase their awareness of these new threats to the integrity of the scientific literature and to increase their ability to evaluate the reliability of journals and individual articles. Traditional rules of thumb for assessing the reliability of scientific publications (peer review, publication in a journal with an established Thomson-Reuters Impact Factor, credible publisher) are more challenging to apply given the highly variable quality of many of the new open access journals, the appearance of new publishers, and the introduction of new impact metrics, some of which are interesting and useful, but others of which are based on citation patterns found in poorly described data sets or nonselective databases of articles. The authors suggest that instruction of research students in Responsible Conduct of Research be extended to include ways to evaluate the reliability of scientific information.

Now, the problem of (rapidly) evaluating the reliability of an article, especially for new researchers in a particular field, is a serious and acute one, so I fully approve of the authors’ suggestions.

However, the entire paper is based largely on a false premise: the idea that it is the “introduction of a deluge of new open-access online journals” which creates this reliability problem. This is hardly the case. The difficulty in identifying poor articles stems neither from the deluge of open-access journals nor from predatory publishing. The growth in the volume of publications is not particularly related to open access, and predatory publishing can easily be identified (with a little common sense and a few pointers). The abstract (and, to a lesser extent, the talk) also conflates evaluating the reliability of a journal (an impossible task if you ask me) with evaluating the reliability of an article (an extremely onerous task if you ask me, but more on this later). Do I need to comment on the “rules of thumb”?

I teach third-year undergraduate students on a similar topic. I ask them this same question: “How can you evaluate the validity of a scientific article?” I write their answers on the whiteboard; in whatever order, I get: the prestige of the university/authors/journal, the impact factor, the quality (?) of the references… I then cross it all out. I show them the arsenate DNA paper published in Science and the STAP papers published in Nature. I try to convince them that no measure of prestige can help them evaluate the quality and reliability of a paper, and that the only solution they have is to read the paper carefully and critically analyse the data. If necessary, discuss it with others. If necessary, ask the authors questions.

Of course, reading carefully takes time, but there is (currently) no alternative. There is absolutely no reason to think that a paper is reliable because it appears in a high-impact-factor journal. The Scottish philosopher David Hume (1711–1776) wrote that “A wise man proportions his belief to the evidence” and should “always reject the greater miracle.” Many articles in high-impact journals resemble such miracles and eventually turn out to be irreproducible.

The second part of my own scientific presentation focused on our ongoing SmartFlare project. The last slide featured the David Hume quote as well as an updated 21st-century version (see below).

With the PubPeer browser extension, you can immediately see, on the journal page (or anywhere else the article is cited), whether there is an existing discussion at PubPeer.

There is, however, something simple we can do immediately to make it easier for everybody to evaluate the reliability of individual articles: share our critiques (positive or negative) of the articles we read. If we all commit to using PubPeer and start sharing at least one review per month, this will go a very long way towards generating open discussions around articles. It will obviously not alleviate the need to read the articles and the reviews critically, but it will crowdsource the evaluation, and this can be very powerful (it is the model of SJS, ScienceOpen and F1000).
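
As an aside, here is a minimal sketch of the kind of DOI lookup such an extension could perform. It is illustrative only: the search URL scheme and the page-scraping heuristic are my assumptions, not PubPeer’s actual (undocumented) API, so treat it as a toy sketch rather than the extension’s real mechanism.

```python
# Illustrative sketch only: NOT PubPeer's real API. Given a DOI, build an
# assumed PubPeer search URL and check naively whether the article seems to
# have an existing discussion. Both the URL scheme and the text-marker
# heuristic below are assumptions made for illustration.
from urllib.parse import quote

import requests


def pubpeer_search_url(doi: str) -> str:
    """Assumed URL scheme for searching PubPeer by DOI."""
    return f"https://pubpeer.com/search?q={quote(doi, safe='')}"


def has_pubpeer_discussion(doi: str) -> bool:
    """Crude heuristic: fetch the search page and look for the DOI in it.

    A robust client would call a documented endpoint and parse structured
    results instead of scraping HTML.
    """
    response = requests.get(pubpeer_search_url(doi), timeout=10)
    response.raise_for_status()
    return doi.lower() in response.text.lower()


if __name__ == "__main__":
    # Example DOI: the arsenate DNA paper in Science mentioned above.
    print(has_pubpeer_discussion("10.1126/science.1197258"))
```

A real client would of course query whatever interface PubPeer currently exposes and parse structured results rather than scraping a search page, but the principle is the same: a simple DOI lookup is all that is needed to surface existing discussions wherever an article is displayed.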


3 comments

  1. Comment by Susan Rvachew via Twitter (https://twitter.com/ProfRvach):
    “I agree that you have to read each paper – prestige of author or journal not a good guide, but… Is second point consistent with the first? Does wisdom of crowd work in science? Maybe test of time”

    My response:
    “Test of time” sounds good, but in the current system it is not working very well either. Some articles stand the test of time in terms of amassing large numbers of citations, yet none of these citations actually reproduce or build on the work. What is cited is simply what is considered (by the “crowd”?) a landmark, maybe in terms of ideas or concepts, whatever the actual strength of the evidence in the original article. Even papers which have been retracted continue to be cited. Eventually, of course, the “test of time” will work, but it can take a very long time.

    I don’t think that public post-publication peer review can be equated with the wisdom of crowds. That would be a valid criticism of something like the star system that exists on some platforms. On PubPeer, however, what you have is reviews. Each review or comment can be evaluated on its merits. A review which just says “this is great” or “this is crap” will simply be ignored. However, a review which points to a fault in an argument or a problem in a figure will be evaluated and eventually challenged… Maybe that’s the wisdom of crowds after all 😉 but a wisdom based on arguments which are shared publicly.

    1. Susan has replied via Twitter and I reproduce her replies here for the record!
      “Thanx for answer (and interesting blog post) but recommend books like Livio’s “Brilliant Blunders”
      “”It takes a very long time” is what science is about, everything else is just noise.”
      “Post publication peer review a modern contribution to an old process (scientific conversation)”
      “we don’t yet know what the impact will be on people with truly unique ideas/processes/discoveries”

  2. Dear Raphael,

    Great text… However, in my country funding is allocated according to the number of papers a researcher produces, not their relevance, and the fact is, relevant papers usually require funding! The situation becomes even worse if the university does not have the appropriate equipment… The researcher is stuck studying the same properties/materials because of these limitations. So, in my defense, I think those papers are indeed necessary for those who need funding. The researcher needs to start somewhere… Also, at some universities a journal publication is required to obtain a master’s or doctoral degree.

    So, in the end, I think it is necessary to understand that this is not simple. Everybody needs their share of the “meal”… I completely agree that most papers are not useful, but there is a meaning behind them. Of course, you could always say that you just need to change the funding policy, but as you know it is not that easy… I, for example, as a Ph.D. student, am completely tired of writing the same paper over and over, but it is necessary…
