
Guest post: my experience with the SmartFlares, by James Elliott

This is a guest post by James Elliott, manager of the Flow Cytometry Facility at the MRC Institute (LMS) in Hammersmith.

***

I thought it might be useful to add to the discussion about SmartFlares, their marketing and the difficulties of disseminating negative results by passing on my own experience.

We tested the system back in 2013. Sorting primary murine T cells and thymocytes on the basis of RNA expression was perhaps of most immediate interest, but of course there were countless potential applications.

The Merck Millipore rep advised us that the caveat to be aware of in using SmartFlares was that the particles are taken up by endocytosis and that not all cells possess the machinery to allow this. Indeed, he mentioned data he had seen suggesting that only around 20% of T cells take up the probes. This was puzzling, as it suggested either a specific subset of endocytosis-competent cells or, alternatively, that uptake by T cells was broad but weak, such that only 20% of cells fell into a positive, above-background gate. This in itself seemed a potentially interesting question.

To address the usefulness of SmartFlares in primary T cells (and some lymphocyte lines we had in culture), it was agreed with the rep that the sensible first step was to buy the positive control (an ‘Uptake’ probe whose fluorescence is always ‘on’, even in the absence of the specific RNA) and the negative control (a scrambled, ‘fluorescence off’ probe).

Everyone rightly comments on the extremely high price of the reagents, and though we were given a discount, it remained an expensive look-see experiment.

It was useful that, on the day we tried out the probes, we were lucky enough to have someone with us from Merck whom we like and trust to oversee what we were doing – he could vouch for the fact that we did the experiments correctly. We looked for probe uptake both on a flow cytometer and on an ImageStream imaging cytometer.

Whilst we had expected lymphocytes to take up the probes poorly, the big problem we had was in fact that, whilst all or nearly all cells took up the probes, the signal from cells given the scrambled probe – notionally ‘always off’ – was just as high as, and in most cases a little higher than, that from cells given the positive control ‘Uptake’ probe. Both showed a marked log shift in fluorescence.

So – big problems! Why was the scrambled probe, which should have been dim or ‘fluorescence off’, giving us such a high signal? Indeed, if anything, our negative control was brighter than the positive.

The rep consulted the technical team, who were quick to point out that a more meaningful comparison would have been between a scrambled and a housekeeping probe (the Uptake probe merely being useful to show a qualitative result), yet this seemed to me to fudge the issue. First, surely the Uptake and scrambled probes should be roughly comparable in the number of molecules of fluorochrome attached, or the Uptake control would be of limited value – it would give a yes/no answer as to whether the cells take up probe, but little clue as to efficiency. Second, the strategy for validating the system had been agreed with the rep; it was not great to then come back and say that this was not actually a good test after all. Third, and most importantly, a system in which the negative control (‘fluorescence off’!) gives a log shift in fluorescence is likely to be almost completely useless: the background would be far too high for all but the most abundant markers.

On top of that, it hardly inspired confidence that the company seemed to have validated the system so poorly – why else would they offer a vague suggestion that maybe 20% of T cells take up probe, when in our careful (and observed) hands they did so rather efficiently? Interestingly, I later read on a cytometry forum that, according to one US user, the company had been very upfront from the beginning that primary lymphocytes don’t take up the probes. This was doubly untrue: lymphocytes do take up the probes, and, in the UK at least, we were not told that primary lymphocytes didn’t take them up – the rep thought 20% of T cells did so, but was unsure about the data. Again, I was left with the impression of a poorly validated system sold by reps who were largely in the dark.

In follow-up discussions with the company, the most likely explanation for our results was that the scrambled probe had degraded intracellularly, and that this can happen in a cell type-specific way. This would mean there is a cell type-specific optimum time window in which there is a satisfactory balance between cleavage by target RNAs and non-specific cleavage. Of course we had followed the instructions we were given at the time, but it now appeared these were probably not correct for our (hardly esoteric!) cells.

The suggestion was therefore that as many controls as possible would be wise.

Clearly this had become completely untenable as a system – we would have to buy hugely expensive probes and, if they worked at all (which we still didn’t know), would have to do a lot of work to establish not only the usual factors such as concentration, but also timing. And how narrow might the optimal time window be in which specificity was apparent? An hour? Less? Background from non-specific signal from degrading probes would also be likely to be (at least in the cells we were most interested in) a major problem for any RNA that wasn’t highly expressed.

We decided to cut our losses. I applaud those who can follow up and publish negative data that will be useful to the scientific community, but it seemed likely that, for us, this would end up far too expensive in money, time and effort – quite possibly only to show that the system might just about work, but not in any way that would be practically useful.

Publication bias. Grant bias.

All academics writing grants will tell you this: if you want to be successful when applying to a thematic research grant call, you must tick all of the boxes.

Now, imagine that you are a physicist, an expert in quantum mechanics. A major funding opportunity arises, exactly matching your interests and track record. That is great news. Obviously you will apply. One difficulty, however, is that, amongst other things, the call specifies that your project should lead to the “development of highly sensitive approaches enabling the simultaneous determination of the exact position and momentum of a particle”.
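(A brief aside for non-physicists on why this requirement is impossible rather than merely hard: the Heisenberg uncertainty principle puts a hard lower bound on the product of the uncertainties in a particle’s position and momentum,

$$\sigma_x \, \sigma_p \geq \frac{\hbar}{2},$$

where σ_x and σ_p are the standard deviations of position and momentum and ħ is the reduced Planck constant. No approach, however sensitive, can ever deliver the exact values of both at the same time.)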

At that point, you have three options. The first is to write a super sexy proposal that somehow ignores the Heisenberg principle. The second is to write a proposal that addresses the other priorities but fudges around that particular specification, maybe even alluding to the Heisenberg principle. The third is to give up and not apply.

The first option is dishonest. The second is more honest but, in effect, not so different from the third: as noted above, your project is unlikely to get funded if you do not stick to the requirements of the call. The third option demonstrates integrity but won’t help you with your career, nor, more importantly, with doing any research at all.

And so, there you have it. Thematic grant calls that ask for impossible achievements, nourished by publication bias and hype, further contribute to the distortion of science.

OK, I’ll confess: I have had a major grant rejected. It was a beautiful EU project (whether Brexit is partly to blame I do not know). It was not about quantum mechanics but about cell tracking. The call asked for the simultaneous “detection of single cells and cell morphologies” and “non-invasive whole body monitoring (magnetic, optical) in large animals”, which is just about as impossible as breaking the Heisenberg principle, albeit for less fundamental reasons. We went for option 2. We had a super strong team.

How many people are using the #SmartFlares? Freedom of Information requests provide insights

Quick summary of previous episodes for those who have not been following the saga: a few years ago, Chad Mirkin’s group developed a technology to detect mRNAs in live cells, the nano-flares. That technology is currently commercialised by Merck under the name SmartFlares. For a number of reasons (detailed here), I was unconvinced by the publications. We bought the SmartFlares, studied their uptake in cells as well as their fluorescent signal, and concluded that they do not (and in fact cannot) report on mRNA levels. We published our results as well as all of the raw data.

This question – how many people are using the SmartFlares? – is interesting because, surely, if a multinational company such as Merck develops, advertises and sells products to scientists worldwide, these products have to work. As Chad Mirkin himself said today at the ACS National Meeting in Philadelphia, “Ultimate measure of impact is how many people are using your technologies”.

So, we must be wrong. SmartFlares must work.

But our data say otherwise, so what is going on?

One hint is the very low number of publications using the SmartFlares, and the fact that some of those are not independent investigations. This, however, does not tell us how many groups in the world are using the SmartFlares.

Here is a hypothesis: maybe lots of groups worldwide are spending public money on probes that don’t work… and then don’t report the results, since the probes don’t work. That hypothesis is not as far-fetched as it may seem: it is called negative bias in science publishing, and it is one of the causes of the reproducibility crisis.

To test this hypothesis, we would need to know how many research groups worldwide have bought the SmartFlares – information that I suspected Merck was not going to volunteer. So, instead, I made Freedom of Information requests to (nearly) all UK research-intensive universities (the Russell Group), asking whether they had evidence of SmartFlare purchase orders.

Some universities (6) declined because it would have been too much work to retrieve the information, but most (14) obliged. The detailed results are available here. They show that a minimum of 76 different purchases were made between the launch of the product and June 2016. The money spent is £38k, representing 0.0013% of these UK universities’ research income. As far as I can see, none of these purchases has resulted in a publication so far.
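As a quick sanity check on those figures, here is a minimal back-of-envelope sketch, assuming the 0.0013% is the SmartFlare spend expressed as a share of the combined research income of the responding universities over the same period; the implied denominator then comes out at roughly £2.9 billion:

```python
# Back-of-envelope check of the FOI figures quoted above.
# Assumption: 0.0013% is the SmartFlare spend as a share of the combined
# research income of the 14 responding universities over the period covered.

smartflare_spend_gbp = 38_000      # total spend reported in the FOI responses
share = 0.0013 / 100               # 0.0013% expressed as a fraction

implied_research_income_gbp = smartflare_spend_gbp / share
print(f"Implied combined research income: £{implied_research_income_gbp:,.0f}")
# -> Implied combined research income: £2,923,076,923 (roughly £2.9 billion)
```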

All I can say is that these data do not falsify our hypothesis.

And if after reading this, you are still unconvinced of the need to publish negative data, check the upturnedmicroscope cartoon (warning: scene of violence).