Open peer review of (not so) controversial articles

Publishing articles that are critical of previously published work is notoriously difficult, but the secrecy of peer review makes it hard to explain the kinds of biases and tricks one faces in this enterprise. Opening peer review, i.e. sharing reports and responses, would certainly help. Here is an interesting example related to an article (nicely discussed by Philip Moriarty in a previous post) which is not even critical of prior literature but touches on the stripy nanoparticles controversy. That was too much for Reviewer #1 (hyperlinks added by me; they point to relevant blog posts here or at PubPeer):

Reviewer #1 (Remarks to the Author):
This paper describes the scanning tunnelling microscopy imaging (STM) of a silver cluster (Ag374). To the best of my knowledge there is no report of such things to date. As such I think this paper should be published but in a specialised journal or a broad journal with reporting functions as Scientific Reports.

The significance of this paper as such is minimal. The STM does not add anything to what X-ray crystallography has shown so far also on the same cluster. In fact it requires strong support from calculation.

The STM itself has been widely published on nanoparticles by the group of Stellacci. The authors do reference a controversy there but do not comment on it an [sic] neither add to it.

The approach used is almost identical to the one described by such group in Ong et al ACS Nano (non cited), and the results achieved are similar to the ones described in the same paper and in Moglianetti et al. (not cited). Their minimal difference is that they achieved these results in liquid nitrogen and helium temperature, but low temperature results were described in Biscarini et al. (not cited).

Given the scant discussion in the paper (lacks any point) and the two major objections report [sic], I suggest rejection.

The other, more supportive reports, and the response from the authors, can be downloaded from Nature Communications.

Probes, Patterns, and (nano)Particles


This is a guest post by Philip Moriarty, Professor of Physics at the University of Nottingham (and blogger).

“We shape our tools, and thereafter our tools shape us.”

Marshall McLuhan (1911-1980)

My previous posts for Raphael’s blog have focussed on critiquing poor methodology and over-enthusiastic data interpretation when it comes to imaging the surface structure of functionalised nanoparticles. This time round, however, I’m in the much happier position of being able to highlight an example of good practice in resolving (sub-)molecular structure where the authors have carefully and systematically used scanning probe microscopy (SPM), alongside image recognition techniques, to determine the molecular termination of Ag nanoparticles.

For those unfamiliar with SPM, the concept underpinning the operation of the technique is relatively straightforward. (The experimental implementation rather less so…) Unlike a conventional microscope, there are no lenses, no mirrors, indeed, no optics of any sort [1]. Instead, an atomically or molecularly sharp probe is scanned back and forth across a sample surface (which is preferably atomically flat), interacting with the atoms and molecules below. The probe-sample interaction can arise from the formation of a chemical bond between the atom terminating the probe and its counterpart on the sample surface, or an electrostatic or magnetic force, or dispersion (van der Waals) forces, or, as in scanning tunnelling microscopy (STM), the quantum mechanical tunnelling of electrons. Or, as is generally the case, a combination of a variety of those interactions. (And that’s certainly not an exhaustive list.)

Here’s an example of an STM in action, filmed in our lab at Nottingham for Brady Haran’s Sixty Symbols channel a few years back…

Scanning probe microscopy is my first love in research. The technique’s ability to image and manipulate matter at the single atom/molecule level (and now with individual chemical bond precision) is seen by many as representing the ‘genesis’ of nanoscience and nanotechnology back in the early eighties. But with all of that power to probe the nanoscopic, molecular, and quantum regimes come tremendous pitfalls. It is very easy to acquire artefact-ridden images that look convincing to a scientist with little or no SPM experience but that instead arise from a number of common failings in setting up the instrument, from noise sources, or from a hasty or poorly informed choice of imaging parameters. What’s worse is that even relatively seasoned SPM practitioners (including yours truly) can often be fooled. With SPM, it can look like a duck, waddle like a duck, and quack like a duck. But it can too often be a goose…

That’s why I was delighted when Raphael forwarded me a link to “Real-space imaging with pattern recognition of a ligand-protected Ag374 nanocluster at sub-molecular resolution”, a paper published a few months ago by Qin Zhou and colleagues at Xiamen University (China), the Chinese Academy of Sciences, Dalian (China), the University of Jyväskylä (Finland), and the Southern University of Science and Technology, Guangdong (China). The authors have convincingly imaged the structure of the layer of thiol molecules (specifically, tert-butyl benzene thiol) terminating 5 nm diameter silver nanoparticles.

What distinguishes this work from the stripy nanoparticle oeuvre that has been discussed and dissected at length here at Raphael’s blog (and elsewhere) is the degree of care taken by the authors and, importantly, their focus on image reproducibility. Instead of using offline zooms to select individual particles post hoc for analysis (a significant issue with the ‘stripy’ nanoparticle work), Zhou et al. have zoomed in on individual particles in real time and have made certain that the features they see are stable and reproducible from image to image. The images below, taken from the supplementary information for their paper, show the same nanoparticle imaged four times over, with negligible changes in the sub-particle structure from image to image.


This is SPM 101. Actually, it’s Experimental Science 101. If features are not repeatable (or, worse, disappear when a number of consecutive images/spectra are averaged), then we should not make inflated claims (or, indeed, any claims at all) on the basis of a single measurement. Moreover, the data are free of the type of feedback artefacts that plagued the ‘classic’ stripy nanoparticle images, and Zhou et al. have worked hard to ensure that the influence of the tip was kept to a minimum.
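The averaging argument can be made concrete with a toy numerical example. The arrays below are synthetic (no real SPM data is assumed): a reproducible feature survives the averaging of consecutive frames, while uncorrelated noise is suppressed by roughly the square root of the number of frames.

```python
# Synthetic illustration (not real SPM data): a reproducible feature survives
# frame averaging, while uncorrelated noise is suppressed by ~sqrt(N).
import numpy as np

rng = np.random.default_rng(1)
n_frames, size = 16, 128

# A genuine, reproducible feature, present in every frame.
feature = np.zeros(size)
feature[60:68] = 1.0

# Each 'scan line' = feature + independent noise of comparable amplitude.
frames = feature + rng.normal(scale=1.0, size=(n_frames, size))
mean_frame = frames.mean(axis=0)

# Residual noise after averaging shrinks by roughly sqrt(n_frames) = 4.
noise_single = (frames[0] - feature).std()
noise_avg = (mean_frame - feature).std()
print(f"noise suppression: ~{noise_single / noise_avg:.1f}x")
```

A feature that appeared in only one frame would be attenuated by the same factor, which is exactly why averaging consecutive images is such a cheap and effective sanity check.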

Given the complexity of the tip-sample interactions, however, I don’t quite share the authors’ confidence in the Tersoff-Hamann approach they use for STM image simulation [2]. I’m also not entirely convinced by their comparison with images of isolated molecular adsorption on single crystal (i.e. planar) gold surfaces because of exactly the convolution effects they point towards elsewhere in their paper. But these are relatively minor points. The imaging and associated analysis are carried out to a very high standard, and their (sub)molecular resolution images are compelling.

As Zhou et al. point out in their paper, STM (or atomic force microscopy) of nanoparticles, as compared to imaging a single crystal metal, semiconductor, or insulator surface, is not at all easy due to the challenging non-planar topography. A number of years back we worked with Marie-Paule Pileni’s group on dynamic force microscopy imaging (and force-distance analysis) of dodecanethiol-passivated Au nanoparticles. We found image instabilities somewhat similar to those observed by Zhou et al…


A-C above are STM data, while D-F are constant-height atomic force microscopy images [3], of thiol-passivated nanoparticles (synthesised by Nicolas Goubet of Pileni’s group), acquired at 78 K. (Zhou et al. similarly acquired data at 77 K, but they also went down to liquid helium temperatures.) Note that while we could achieve sub-nanoparticle resolution in D-F (a sequence of images in which the tip height is systematically lowered), the images lacked the impressive reproducibility achieved by Zhou et al. In fact, we found that even though we were ostensibly in scanning tunnelling microscopy mode for images such as those shown in A-C (and thus, supposedly, not in direct contact with the nanoparticle), the tip was actually penetrating the terminating molecular layer, as revealed by force-distance spectroscopy in atomic force microscopy mode.

The other exciting aspect of Zhou et al.’s paper is that they use pattern recognition to ‘cross-correlate’ experimental and simulated data. There is an increasingly exciting overlap between computer science and scanning probe microscopy in the area of image classification/recognition, and Zhou and co-workers have helped nudge nanoscience a little further in this direction. Here at Nottingham we’re particularly keen on the machine learning/AI-scanning probe interface, as discussed in a recent Computerphile video…
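Zhou et al.’s actual pipeline is more sophisticated, but the core idea of scoring agreement between an experimental and a simulated image can be sketched as a normalized cross-correlation. The function and toy arrays below are purely illustrative assumptions, not the paper’s code:

```python
# Illustrative sketch (not the authors' code): normalized cross-correlation
# as a similarity score between an experimental and a simulated SPM image.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two equal-sized images.

    Returns 1.0 for identical patterns (up to brightness/contrast changes)
    and values near 0 for unrelated patterns.
    """
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

# Toy example: the synthetic 'experimental' image is a rescaled copy of the
# 'simulated' one, so the score is exactly 1 (NCC ignores contrast/offset).
rng = np.random.default_rng(0)
sim = rng.normal(size=(64, 64))
exp = 2.5 * sim + 1.0
print(round(ncc(sim, exp), 6))  # → 1.0
```

One attraction of this kind of score is that it is insensitive to overall brightness and contrast, which in SPM vary from tip to tip and from scan to scan.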

Given the number of posts over the years at Raphael’s blog regarding a lack of rigour in scanning probe work, I am pleased, and very grateful, to have been invited to write this post to redress the balance just a little. SPM, when applied correctly, is an exceptionally powerful technique. It’s a cornerstone of nanoscience, and the only tool we have that allows both real space imaging and controlled modification right down to the single chemical bond limit. But every tool has its limitations. And the tool shouldn’t be held responsible if it’s misapplied…

[1] Unless we’re talking about scanning near field optical microscopy (SNOM). That’s a whole new universe of experimental pain…

[2] This is the “zeroth” order approach to simulating STM images from a calculated density of states. It’s a good starting point (and for complicated systems like a thiol-terminated Ag374 particle probably also the end point due to computational resource limitations) but it is certainly a major approximation.
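For reference (a standard statement of the approximation, not taken from the paper itself): for an s-wave tip at low bias, the Tersoff–Hamann result reads

$$
I(\mathbf{r}_0, V) \;\propto\; \int_0^{eV} \rho_s(\mathbf{r}_0, E_F + \epsilon)\,\mathrm{d}\epsilon ,
$$

so a constant-current STM image approximates a contour of constant sample local density of states $\rho_s$ evaluated at the position $\mathbf{r}_0$ of the tip apex. Everything about the tip beyond its apex position drops out, which is precisely why it is only a zeroth-order approximation.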

[3] Technically, dynamic force microscopy using a qPlus sensor. See this Sixty Symbols video for more information about this technique.

 

The war on (scientific) terror…

Thank you Philip for this post and your support.

Symptoms Of The Universe

I’ve been otherwise occupied of late so the blog has had to take a back seat. I’m therefore coming to this particular story rather late in the day. Nonetheless, it’s on an exceptionally important theme that is at the core of how scientific publishing, scientific critique, and, therefore, science itself should evolve. That type of question doesn’t have a sell-by date so I hope my tardiness can be excused.

The story involves a colleague and friend who has courageously put his head above the parapet (on a number of occasions over the years) to highlight just where peer review goes wrong. And time and again he’s gotten viciously castigated by (some) senior scientists for doing nothing more than critiquing published data in as open and transparent a fashion as possible. In other words, he’s been pilloried (by pillars of the scientific community) for daring to suggest that we do science…


Do planes fly and other difficult scientific questions

The Scientist magazine reported on the ACS meeting incident. Here is Chad Mirkin’s response to their questions:

Mirkin disagrees with Levy’s assessment of the endosome entrapment. “Levy’s narcissistic approach is akin to, ‘I bought an airplane, and I can’t make it fly. Therefore, planes don’t fly, despite the fact that I see them all above me,’” he tells The Scientist.

Mirkin stresses the number of studies in which the probes have been used successfully: “There is no controversy . . . There are over 40 papers reporting the successful use of such structures, involving over 100 different researchers, spanning three different continents,” he writes to The Scientist in an email. “I think the data and widespread use of such structures speak to their reliability and utility for measuring RNA content in live cells,” he adds.

After “dishonest Rafael [sic] Levy and his band of trolls”, “scientific terrorist” and “scientific zealot”, I suppose the “narcissist” ad hominem could be considered more moderate?


Echo and Narcissus, John William Waterhouse, 1903, Walker Art Gallery, Liverpool. Narcissus, too busy contemplating his image, cannot see Echo let alone planes flying above him.

As The Scientist notes, I am hardly the only one who cannot make the SmartFlare plane fly. And the plane manufacturer has stopped selling its product and does not answer questions from journalists.

Guest post: my experience with the SmartFlares, by James Elliott

This is a guest post by James Elliott, manager of the Flow Cytometry Facility at the MRC Institute (LMS) in Hammersmith.

***

I thought it might be useful to add to the discussion about SmartFlares, their marketing, and the difficulties in disseminating negative results by passing on my own experience.

We tested the system back in 2013. Sorting primary murine T cells and thymocytes on the basis of RNA expression was perhaps of most immediate interest, but of course there were countless potential applications.

The Merck Millipore rep advised us that the caveat we should be aware of in using SmartFlares was that the particles are taken up by endocytosis and that not all cells possess the machinery to allow this. Indeed, he mentioned data he had seen that only around 20% of T cells take up probes. This was puzzling as it suggested either a specific subset of endocytosis-competent cells or alternatively that uptake by T cells was broad but weak, such that only 20% of cells fell into a positive, above background gate. This in itself seemed a potentially interesting question.

To address the usefulness of SmartFlares in primary T cells (and some lymphocyte lines we had in culture), it was agreed with the rep that the sensible first step was to buy positive (an ‘Uptake’ probe whose fluorescence is always ‘on’, even in the absence of specific RNA) and negative (scrambled, ‘fluorescence off’) controls.

Everyone rightly comments on the extremely high price of the reagents and though we were given a discount, it remained an expensive look-see experiment.

It was useful that, on the day we tried out the probes, we were lucky enough to have someone with us from Merck whom we like and trust to oversee what we were doing – he could vouch for the fact that we did the experiments correctly. We looked for probe uptake both on a flow cytometer and on an ImageStream imaging cytometer.

Whilst we had expected lymphocytes to take up the probes poorly, in fact the big problem we had was that whilst all, or nearly all, cells took up the probes, the signal from cells given the scrambled probe – notionally ‘always off’ – was just as high as, and in most cases a little higher than, that with the positive control ‘Uptake’ probe. Both showed a marked, log shift in fluorescence.

So – big problems! Why was the scrambled probe, which should have been dim or ‘fluorescence off’ giving us such a high signal? Indeed, if anything our negative control was brighter than the positive.

The rep consulted with the technical team, who were quick to point out that a more meaningful comparison would have been between a scrambled and a housekeeping probe (the Uptake probe merely being useful to show a qualitative result), yet this seemed to me to fudge the issue: first, surely the uptake and scrambled probes should be roughly comparable in the number of fluorochrome molecules attached, or the uptake control would be of limited value – it would give a yes/no answer as to whether the cells would take up probe, but would give little clue as to efficiency. Second, the strategy for validating the system had been agreed with the rep; it was not great to then come back and say that actually this was not a good test after all. Third, and most importantly, a system in which the negative control (‘fluorescence off’!) gives a log shift in fluorescence is likely to be almost completely useless: the background would be far too high for all but the most abundant markers.

In addition, it hardly inspired confidence that the company seemed to have validated the system very poorly – why else would they offer a vague suggestion that maybe 20% of T cells take up probe when, in our careful (and observed) hands, they did so rather efficiently? Interestingly, in this respect I later read on a cytometry forum that, according to one US user, the company had been very up front from the beginning that primary lymphocytes don’t take up the probes. This was doubly untrue – lymphocytes do take up the probes, and in the UK at least we were not told that primary lymphocytes didn’t take them up; the rep thought 20% of T cells did so, but was unsure about the data. Again, I was left with the impression of a poorly validated system sold by reps who were largely in the dark.

The most likely explanation for our results, arrived at in follow-up discussions with the company, was that the scrambled probe had degraded intracellularly and that this can happen in a cell type-specific way. This would mean there is a cell type-specific optimum time window in which there is a satisfactory balance between cleavage by target RNAs and non-specific cleavage. Of course we had followed the instructions we were given at the time, but it now appeared these probably weren’t correct for our (hardly esoteric!) cells.

The suggestion was therefore that as many controls as possible would be wise.

Clearly this had become completely untenable as a system – we would have to buy hugely expensive probes and – if they worked at all, which we still didn’t know – would have to do a lot of work to establish not only the usual factors such as concentration, but also timing. And how narrow might the optimal time window be where specificity was apparent? An hour? Less? And background from non-specific signal from degrading probes would be likely to be (at least in the cells we were most interested in) a major problem for any RNA that wasn’t highly expressed.

We decided to cut our losses. I applaud those who can follow up and publish negative data that will be useful to the scientific community, but it seemed likely that for us this would end up far too expensive financially and in time and effort – quite possibly simply to show that the system might just about work, but not in any way that would be practically useful.


Scientific terrorist

At the 2018 American Chemical Society National Meeting in Boston, I asked a question to Chad Mirkin after his talk on Spherical Nucleic Acids. This is what I said:

In science, we need to share the bad news as well as the good news. In your introduction you mentioned four clinical trials. One of them has reported: it showed no efficacy, and Purdue Pharma, which was supposed to develop the drug, decided not to pursue it further. You also said that 1600 forms of NanoFlares were commercially available. This is not true anymore, as the distributor has pulled the product because it does not work. Finally, I have a question: what is the percentage of nanoparticles that escape the endosome?

I had written my question out and I asked exactly this, although not in one block, as he started answering before I had made all my points. He became very angry. The exchange lasted maybe five minutes. Towards the end he said that no one is reading my blog (who cares), that no one agrees with me; he called me a “scientific zealot” and a “scientific terrorist”. The packed room was shell-shocked. We then moved swiftly to the next talk.

Two group leaders, one from North America and the other one from Europe, came to me afterwards.

Group leader 1:

Science is ever evolving and evidence-based. The evidence is gathered by first asking questions. I witnessed an interaction between two scientists: one asking his questions gracefully, and one responding in a manner unbecoming of a Linus Pauling Medalist. It took courage to stand in front of a packed room of scientists and peers to ask, in a non-aggressive manner, those questions that deserved an answer. It took even more courage not to become reactive when the respondent was aggressive and belittling. I certainly commended Raphael Levy for how he handled the aggressive response from Chad Mirkin. Even in disagreements, you can respond in a more professional manner. Not only is name calling inappropriate, revealing the outcomes of reviewers’ opinions from a confidential peer-review process is unprofessional and unethical.*

Lesson learned: hold yourself to a high standard and integrity.

Group leader 2:

Many conferences suffer from uninteresting discussions after talks: there are questions and there are answers. So far so good. Only in rare cases does a critical mind start a discussion, or ask questions which imply some disagreement with the presented facts. Here I was surprised at how a renowned expert like Chad Mirkin flew into a rage at such questions from Raphael Levy, and how unprofessional his reaction was. It was not science any longer, it was personal aggression, and this raises the question of why Chad Mirkin acted like this. I do not think that this strategy helps to get more acceptance from the audience. I pay tribute to Raphael Levy, because I think science needs critical minds, and one should not stay quiet for fear of being attacked by a speaker. Science is full of statements about how well everything works, and optimism is the fuel that keeps research running. There is nothing wrong with this, but one definitely also needs critical questions to make progress, and what we don’t need is unprofessional behaviour and discreditation.

* Group leader 1 refers here to the outcome of the reviews of this article which you can read on ChemrXiv and which was (predictably) rejected by Nature Biomedical Engineering. During the incident Chad Mirkin used these reviews to attack me.

Update: some reactions on Twitter:

“re. your exchange at if being a critical thinker is a I think this is something we should all aspire to be. Good for you.” @wilkinglab

“Do you know Rapha’s blog? Not true that no one is reading it! It is the true gem and a rare truth island!” @zk_nano

“Wow, that’s shockingly uncool.” @sean_t_barry

“What an unprofessional guy.”  @SLapointeChem

“Calling a fellow researcher a “scientific terrorist” for raising concerns and asking a question (even if you disagree with them) is shocking. Sorry to hear that there wasn’t any real discussion instead, would’ve been interesting.” @bearore

“Surprised this isn’t getting more pub. One must wonder at what point does one’s ego/reputation become more important than the science, which ABSOLUTELY must include the bad with the good.” @Ben_Jimi440

“Keep fighting the good fight tenaciously, Raphael. Like the detectives in those old film noir shows… 🤜🏼🤛🏽”  @drheaddamage

 

Multimodal cell tracking by combining gold nanorods and reporter genes (webinar)

Toni Plagge and I gave a webinar (organised by iThera Medical) yesterday. It mostly covers these two papers: