Philip Moriarty

Please read this leaflet carefully before taking to Twitter

1. Name of the medicinal product

TWITTIVIR 5% w/w cream

2. Qualitative and quantitative composition

TWITTIVIR 5% medical grade w/w cream (cis:trans isomer 95:5)

3. Pharmaceutical form

Cream for topical application (usually to the finger tips).

4. Clinical particulars

 4.1 Therapeutic indications

TWITTIVIR 5% w/w cream is indicated for the treatment of Anemic Network Infection, Grant Blood Clot, Publication Circulatory Virus and Altmetric Intestinal Flu.

4.2 Posology and method of administration

TWITTIVIR 5% w/w cream is suitable for adults, children of 13 years of age and above, and the elderly. TWITTIVIR 5% w/w cream is for external use only and should not be applied to broken skin, mucous membranes or near the eyes.

4.3 Contraindications

TWITTIVIR 5% w/w cream is contra-indicated in subjects with known hypersensitivity to the product and its components. (group 1)

TWITTIVIR 5% w/w cream is contra-indicated in highly obsessive subjects. (group 2)

TWITTIVIR 5% w/w cream is strongly contra-indicated in subjects who cannot resist a Twitter spat with Louise Mensch. (group 3)

4.9 Overdose

Rare cases of overdose of TWITTIVIR 5% w/w cream have been reported, usually in patients from group 3 above. The effects can be serious, leading to grumpiness and even, in extreme cases (in parents), child neglect. In such cases, treatment should be stopped immediately.

 

Towards the end of the stripy controversy?

Last week saw the publication in PLoS ONE of Quy Khac Ong and Francesco Stellacci’s response to Stirling et al., “Critical Assessment of the Evidence for Striped Nanoparticles”, published a year earlier (November 2014; I am one of the co-authors).

The controversy had started with our publication of Stripy Nanoparticles Revisited after a three-year editorial process (2009-2012), and was followed by extensive discussion at this blog, on PubPeer, and in a few other places.

Here is a short statement in response to Ong and Stellacci. Since theirs was a response to Stirling et al., Julian Stirling was invited to referee their submission (report).

We are pleased that Ong and Stellacci have responded to our paper, Critical assessment of the evidence for striped nanoparticles, PLoS ONE 9 e108482 (2014). Each of their rebuttals of our critique has, however, already been addressed some time ago, whether in our original paper, in the extensive PubPeer threads associated with that paper (and its arXiv preprint), or in a variety of blog posts. Indeed, arguably the strongest evidence against the claim that highly ordered stripes form in the ligand shell of suitably functionalised nanoparticles comes from Stellacci and co-authors’ own recent work, published shortly after we submitted our PLOS ONE critique. This short and simple document compares the images acquired from ostensibly striped nanoparticles with control particles where, for the latter (and as claimed throughout the work of Stellacci et al.), stripes should not be present. We leave it to the reader to draw their own conclusions.

At this point, we believe that little is to be gained from continuing our debate with Stellacci et al. We remain firmly of the opinion that the experimental data to date show no evidence for the formation of the “highly ordered” striped morphology claimed throughout the work of Stellacci and co-workers and, for the reasons we have detailed at considerable length previously, we do not find the counter-claims in Ong and Stellacci in any way compelling. We have therefore clearly reached an impasse.

It is thus now up to the nanoscience community to come to its own judgement regarding the viability of the striped nanoparticle hypothesis. As such, we would very much welcome STM studies from independent groups not associated with any of the research teams involved in the controversy to date. For completeness, we append below the referee reports which JS submitted on Ong and Stellacci’s manuscript.

Julian Stirling, Raphaël Lévy, and Philip Moriarty, November 16, 2015

 

 

Big tussle over tiny particles, by Lauren K. Wolf, C&EN

Lauren K. Wolf has written a nice overview of the stripy nanoparticle controversy for Chemical & Engineering News, the weekly magazine published by the American Chemical Society. It starts like this:

AS TRUTH SEEKERS, scientists often challenge one another’s work and debate over the details. At the first-ever international scientific conference, for instance, leading chemists argued vociferously over how to define a molecule’s formula. A lot of very smart people at the meeting, held in Germany in 1860, insisted that water was OH, while others fought for H₂O.

That squabble might seem tame compared with a dispute that’s been raging in the nanoscience community during the past decade. […]

Read it all here… if you have access. If you don’t, email me and I will send you a pdf.

 

Whither stripes?

Philip Moriarty

This is a guest post by Philip Moriarty, Professor of Physics at the University of Nottingham

A few days ago, Raphael highlighted the kerfuffle that our paper, Critical assessment of the evidence for striped nanoparticles, has generated over at PubPeer and elsewhere on the internet. (This excellent post from Neuroskeptic is particularly worth reading – more on this below). At one point the intense interest in the paper and associated comments thread ‘broke’ PubPeer — the site had difficulty dealing with the traffic, leading to this alert:

At the time of writing, there are seventy-eight comments on the paper, quite a few of which are rather technical and dig down into the minutiae of the many flaws in the striped nanoparticle ‘oeuvre’ of Francesco Stellacci and co-workers.  It is, however, now getting very difficult to follow the thread over at PubPeer, partly because of the myriad comments labelled “Unregistered Submission” – it has been suggested that PubPeer consider modifying their comment labelling system – but mostly because of the rather circular nature of the arguments and the inability to incorporate figures/images directly into a comments thread to facilitate discussion and explanation. The ease of incorporating images, figures, and, indeed, video in a blog post means that a WordPress site such as Raphael’s is a rather more attractive proposition when making particular scientific/technical points about Stellacci et al.’s data acquisition/analysis protocols. That’s why the following discussion is posted here, rather than at PubPeer.

Unwarranted assumptions about unReg?

Julian Stirling, the lead author of the “Critical assessment…” paper, and I have spent a considerable amount of time and effort over the last week addressing the comments of one particular “Unregistered Submission” at PubPeer who, although categorically stating right from the off that (s)he was in no way connected with Stellacci and co-workers, nonetheless has remarkably in-depth knowledge of a number of key papers (and their associated supplementary information) from the Stellacci group.

It is important to note that although our critique of Stellacci et al.’s data has, to the best of our knowledge, attracted the greatest number of comments for any paper at PubPeer to date, this is not indicative of widespread debate about our criticism of the striped nanoparticle papers (which now number close to thirty). Instead, the majority of comments at PubPeer are very supportive of the arguments in our “Critical assessment…” paper. It is only a particular commenter, who does not wish to log into the PubPeer site and is therefore labelled “Unregistered Submission” every time they post (I’ll call them unReg from now on), that is challenging our critique.

We have dealt repeatedly, and forensically, with a series of comments from unReg over at PubPeer. However, although unReg has made a couple of extremely important admissions (which I’ll come to below), they continue to argue, on entirely unphysical grounds, that the stripes observed by Stellacci et al. in many cases are not the result of artefacts and improper data acquisition/analysis protocols.

unReg’s persistence in attempting to explain away artefacts could be due to a couple of things: (i) we are being subjected to a debating approach somewhat akin to the Gish gallop. (My sincere thanks to a colleague – not at Nottingham, nor, indeed, in the UK – who has been following the thread at PubPeer and suggested this to us by e-mail. Julian also recently raised it in a comment elsewhere at Raphael’s blog which is well worth reading);  and/or (ii) our assumption throughout that unReg is familiar with the basic ideas and protocols of experimental science, at least at undergraduate level, may be wrong.

Because we have no idea of unReg’s scientific background – despite a couple of commenters at PubPeer explicitly asking unReg to clarify this point – we assumed that they had a reasonable understanding of basic aspects of experimental physics such as noise reduction, treatment of experimental uncertainties, accuracy vs precision etc… But Julian and I realised yesterday afternoon that perhaps the reason we and unReg keep ‘speaking past’ each other is because unReg may well not have a very strong or extensive background in experimental science.  Their suggestion at one point in the PubPeer comments thread that “the absence of evidence is not evidence of absence” is a rather remarkable statement for an experimentalist to make. We therefore suspect that the central reason why unReg is not following our arguments is their lack of experience with, and absence of training in, basic experimental science.

As such, I thought it might be a useful exercise – both for unReg and any students who might be following the debate – to adopt a slightly more tutorial approach in the discussion of the issues with the stripy nanoparticle data so as to complement the very technical discussion given in our paper and at PubPeer. Let’s start by looking at a selection of stripy nanoparticle images ‘through the ages’ (well, over the last decade or so).

The Evolution of Stripes: From feedback loop ringing to CSI image analysis protocols

The images labelled 1 – 12 below represent the majority of the types of striped nanoparticle image published to date. (I had hoped to put together a 4 x 4 or 4 x 5 matrix of images but, due to image re-use throughout Stellacci et al.’s work, there aren’t enough separate papers to do that).

Stripes across the ages

Putting the images side by side like this is very instructive. Note the distinct variation in the ‘visibility’ of the stripes. Stellacci and co-workers will claim that this is because the terminating ligands are not the same on every particle. That’s certainly one interpretation. Note, however, that images 1, 2, 4, and 11 each have the same type of octanethiol-mercaptopropionic acid (2:1) termination and that we have shown, through an analysis of the raw data, that images #1 and #11 result from a scanning tunnelling microscopy artefact known as feedback loop ringing (see p.73 of this scanning probe microscopy manual).

A key question which has been raised repeatedly (see, for example, Peer 7’s comment in this sub-thread) is just why Stellacci et al., or any other group (including those selected by Francesco Stellacci to independently verify his results), has not reproduced the type of exceptionally high contrast images of stripes seen in images #1,#2,#3, and #11 in any of the studies carried out in 2013. This question still hangs in the air at PubPeer…

Moreover, the inclusion of Image #5 above is not a mistake on my part – I’ll leave it to the reader to identify just where the stripes are supposed to lie in this image. Images #10 and #12 similarly represent a challenge for the eagle-eyed reader, while Image #4 warrants its own extended discussion below because it forms a cornerstone of unReg’s argument that the stripes are real. Far from supporting the stripes hypothesis, however, Stellacci et al’s own analysis of Image #4 contradicts their previous measurements and arguments (see “Fourier analysis or should we use a ruler instead?” below).

What is exceptionally important to note is that, as we show in considerable detail in “Critical assessment…”, a variety of artefacts and improper data acquisition/analysis protocols – and not just feedback loop ringing – are responsible for the variety of striped images seen above. For those with no experience in scanning probe microscopy, this may seem like a remarkable claim at first glance, particularly given that those striped nanoparticle images have led to over thirty papers in some of the most prestigious journals in nanoscience (and, more broadly, in science in general). However, we justify each of our claims in extensive detail in Stirling et al. The key effects are as follows:

– Feedback loop ringing (see, for example, Fig. 3 of “Critical assessment…”. Note that the nanoparticles in that figure are entirely ligand-free).

– The “CSI” effect. We know from access to (some of) the raw data that a very common approach to STM imaging in the Stellacci group (up until ~2012) was to image very large areas with relatively low pixel densities and then rely on offline zooming into areas no more than a few tens of pixels across to “resolve” stripes. This ‘CSI’ approach to STM is unheard of in the scanning probe community because if we want higher resolution images, we simply reduce the scan area. The Stellacci et al. method can be used to generate stripes on entirely unfunctionalised particles, as shown here (and see the sketch immediately after this list).

– Observer bias. The eye is remarkably adept at picking patterns out of uncorrelated noise. Fig. 9 in Stirling et al. demonstrates this effect for ‘striped’ nanoparticles. I have referred to this post from my erstwhile colleague Peter Coles repeatedly throughout the debate at PubPeer. I recommend that anyone involved in image interpretation read Coles’ post.
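To make the ‘CSI’ point concrete, here is a minimal sketch (in Python, using numpy and scipy; purely illustrative, and emphatically not the Stellacci group’s actual processing chain) of what happens when you take a digital zoom of a patch only a few tens of pixels across and interpolate it up: uncorrelated noise turns into smooth, apparently ‘resolved’ features.

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)

# A "large area" scan at low pixel density: each particle spans only a
# handful of pixels, so its apparent texture is dominated by noise.
large_scan = rng.normal(size=(512, 512))

# Offline "zoom": crop a patch only 16 pixels across...
patch = large_scan[100:116, 100:116]

# ...and interpolate it up by a large factor. Cubic interpolation acts as a
# low-pass filter, so the uncorrelated noise is smoothed into broad,
# slowly varying features that the eye readily reads as structure.
zoomed = zoom(patch, 16, order=3)          # 16 x 16  ->  256 x 256 pixels

print(patch.shape, zoomed.shape)           # (16, 16) (256, 256)

# To see the effect (requires matplotlib):
# import matplotlib.pyplot as plt
# fig, (ax1, ax2) = plt.subplots(1, 2)
# ax1.imshow(patch, cmap="gray");  ax1.set_title("raw 16 x 16 patch")
# ax2.imshow(zoomed, cmap="gray"); ax2.set_title("interpolated 'zoom'")
# plt.show()
```

The interpolated patch contains no information that was not already in the original 16 x 16 pixels; the apparent ‘features’ are simply low-pass-filtered noise.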

Fourier analysis, or should we use a ruler instead?

I love Fourier analysis. Indeed, about the only ‘Eureka!’ moment I had as an undergraduate was when I realised that the Heisenberg uncertainty principle is nothing more than a Fourier transform. (Those readers who are unfamiliar with Fourier analysis and might like a brief overview could perhaps refer to this Sixty Symbols video, or, for much more (mathematical) detail, this set of notes I wrote for an undergraduate module a number of years ago).

In Critical assessment… we show, via a Fourier approach, that the measurements of stripe spacing in papers published by Stellacci et al. in the period from 2006 to 2009 – and subsequently used to claim that the stripes do not arise from feedback loop ringing – are comprehensively incorrect. We are confident in our results here because of a clear peak in our Fourier space data (see Figures S1 and S2 of the paper).

Fabio Biscarini and co-workers, in collaboration with Stellacci et al., have attempted to use Fourier analysis to calculate the ‘periodicity’ of the nanoparticle stripes. They use the Fourier transform of the raw images, averaged in the slow scan direction. No peak is visible in this Fourier space data, even when plotted on a logarithmic scale in an attempt to increase contrast/visibility. Instead, the Fourier space data just show a decay with a couple of plateaus in it. They claim – erroneously, for reasons we cover below – that the corners of the second plateau and the continuing decay (called “shoulders” by Biscarini et al.) indicate the stripe spacing. To locate these shoulders they apply a fitting method.
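For comparison, here is a minimal sketch (illustrative parameters only; this is not the Biscarini et al. pipeline) of what a row-averaged power spectral density looks like when a genuine periodicity is present versus when the image is uncorrelated noise: a real 1 nm corrugation produces a clear peak at 1 cycle/nm, whereas noise alone gives only a decaying spectrum with no peak to point to.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512                       # pixels per scan line
pixel_size = 0.2              # nm per pixel (illustrative)
x = np.arange(n) * pixel_size

def row_averaged_psd(image):
    """PSD of each scan line (fast scan direction), averaged over the
    slow scan direction."""
    spectra = np.abs(np.fft.rfft(image, axis=1)) ** 2
    return spectra.mean(axis=0)

freqs = np.fft.rfftfreq(n, d=pixel_size)      # spatial frequencies in cycles/nm

# Image A: a genuine 1 nm periodicity buried in noise.
stripes = 0.5 * np.sin(2 * np.pi * x / 1.0)   # 1 nm period
image_a = stripes + rng.normal(scale=1.0, size=(256, n))

# Image B: uncorrelated noise only.
image_b = rng.normal(scale=1.0, size=(256, n))

psd_a = row_averaged_psd(image_a)
psd_b = row_averaged_psd(image_b)

# The periodic image shows a sharp peak near 1 cycle/nm; the noise-only
# image shows no distinguished frequency at all.
peak_freq = freqs[np.argmax(psd_a[1:]) + 1]   # skip the DC component
print(f"peak in image A at {peak_freq:.2f} cycles/nm")
print(f"max/median PSD ratio, image A: {psd_a[1:].max() / np.median(psd_a[1:]):.1f}")
print(f"max/median PSD ratio, image B: {psd_b[1:].max() / np.median(psd_b[1:]):.1f}")
```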

We describe in detail in “Critical assessment…” that not only is the fitting strategy used to extract the spatial frequencies highly questionable – a seven free-parameter fit to selectively ‘edited’ data is always going to be somewhat lacking in credibility – but that the error bars on the spatial frequencies extracted are underestimated by a very large amount.

Moreover, Biscarini et al. claim the following in the conclusions of their paper:

“The analysis of STM images has shown that mixed-ligand NPs exhibit a spatially correlated architecture with a periodicity of 1 nm that is independent of the imaging conditions and can be reproduced in four different laboratories using three different STM microscopes. This PSD [power spectral density; i.e. the modulus squared of the Fourier transform] analysis also shows…”

Note that the clear, and entirely misleading, implication here is that use of the power spectral density (PSD – a way of representing the Fourier space data) analysis employed by Biscarini et al. can identify “spatially correlated architecture”. Fig. 10 of our “Critical assessment…” paper demonstrates that this is not at all the case: the shoulders can equally well arise from random speckling.

This unconventional approach to Fourier analysis is not even internally consistent with measurements of stripe spacings as identified by Stellacci and co-workers. Anyone can show this using a pen, a ruler, and a print-out of the images of stripes shown in Fig. 3 of Ong et al. It’s essential to note that Ong et al. claim that they measure a spacing of 1.2 nm between the ‘stripes’; this 1.2 nm figure is very important in terms of consistency with the data in earlier papers. Indeed, over at PubPeer, unReg uses it as a central argument of the case for stripes:

“… the extracted characteristic length from the respective fittings results in a characteristic length for the stripes of 1.22 +/- 0.08. This is close to the 1.06 +/-0.13 length for the stripes of the images in 2004 (Figure 3a in Biscarini et al.). Instead, for the homoligand particles, the number is much lower: 0.76 +/- 0.5 [(sic). unReg means ‘+/- 0.05’ here. The unit is nm], as expected. So the characteristic lengths of the high resolution striped nanoparticles of 2013 and the low resolution striped nanoparticles of 2004 match within statistical error, ***which is strong evidence that the stripe features are real.***”

Notwithstanding the issue that the PSD analysis is entirely insensitive to the morphology of the ligands (i.e. it cannot distinguish between stripes and a random morphology), and can be abused to give a wide range of results, there’s a rather simpler and even more damaging inconsistency here.

A number of researchers in the group here at Nottingham have repeated the ‘analysis’ in Ong et al. Take a look at the figure below. (Thanks to Adam Sweetman for putting this figure together). We have repeated the measurements of the stripe spacing for Fig. 3 of Ong et al. and we consistently find that, instead of a spacing of 1.2 nm, the separation of the ‘stripes’ using the arrows placed on the image by Ong et al. themselves has a mean value of 1.6 nm (± 0.1 nm). What is also interesting to note is that the placement of the arrows “to guide the eye” does not particularly agree with a placement based on the “centre of mass” of the features identified as stripes. In that case, the separation is far from regular.

We would ask that readers of Raphael’s blog – if you’ve got this far into this incredibly long post! – repeat the measurement to convince yourselves that the quoted 1.2 nm value does not stand up to scrutiny.

Measuring stripes with a ruler
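If you would rather feed your ruler readings into a script than do the arithmetic by hand, a trivial helper like the following will do. The positions listed are placeholders, not the arrow coordinates from Ong et al.; substitute the values you read off your own print-out.

```python
import numpy as np

def spacing_stats(positions_nm):
    """Mean spacing between consecutive markers and its standard error."""
    spacings = np.diff(np.sort(np.asarray(positions_nm, dtype=float)))
    return spacings.mean(), spacings.std(ddof=1) / np.sqrt(len(spacings))

# Purely illustrative placeholder positions (nm along a line profile) --
# replace with your own measurements.
example_positions = [0.0, 2.0, 4.1, 5.9, 8.0]
mean, sem = spacing_stats(example_positions)
print(f"mean spacing = {mean:.2f} nm +/- {sem:.2f} nm")
```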

So, not only does the PSD analysis carried out by Biscarini et al. not recover the real space value for the stripe spacing (leaving aside the question of just how those stripes were identified), but there is a significant difference between the stripe spacing claimed in the 2004 Nature Materials paper and that in the 2013 papers. Both of these points severely undermine the case for stripy nanoparticles. Moreover, the inability of Ong et al. to report the correct spacing for the stripes from simple measurements of their STM images raises significant questions about the reliability of the other data in their paper.

As the title of this post says, whither stripes?

Reducing noise pollution

A very common technique in experimental science for increasing the signal-to-noise ratio (SNR) is signal averaging. I have spent many long hours at synchrotron beamlines while we repeatedly scanned the same energy window, watching as a peak gradually emerged from the noise. But averaging is of course not restricted to synchrotron spectroscopy – practically every area of science, including SPM, can benefit from the advantages of simply summing a signal over the course of time.
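The arithmetic behind this is simple: averaging N statistically independent scans reduces the noise by a factor of sqrt(N), so the SNR grows as sqrt(N). A minimal sketch (illustrative numbers only):

```python
import numpy as np

rng = np.random.default_rng(2)
n_points = 1000
# A weak Gaussian peak (amplitude 0.2) buried in noise of unit amplitude.
signal = 0.2 * np.exp(-0.5 * ((np.arange(n_points) - 500) / 30.0) ** 2)
noise_sigma = 1.0

def snr_after_averaging(n_scans):
    """Average n_scans noisy copies of the same spectrum and estimate the SNR."""
    scans = signal + rng.normal(scale=noise_sigma, size=(n_scans, n_points))
    mean_scan = scans.mean(axis=0)
    residual_noise = (mean_scan - signal).std()   # noise left after averaging
    return signal.max() / residual_noise

for n in (1, 4, 16, 64, 256):
    print(f"{n:4d} scans: SNR ~ {snr_after_averaging(n):.1f}")

# The SNR grows roughly as sqrt(n): each factor of 4 in the number of scans
# roughly doubles it. With these illustrative numbers the peak only rises
# clearly above the residual noise after a few hundred scans.
```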

A particularly frustrating aspect of the discussion at PubPeer, however, has been unReg’s continued assertion that even though summing of consecutive images of the same area gives rise to completely smooth particles (see Fig. 5(k) of “Critical assessment…”), this does not mean that there is no signal from stripes present in the scans. This claim has puzzled not just Julian and myself, but a number of other commenters at PubPeer, including Peer 7:

“If a feature can not be reproduced in two successive equivalent experiments then the feature does not exist because the experiment is not reproducible. Otherwise how do you chose between two experiments with one showing the feature and the other not showing it? Which is the correct one ? Please explain to me.

Furthermore, if a too high noise is the cause of the lack of reproducibility than the signal to noise ratio is too low and once again the experiment has to be discarded and/or improved to increase this S/N. Repeating experiments is a good way to do this and if the signal does not come out of the noise when the number of experiment increases than it does not exist.

This is Experimental Science 101 and may (should) seem obvious to everyone here…”

I’ve put together a short video of a LabVIEW demo I wrote for my first year undergrad tutees to show how effective signal averaging can be. I thought it might help to clear up any misconceptions…

The Radon test

There is yet another problem, however, with the data from Ong et al. which we analysed in the previous section. This one is equally fundamental. While Ong et al. have drawn arrows to “guide the eye” to features they identify as stripes (and we’ve followed their guidelines when attempting to identify those ‘stripes’ ourselves), those stripes really do not stand up tall and proud like their counterparts ten years ago (compare images #1 and #4, or compare #4 and #11 in that montage above).

Julian and I have stressed to unReg a number of times that it is not enough to “eyeball” images and pull out what you think are patterns. Particularly when the images are as noisy as those in Stellacci et al’s recent papers, it is essential to adopt a more quantitative, or at least less subjective, approach. In principle, Fourier transforms should be able to help with this, but only if they are applied robustly. If spacings identified in real space (as measured using a pen and ruler on a printout of an image) don’t agree with the spacings measured by Fourier analysis – as for the data of Ong et al. discussed above – then this really should sound warning bells.

One method of improving objectivity in stripe detection is to use a Radon transform (which for reasons I won’t go into here – but Julian may well in a future post! – is closely related to the Fourier transform). Without swamping you in mathematical detail, the Radon transform is the projection of the intensity of an image along a radial line at a particular angular displacement. (It’s important in, for example, computerised tomography). In a nutshell, lines in an image will show up as peaks in the Radon transform.

So what does it look like in practice, and when applied to stripy nanoparticle images? (All of the analysis and coding associated with the discussion below are courtesy of Julian yet again). Well, let’s start with a simulated stripy nanoparticle image where the stripes are clearly visible – that’s shown on the left below and its Radon transform is on the right.

Radon-1

Note the series of peaks appearing at an angle of ~ 160°. This corresponds to the angular orientation of the stripes. The Radon transform does a good job of detecting the presence of stripes and, moreover, objectively yields the angular orientation of the stripes.

What happens when we feed the purportedly striped image from Ong et al. (i.e. Image #4 in the montage) into the Radon transform? The data are below. Note the absence of any peaks at angles anywhere near the vicinity of the angular orientation which Ong et al. assigned to the stripes (i.e. ~ 60°; see image on lower left below)…

Radon-2
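For anyone who would like to try this kind of check on their own images, here is a minimal sketch of the approach in Python using scikit-image’s radon function (this is not Julian’s code, and the synthetic image and parameters are purely illustrative). One simple way to quantify “lines show up as peaks” is to look at the variance of each projection: parallel stripes make the projection taken along the stripe direction oscillate strongly, so one angle stands out, whereas uncorrelated noise singles out no angle.

```python
import numpy as np
from skimage.transform import radon

rng = np.random.default_rng(3)
n = 128
y, x = np.mgrid[0:n, 0:n]

# Synthetic "striped particle": parallel stripes at a fixed orientation,
# buried in noise, confined to a disc (the "particle").
phi = np.deg2rad(60.0)                               # stripe orientation (illustrative)
across = x * np.cos(phi) + y * np.sin(phi)           # coordinate across the stripes
image = np.sin(2 * np.pi * across / 8.0)             # one stripe every 8 pixels
image += rng.normal(scale=0.5, size=image.shape)
r = np.hypot(x - n / 2, y - n / 2)
image[r > n / 2 - 2] = 0.0                           # zero outside the disc

# A noise-only control "particle" for comparison.
control = rng.normal(scale=0.5, size=image.shape)
control[r > n / 2 - 2] = 0.0

angles = np.arange(0.0, 180.0, 1.0)
sino_stripes = radon(image, theta=angles, circle=True)
sino_control = radon(control, theta=angles, circle=True)

# The stripe modulation survives (and oscillates strongly) in the projection
# taken along the stripe direction, so the variance of that projection stands
# out; for noise alone no projection angle is special.
var_stripes = sino_stripes.var(axis=0)
var_control = sino_control.var(axis=0)
print(f"striped image: strongest projection at {angles[np.argmax(var_stripes)]:.0f} deg, "
      f"max/median variance ratio = {var_stripes.max() / np.median(var_stripes):.1f}")
print(f"control image: max/median variance ratio = {var_control.max() / np.median(var_control):.1f}")
```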

Hyperempiricism

If anyone’s still left reading out there at this point, I’d like to close this exceptionally lengthy post by quoting from Neuroskeptic’s fascinating and extremely important “Science is Interpretation” piece over at the Discover magazine blogging site:

The idea that new science requires new data might be called hyperempiricism. This is a popular stance among journal editors (perhaps because it makes copyright disputes less likely). Hyperempiricism also appeals to scientists when their work is being critiqued; it allows them to say to critics, “go away until you get some data of your own”, even when the dispute is not about the data, but about how it should be interpreted.”

Meanwhile, back at PubPeer, unReg has suggested that we should “… go back to the lab and do more work”.

 

The Emperor’s New Stripes

Philip Moriarty

This is a guest post by Philip Moriarty, Professor of Physics at the University of Nottingham

Since the publication of the ACS Nano and Langmuir papers to which Mathias Brust refers in the previous post, I have tried not to get drawn into posting comments on the extent to which the data reported in those papers ‘vindicates’ previous work on nanoparticle stripes by Francesco Stellacci’s group. (I did, however, post some criticism at ChemBar, which I note was subsequently uploaded, along with comments from Julian Stirling, at PubPeer).  This is because we are working on a series of experimental measurements and re-analyses of the evidence for stripes to date (including the results published in the ACS Nano and Langmuir papers) and would very much like to submit this work before the end of the year.

Mathias’ post, however, has prompted me to add a few comments in the blogosphere, courtesy of Rapha-z-Lab.

It is quite remarkable that the ACS Nano and Langmuir papers are seen by some to provide a vindication of previous work by the Stellacci group on stripes. I increasingly feel as if we’re participating in some strange new nanoscale ‘reimagining’ of The Emperor’s New Clothes! Mathias clearly and correctly points out that the ACS Nano and Langmuir papers published earlier this year provide no justification for the earlier work on stripes. Let’s compare and contrast an image from the seminal 2004 Nature Materials paper with Fig. S7 from the paper published in ACS Nano earlier this year…

moriarty comparison

Note that the image on the right above is described in the ACS Nano paper as “reproducing” high resolution imaging of stripes acquired in other labs. What is particularly important about the image on the right is that it was acquired under ultrahigh vacuum conditions and at a temperature of 77 K by Christoph Renner’s group at Geneva. UHV and 77 K operation should give rise to extremely good instrumental stability and provide exceptionally clear images of stripes. Moreover, Renner is a talented and highly experienced probe microscopist. And yet, nothing even vaguely resembling the types of stripes seen in the image on the left is observed in the STM data. It’s also worth noting that the image from Renner’s group features in the Supplementary Information and not the main paper.

Equally remarkable is that the control sample discussed in the ACS Nano paper (NP3) shows features which are, if anything, much more like stripes than the so-called stripy particles. But the authors don’t mention this. I’ve included a comparison below of Fig. 5(c) from the ACS Nano paper with a contrast-enhanced version. I’ll leave it to the reader to make up their own mind as to whether or not there is greater evidence for stripe formation in the image shown on the right above, or in the image shown on the right below…

moriarty 2

Finally, the authors neglect any consideration at all of convolution between the tip structure and the sample structure. One can’t just assume that the tip structure plays no role in the image formation mechanism – scanning probe microscopy is called scanning probe microscopy for a reason. This is particularly the case when the features being imaged are likely to have a comparable radius of curvature to the tip.
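To put rough numbers on this: in the simplest geometric picture (an ideally spherical tip apex of radius R scanning an isolated spherical particle of radius r on a flat substrate), the particle appears in the image with a width of roughly 4√(Rr). For a perfectly reasonable 5 nm tip radius and a 2.5 nm particle radius that comes out at about 14 nm, nearly three times the particle’s true diameter, so the tip geometry is anything but a negligible contribution to the recorded topography. (This is only an order-of-magnitude estimate, of course, but it illustrates why convolution cannot simply be ignored.)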

I could spend quite a considerable amount of time discussing other deficiencies in the analyses in the Langmuir and ACS Nano papers but we’ll cover this at length in the paper we’re writing.

Browsing the archive

Philip Moriarty

Update (11/06/2013): the link to Francesco Stellacci’s raw data no longer works; here is a mirror of the files that were released last month.

This is a guest post by Philip Moriarty, Professor of Physics at the University of Nottingham

It is to the credit of Francesco Stellacci (FS) that he has now uploaded the data I requested some time ago. I appreciate that this will have been a time-consuming task – I sometimes struggle to find files I saved last week, let alone locate data from almost a decade ago! It’s just a shame that the provision of the data necessitated the involvement of the journal editors (and possibly required prompting from other sources).

It is also worth noting that, over the last week, FS has been very helpful in providing timely responses to my questions regarding the precise relationship of the data files in the archive to the figures published in the papers.

Unfortunately, the data in the archive leave an awful lot to be desired.

In the near future, we will write up a detailed analysis of the data in the archive, combining it with scanning probe data of nanoparticles we have acquired, to show that the conclusions reached by FS et al. on the basis of their STM data, and associated analyses, do not stand up to scrutiny. Like Raphaël, I do not agree with FS that the only appropriate forum for scientific debate is the primary literature. (I’ve rambled on in my usual loquacious style recently about the importance of embedding ‘Web 2.0’ debate within the literature). Nonetheless, given, for one thing, the extent to which the stripy nanoparticle papers have been cited, there is clearly significant scope in the primary literature for examining the reliability of FS’ experimental methodology and data analysis.

For now, I would simply like to highlight a number of the most important problems with the data in the archive.

The problems are multi-faceted and arise from a combination of: (i) feedback loop artifacts; (ii) image analysis based around exceptionally low pixel densities (and interpolation, applied apparently unknowingly, to achieve higher pixel densities); (iii) a highly selective choice/sampling of features for analysis in the STM images; and (iv) experimental uncertainties/error bars which are dramatically underestimated.

Predrag Djuranovic showed in 2005 how it was possible for stripes to appear on bare gold and ITO surfaces due to improper feedback loop settings. Julian Stirling, a final year PhD student in the Nanoscience Group here in Nottingham, has written a simulation code which shows that both stripes and “domains” can appear if care is not taken to choose the gains and scan speed appropriately. I’ll curtail my loquacity for now and let the images do the talking…

Stripes-and-domains

It’s worth comparing these images against the 3D-rendered experimental data of Fig. 3 of Jackson et al., Nature Materials 2004:

NatureMat

Much more information on the simulations will be provided in due course in the paper I mentioned above but I am sure that Julian will be more than happy to address questions in the comments section below.
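For readers who want a feel for where the ringing comes from without waiting for the paper, here is a deliberately minimal toy model (not Julian’s simulation code; all parameters are illustrative). The tip height is controlled by an integral feedback loop acting through a z-piezo with a finite response time; crank the gain up too far and the closed loop becomes underdamped, so every sharp feature is followed by decaying oscillations that get recorded as ‘structure’ which simply is not there on the surface.

```python
import numpy as np

def scan_line(surface, dt=1e-5, tau=1e-3, gain=500.0):
    """Toy constant-current feedback loop integrated along one scan line.

    surface : surface height (nm) at each time step
    dt      : time per pixel (s)
    tau     : response time of the z-piezo / loop electronics (s)
    gain    : integral gain of the controller (1/s)
    Returns the recorded tip height, i.e. what ends up in the STM image.
    """
    z = surface[0]       # actual tip height
    z_cmd = surface[0]   # height requested by the controller
    trace = np.empty_like(surface)
    for i, h in enumerate(surface):
        error = h - z                # stands in for the (log) tunnel-current error
        z_cmd += gain * error * dt   # integral controller
        z += (z_cmd - z) / tau * dt  # piezo responds with a first-order lag
        trace[i] = z
    return trace

# A single hemispherical "nanoparticle" (radius 5 nm) on a flat substrate.
n = 20000
x = np.linspace(0.0, 100.0, n)                 # nm along the fast-scan axis
surface = np.zeros(n)
inside = np.abs(x - 30.0) < 5.0
surface[inside] = np.sqrt(25.0 - (x[inside] - 30.0) ** 2)

smooth = scan_line(surface, gain=250.0)    # well damped: follows the surface, no ringing
ringing = scan_line(surface, gain=2000.0)  # gain too high: overshoot and ringing

# Ringing shows up as decaying oscillations on the bare substrate just past
# the particle: the recorded height repeatedly dips below the true surface.
after = slice(np.searchsorted(x, 35.0), np.searchsorted(x, 55.0))

def describe(trace, label):
    seg = trace[after]
    crossings = int(np.count_nonzero(np.signbit(seg[:-1]) != np.signbit(seg[1:])))
    print(f"{label}: deepest apparent 'hole' = {seg.min():+.3f} nm, "
          f"zero crossings = {crossings}")

describe(smooth, "gain = 250 /s ")
describe(ringing, "gain = 2000 /s")

# Plot to see the oscillations (requires matplotlib):
# import matplotlib.pyplot as plt
# plt.plot(x, surface, "k--", label="surface")
# plt.plot(x, smooth, label="gain = 250 /s")
# plt.plot(x, ringing, label="gain = 2000 /s")
# plt.legend(); plt.xlabel("x (nm)"); plt.ylabel("z (nm)"); plt.show()
```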

Just in case anyone might think that generating stripes in a simulated STM doesn’t quite address the observation of stripes in an ‘ex silico’ environment, we have, of course, produced our own experimental images of stripy nanoparticles.

stripes-together

In (A) the feedback loop gain is increased dramatically at the scan line highlighted by the arrows. Stripes then appear in the upper 2/3 of the image. A low pass filter – which is, in essence, equivalent to an interpolation because it washes out high spatial frequencies – is then applied to a zoom of the image (shown in B), followed by a 3D rendering (C). The similarity with the Nature Materials figures above is striking.

The key aspect of the image above is that the nanoparticles do not have a ligand shell. These are the traditional citrate-stabilised colloidal Au nanoparticles known widely in the nanoscience community (and beyond). They were deposited onto a Au-on-mica sample from water. The stripes arise from feedback loop ringing.

Noise Pollution

Of course, FS’ argument has always been that his group can distinguish between stripes/features that are due to improper feedback loop settings and those that are “real”. Unfortunately, the data in the archive really do not support this claim. Again, we’ll provide a detailed analysis in the forthcoming paper, but a single image, entirely representative of the contents of the archive as a whole, is enough to show the limitations of Stellacci et al.’s analysis…

Uninterpolated

The image above is an uninterpolated digital zoom of one of the images from FS’ archive (yingotmpa2to1011706.006; 196 x 196 nm²; 512 x 512 pixels). Images of this type were used to measure the “periodicity” of ripples in nanoparticles for the statistical analyses described in Jackson et al. JACS 2006 and Hu et al. J. SPM 2009. Quite how one reliably measures a periodicity/ripple spacing for the image above (and all the others like it) is, I’m afraid to say, entirely beyond me.

I’ve noted previously that taking very low resolution STM scans (pixel size = 0.38 nm for the image above) which are analysed via heavily interpolated digital zooms is not, let’s say, the norm in the scanning probe microscopy community. There’s a very good reason for this – why would we be satisfied with low resolution images, where the pixel size is comparable to the features of interest, when we can simply reduce the scan area, and/or increase the pixel density, and “up” the effective resolution?

It is not good experimental practice, to put it mildly, to set the imaging conditions so that the size of a pixel is of the same order as the scale of the features in which you’re interested. This is a bad enough problem for images where the ripple spacing is proposed to be of order 0.7 nm (i.e. ~ 2 pixels), as for the image above. In Jackson et al., JACS 2006, however, it is claimed that the spacing between head groups for homoligand particles was also measured, and is stated to be 0.5 nm. (Unfortunately, these data are not included in the archive).

If this 0.5 nm measurement was reached on the basis of a similar type of imaging approach to that above, then there’s a fundamental problem (even if the image wasn’t generated due to feedback loop artifacts). The Nyquist-Shannon sampling theorem tells us that in order to measure the period of a wave without aliasing artefacts, our sampling frequency must be at least twice that of the highest frequency component. In other words, to measure something with a period of 0.5 nm, we should have, as an absolute maximum, a pixel size of 0.25 nm. I have asked FS whether the same imaging conditions (i.e. 196 nm x 196 nm, 512 x 512 pixel) were also used for the homoligand particles. I cannot tell from the archive because the data are not there. (To be fair to FS, I did not previously ask him to provide these particular data).
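A quick numerical illustration of that last point (purely illustrative: a clean sinusoidal corrugation standing in for the real surface). Sample a 0.5 nm period with 0.38 nm pixels and the strongest peak in the spectrum lands at an aliased, i.e. wrong, spacing; sample the same corrugation at 0.10 nm per pixel and the correct 0.5 nm period is recovered.

```python
import numpy as np

def apparent_period(true_period_nm, pixel_size_nm, length_nm=200.0):
    """Sample a sinusoidal corrugation and return the period implied by the
    strongest peak in its power spectrum."""
    n = int(length_nm / pixel_size_nm)
    x = np.arange(n) * pixel_size_nm
    profile = np.sin(2 * np.pi * x / true_period_nm)
    spectrum = np.abs(np.fft.rfft(profile)) ** 2
    freqs = np.fft.rfftfreq(n, d=pixel_size_nm)
    peak = np.argmax(spectrum[1:]) + 1          # ignore the DC component
    return 1.0 / freqs[peak]

# 0.5 nm features sampled with 0.38 nm pixels (Nyquist requires at most 0.25 nm):
print(f"0.38 nm pixels: apparent period = {apparent_period(0.5, 0.38):.2f} nm")
# The same features sampled finely enough:
print(f"0.10 nm pixels: apparent period = {apparent_period(0.5, 0.10):.2f} nm")
```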

Compare and contrast

The contrast of the images in the original Nature Materials 2004 paper is quite saturated for some reason. The data archive allows us to look at the original image before contrast adjustment. This is illuminating…

Comparison-NatMat-Fig1a-vs-raw-data

The box in the image on the right delineates the area used in Fig. 1 of the Jackson et al. Nature Materials paper (which is shown on the left). As was pointed out to me by a researcher in the group here, what is intriguing is that the ripples extend beyond the edges of the particles. An explanation of this was offered by FS on the basis that the ripples arise from particles in a layer underneath the “brighter” particles seen in the image. That’s certainly one explanation. There are others that spring perhaps more readily to mind…