
Probes, Patterns, and (nano)Particles


This is a guest post by Philip Moriarty, Professor of Physics at the University of Nottingham (and blogger).

“We shape our tools, and thereafter our tools shape us.”

Marshall McLuhan (1911-1980)

My previous posts for Raphael’s blog have focussed on critiquing poor methodology and over-enthusiastic data interpretation when it comes to imaging the surface structure of functionalised nanoparticles. This time round, however, I’m in the much happier position of being able to highlight an example of good practice in resolving (sub-)molecular structure where the authors have carefully and systematically used scanning probe microscopy (SPM), alongside image recognition techniques, to determine the molecular termination of Ag nanoparticles.

For those unfamiliar with SPM, the concept underpinning the operation of the technique is relatively straight-forward. (The experimental implementation rather less so…) Unlike a conventional microscope, there are no lenses, no mirrors, indeed, no optics of any sort [1]. Instead, an atomically or molecularly sharp probe is scanned back and forth across a sample surface (which is preferably atomically flat), interacting with the atoms and molecules below. The probe-sample interaction can arise from the formation of a chemical bond between the atom terminating the probe and its counterpart on the sample surface, or an electrostatic or magnetic force, or dispersion (van der Waals) forces, or, as in scanning tunnelling microscopy (STM), the quantum mechanical tunnelling of electrons. Or, as is generally the case, a combination of a variety of those interactions. (And that’s certainly not an exhaustive list.)

Here’s an example of an STM in action, filmed in our lab at Nottingham for Brady Haran’s Sixty Symbols channel a few years back…

Scanning probe microscopy is my first love in research. The technique’s ability to image and manipulate matter at the single atom/molecule level (and now with individual chemical bond precision) is seen by many as representing the ‘genesis’ of nanoscience and nanotechnology back in the early eighties. But with all of that power to probe the nanoscopic, molecular, and quantum regimes come tremendous pitfalls. It is very easy to acquire artefact-ridden images that look convincing to a scientist with little or no SPM experience but that instead arise from a number of common failings in setting up the instrument, from noise sources, or from a hasty or poorly informed choice of imaging parameters. What’s worse is that even relatively seasoned SPM practitioners (including yours truly) can often be fooled. With SPM, it can look like a duck, waddle like a duck, and quack like a duck. But it can too often be a goose…

That’s why I was delighted when Raphael forwarded me a link to “Real-space imaging with pattern recognition of a ligand-protected Ag374 nanocluster at sub-molecular resolution”, a paper published a few months ago by Qin Zhou and colleagues at Xiamen University (China), the Chinese Academy of Sciences, Dalian (China), the University of Jyväskylä (Finland), and the Southern University of Science and Technology, Guangdong (China). The authors have convincingly imaged the structure of the layer of thiol molecules (specifically, tert-butyl benzene thiol) terminating 5 nm diameter silver nanoparticles.

What distinguishes this work from the stripy nanoparticle oeuvre that has been discussed and dissected at length here at Raphael’s blog (and elsewhere) is the degree of care taken by the authors and, importantly, their focus on image reproducibility. Instead of using offline zooms to select individual particles post hoc for analysis (a significant issue with the ‘stripy’ nanoparticle work), Zhou et al. have zoomed in on individual particles in real time and have made certain that the features they see are stable and reproducible from image to image. The images below are taken from the supplementary information for their paper and show the same nanoparticle imaged four times over, with negligible changes in the sub-particle structure from image to image.

[Figure: the same nanoparticle imaged four consecutive times, from the supplementary information of Zhou et al.]

This is SPM 101. Actually, it’s Experimental Science 101. If features are not repeatable — or, worse, disappear when a number of consecutive images/spectra are averaged – then we should not make inflated claims (or, indeed, any claims at all) on the basis of a single measurement. Moreover, the data are free of the type of feedback artefacts that plagued the ‘classic’ stripy nanoparticle images and Zhou et al. have worked hard to ensure that the influence of the tip was kept to a minimum.
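
By way of illustration of that "Experimental Science 101" point, here is a toy Python sketch (nothing to do with Zhou et al.'s actual analysis) of the kind of repeatability check involved: correlate each scan of an area with the average of the other scans of the same area. A genuine, stable feature survives the averaging; uncorrelated noise does not.

```python
import numpy as np

def repeatability_check(frames):
    """Correlate each scan with the average of the *other* scans of the same
    area: a genuine, stable feature gives consistently positive correlations,
    while uncorrelated noise averages away and gives values near zero."""
    frames = np.asarray(frames, dtype=float)
    corrs = []
    for i, f in enumerate(frames):
        others = np.delete(frames, i, axis=0).mean(axis=0)
        corrs.append(np.corrcoef(f.ravel(), others.ravel())[0, 1])
    return corrs

# Toy data: a stable 'sub-particle' feature plus noise, imaged four times,
# versus four scans of pure noise with nothing there at all.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:64, 0:64]
feature = np.exp(-((x - 32)**2 + (y - 32)**2) / 200.0)
real = [feature + 0.3 * rng.standard_normal((64, 64)) for _ in range(4)]
junk = [0.3 * rng.standard_normal((64, 64)) for _ in range(4)]

print([round(c, 2) for c in repeatability_check(real)])   # consistently positive (about 0.5 here)
print([round(c, 2) for c in repeatability_check(junk)])   # scattered around zero: nothing reproducible
```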

Given the complexity of the tip-sample interactions, however, I don’t quite share the authors’ confidence in the Tersoff-Hamann approach they use for STM image simulation [2]. I’m also not entirely convinced by their comparison with images of isolated molecular adsorption on single crystal (i.e. planar) gold surfaces because of exactly the convolution effects they point towards elsewhere in their paper. But these are relatively minor points. The imaging and associated analysis are carried out to a very high standard, and their (sub)molecular resolution images are compelling.

As Zhou et al. point out in their paper, STM (or atomic force microscopy) of nanoparticles, as compared to imaging a single crystal metal, semiconductor, or insulator surface, is not at all easy due to the challenging non-planar topography. A number of years back we worked with Marie-Paule Pileni’s group on dynamic force microscopy imaging (and force-distance analysis) of dodecanethiol-passivated Au nanoparticles. We found somewhat similar image instabilities to those observed by Zhou et al…

[Figure: STM (A-C) and constant-height AFM (D-F) images of thiol-passivated Au nanoparticles]

A-C above are STM data, while D-F are constant height atomic force microscope images [3], of thiol-passivated nanoparticles (synthesised by Nicolas Goubet of Pileni’s group) and acquired at 78 K. (Zhou et al. similarly acquired data at 77K but they also went down to liquid helium temperatures). Note that while we could acquire sub-nanoparticle resolution in D-F (which is a sequence of images where the tip height is systematically lowered), the images lacked the impressive reproducibility achieved by Zhou et al. In fact, we found that even though we were ostensibly in scanning tunnelling microscopy mode for images such as those shown in A-C (and thus, supposedly, not in direct contact with the nanoparticle), the tip was actually penetrating into the terminating molecular layer, as revealed by force-distance spectroscopy in atomic force microscopy mode.

The other exciting aspect of Zhou et al.’s paper is that they use pattern recognition to ‘cross-correlate’ experimental and simulated data. There is an increasingly exciting overlap between computer science and scanning probe microscopy in the area of image classification/recognition, and Zhou and co-workers have helped nudge nanoscience a little further in this direction. Here at Nottingham we’re particularly keen on the machine learning/AI-scanning probe interface, as discussed in a recent Computerphile video…
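
To give a flavour of the cross-correlation idea mentioned above (this is emphatically not the authors' pipeline, just a minimal sketch using scikit-image, with made-up array names), pattern recognition in its simplest form boils down to sliding a simulated motif across an experimental image and computing a normalised cross-correlation at each position: values close to 1 flag a convincing match.

```python
import numpy as np
from skimage.feature import match_template

# Hypothetical stand-ins: 'experiment' for a measured STM image and
# 'simulated_motif' for a small simulated (e.g. Tersoff-Hamann-style) patch.
rng = np.random.default_rng(1)
experiment = rng.standard_normal((256, 256))
yy, xx = np.mgrid[0:16, 0:16]
simulated_motif = np.exp(-((xx - 8)**2 + (yy - 8)**2) / 10.0)
experiment[100:116, 80:96] += 10.0 * simulated_motif   # plant the motif so there is something to find

# Normalised cross-correlation of the motif with every position in the image.
ncc = match_template(experiment, simulated_motif, pad_input=True)
row, col = np.unravel_index(np.argmax(ncc), ncc.shape)
print(f"best match at ({row}, {col}) with score {ncc.max():.2f}")   # roughly (108, 88), i.e. where we planted it
```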

Given the number of posts over the years at Raphael’s blog regarding a lack of rigour in scanning probe work, I am pleased, and very grateful, to have been invited to write this post to redress the balance just a little. SPM, when applied correctly, is an exceptionally powerful technique. It’s a cornerstone of nanoscience, and the only tool we have that allows both real space imaging and controlled modification right down to the single chemical bond limit. But every tool has its limitations. And the tool shouldn’t be held responsible if it’s misapplied…

[1] Unless we’re talking about scanning near field optical microscopy (SNOM). That’s a whole new universe of experimental pain…

[2] This is the “zeroth” order approach to simulating STM images from a calculated density of states. It’s a good starting point (and for complicated systems like a thiol-terminated Ag374 particle probably also the end point due to computational resource limitations) but it is certainly a major approximation.

[3] Technically, dynamic force microscopy using a qPlus sensor. See this Sixty Symbols video for more information about this technique.

 

Whither stripes?


This is a guest post by  Philip Moriarty, Professor of Physics at the University of Nottingham

A few days ago, Raphael highlighted the kerfuffle that our paper, Critical assessment of the evidence for striped nanoparticles, has generated over at PubPeer and elsewhere on the internet. (This excellent post from Neuroskeptic is particularly worth reading – more on this below). At one point the intense interest in the paper and associated comments thread ‘broke’ PubPeer — the site had difficulty dealing with the traffic, leading to this alert:

At the time of writing, there are seventy-eight comments on the paper, quite a few of which are rather technical and dig down into the minutiae of the many flaws in the striped nanoparticle ‘oeuvre’ of Francesco Stellacci and co-workers.  It is, however, now getting very difficult to follow the thread over at PubPeer, partly because of the myriad comments labelled “Unregistered Submission” – it has been suggested that PubPeer consider modifying their comment labelling system – but mostly because of the rather circular nature of the arguments and the inability to incorporate figures/images directly into a comments thread to facilitate discussion and explanation. The ease of incorporating images, figures, and, indeed, video in a blog post means that a WordPress site such as Raphael’s is a rather more attractive proposition when making particular scientific/technical points about Stellacci et al.’s data acquisition/analysis protocols. That’s why the following discussion is posted here, rather than at PubPeer.

Unwarranted assumptions about unReg?

Julian Stirling, the lead author of the “Critical assessment…” paper, and I have spent a considerable amount of time and effort over the last week addressing the comments of one particular “Unregistered Submission” at PubPeer who, although categorically stating right from the off that (s)he was in no way connected with Stellacci and co-workers, nonetheless has remarkably in-depth knowledge of a number of key papers (and their associated supplementary information) from the Stellacci group.

It is important to note that although our critique of Stellacci et al.’s data has, to the best of our knowledge, attracted the greatest number of comments for any paper at PubPeer to date, this is not indicative of widespread debate about our criticism of the striped nanoparticle papers (which now number close to thirty). Instead, the majority of comments at PubPeer are very supportive of the arguments in our “Critical assessment…” paper. It is only a particular commenter, who does not wish to log into the PubPeer site and is therefore labelled “Unregistered Submission” every time they post (I’ll call them unReg from now on), that is challenging our critique.

We have dealt repeatedly, and forensically, with a series of comments from unReg over at PubPeer. However, although unReg has made a couple of extremely important admissions (which I’ll come to below), they continue to argue, on entirely unphysical grounds, that the stripes observed by Stellacci et al. in many cases are not the result of artefacts and improper data acquisition/analysis protocols.

unReg’s persistence in attempting to explain away artefacts could be due to a couple of things: (i) we are being subjected to a debating approach somewhat akin to the Gish gallop. (My sincere thanks to a colleague – not at Nottingham, nor, indeed, in the UK – who has been following the thread at PubPeer and suggested this to us by e-mail. Julian also recently raised it in a comment elsewhere at Raphael’s blog which is well worth reading);  and/or (ii) our assumption throughout that unReg is familiar with the basic ideas and protocols of experimental science, at least at undergraduate level, may be wrong.

Because we have no idea of unReg’s scientific background – despite a couple of commenters at PubPeer explicitly asking unReg to clarify this point – we assumed that they had a reasonable understanding of basic aspects of experimental physics such as noise reduction, treatment of experimental uncertainties, accuracy vs precision etc… But Julian and I realised yesterday afternoon that perhaps the reason we and unReg keep ‘speaking past’ each other is because unReg may well not have a very strong or extensive background in experimental science.  Their suggestion at one point in the PubPeer comments thread that “the absence of evidence is not evidence of absence” is a rather remarkable statement for an experimentalist to make. We therefore suspect that the central reason why unReg is not following our arguments is their lack of experience with, and absence of training in, basic experimental science.

As such, I thought it might be a useful exercise – both for unReg and any students who might be following the debate – to adopt a slightly more tutorial approach in the discussion of the issues with the stripy nanoparticle data so as to complement the very technical discussion given in our paper and at PubPeer. Let’s start by looking at a selection of stripy nanoparticle images ‘through the ages’ (well, over the last decade or so).

The Evolution of Stripes: From feedback loop ringing to CSI image analysis protocols

The images labelled 1 – 12 below represent the majority of the types of striped nanoparticle image published to date. (I had hoped to put together a 4 x 4 or 4 x 5 matrix of images but, due to image re-use throughout Stellacci et al.’s work, there aren’t enough separate papers to do that).

[Figure: "Stripes across the ages", a montage of published striped nanoparticle images, numbered 1 to 12]

Putting the images side by side like this is very instructive. Note the distinct variation in the ‘visibility’ of the stripes. Stellacci and co-workers will claim that this is because the terminating ligands are not the same on every particle. That’s certainly one interpretation. Note, however, that images 1, 2, 4, and 11 each have the same type of octanethiol/mercaptopropionic acid (2:1) termination and that we have shown, through an analysis of the raw data, that images #1 and #11 result from a scanning tunnelling microscopy artefact known as feedback loop ringing (see p.73 of this scanning probe microscopy manual).

A key question which has been raised repeatedly (see, for example, Peer 7’s comment in this sub-thread) is just why Stellacci et al., or any other group (including those selected by Francesco Stellacci to independently verify his results), has not reproduced the type of exceptionally high contrast images of stripes seen in images #1,#2,#3, and #11 in any of the studies carried out in 2013. This question still hangs in the air at PubPeer…

Moreover, the inclusion of Image #5 above is not a mistake on my part – I’ll leave it to the reader to identify just where the stripes are supposed to lie in this image. Images #10 and #12 similarly represent a challenge for the eagle-eyed reader, while Image #4 warrants its own extended discussion below because it forms a cornerstone of unReg’s argument that the stripes are real. Far from supporting the stripes hypothesis, however, Stellacci et al’s own analysis of Image #4 contradicts their previous measurements and arguments (see “Fourier analysis or should we use a ruler instead?” below).

What is exceptionally important to note is that, as we show in considerable detail in “Critical assessment…”, a variety of artefacts and improper data acquisition/analysis protocols – and not just feedback loop ringing – are responsible for the variety of striped images seen above. For those with no experience in scanning probe microscopy, this may seem like a remarkable claim at first glance, particularly given that those striped nanoparticle images have led to over thirty papers in some of the most prestigious journals in nanoscience (and, more broadly, in science in general). However, we justify each of our claims in extensive detail in Stirling et al. The key effects are as follows:

– Feedback loop ringing (see, for example, Fig. 3 of “Critical assessment…”. Note that the nanoparticles in that figure are entirely ligand-free).

– The “CSI” effect. We know from access to (some of) the raw data that a very common approach to STM imaging in the Stellacci group (up until ~ 2012) was to image very large areas with relatively low pixel densities and then rely on offline zooming into areas no more than a few tens of pixels across to “resolve” stripes. This ‘CSI’ approach to STM is unheard of in the scanning probe community because if we want higher resolution images, we simply reduce the scan area. The Stellacci et al. method can be used to generate stripes on entirely unfunctionalised particles, as shown here.

– Observer bias. The eye is remarkably adept at picking patterns out of uncorrelated noise. Fig. 9 in Stirling et al. demonstrates this effect for ‘striped’ nanoparticles. I have referred to this post from my erstwhile colleague Peter Coles repeatedly throughout the debate at PubPeer. I recommend that anyone involved in image interpretation read Coles’ post. (A toy illustration of both the ‘CSI’ zoom and this pattern-from-noise effect follows this list.)
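
Here is that toy Python sketch (my own, not anything from the papers under discussion) of how the last two effects conspire: crop a tiny patch out of an image of pure, uncorrelated noise, blow it up with interpolation, smooth it, and quasi-periodic blobs duly appear for the eye to join into 'stripes'.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

rng = np.random.default_rng(2)
big_scan = rng.standard_normal((512, 512))   # stand-in for a large-area, low-pixel-density scan of *noise*

# 'CSI' step: an offline zoom into a patch only a few tens of pixels across...
patch = big_scan[200:220, 200:220]           # 20 x 20 pixels

# ...blown up by interpolation, which has to invent all the intermediate pixels.
zoomed = zoom(patch, 10, order=3)            # now 200 x 200

# A low-pass filter does much the same job: it washes out the high spatial
# frequencies and leaves smooth, quasi-periodic undulations in what was pure noise.
smoothed = gaussian_filter(zoomed, sigma=8)

# Plot 'smoothed' (e.g. with matplotlib's imshow) and the eye will happily
# pick out ridges and 'stripes' that were never in the data.
print(smoothed.shape)
```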

Fourier analysis or should we use a ruler instead?

I love Fourier analysis. Indeed, about the only ‘Eureka!’ moment I had as an undergraduate was when I realised that the Heisenberg uncertainty principle is nothing more than a Fourier transform. (Those readers who are unfamiliar with Fourier analysis and might like a  brief overview could perhaps refer to this Sixty Symbols video, or, for much more (mathematical) detail, this set of notes I wrote for an undergraduate module a number of years ago).

In Critical assessment… we show, via a Fourier approach, that the measurements of stripe spacing in papers published by Stellacci et al. in the period from 2006 to 2009 – and subsequently used to claim that the stripes do not arise from feedback loop ringing – were comprehensively mis-estimated. We are confident in our results here because of a clear peak in our Fourier space data (see Figures S1 and S2 of the paper).

Fabio Biscarini and co-workers, in collaboration with Stellacci et al., have attempted to use Fourier analysis to calculate the ‘periodicity’ of the nanoparticle stripes. They take the Fourier transform of the raw images, averaged along the slow scan direction. No peak is visible in this Fourier space data, even when plotting on a logarithmic scale in an attempt to increase contrast/visibility. Instead, the Fourier space data just shows a decay with a couple of plateaus in it. They claim – erroneously, for reasons we cover below – that the corners of the second plateau and the continuing decay (called a “shoulder” by Biscarini et al.) indicate the stripe spacing. To locate these shoulders they apply a fitting method.
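
For readers who want to see what a slow-scan-averaged power spectrum does and does not tell you, here is a minimal Python sketch (not Biscarini et al.'s fitting procedure, just my own toy): genuine stripes produce a clear peak at the stripe frequency, whereas uncorrelated speckle produces no peak at all, which is precisely the situation in which shoulders and multi-parameter fits end up being invoked.

```python
import numpy as np

def psd_slow_scan_average(image):
    """1D power spectral density of each fast-scan line, averaged along the slow scan direction."""
    lines = image - image.mean(axis=1, keepdims=True)
    return (np.abs(np.fft.rfft(lines, axis=1)) ** 2).mean(axis=0)

rng = np.random.default_rng(3)
n = 256                                               # pixels per scan line
x = np.arange(n)
striped = np.sin(2 * np.pi * x / 16)[None, :] + 0.5 * rng.standard_normal((n, n))
speckle = rng.standard_normal((n, n))                 # uncorrelated speckle: no stripes at all

for name, img in (("striped", striped), ("speckle", speckle)):
    psd = psd_slow_scan_average(img)
    print(name, int(np.argmax(psd[1:])) + 1)
# striped -> bin 16, a clear peak at the stripe frequency
# speckle -> an arbitrary bin; there is no reproducible peak in the spectrum
```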

We describe in detail in “Critical assessment…” that not only is the fitting strategy used to extract the spatial frequencies highly questionable – a seven free-parameter fit to selectively ‘edited’ data is always going to be somewhat lacking in credibility – but that the error bars on the spatial frequencies extracted are underestimated by a very large amount.

Moreover, Biscarini et al. claim the following in the conclusions of their paper:

The analysis of STM images has shown that mixed-ligand NPs exhibit a spatially correlated architecture with a periodicity of 1 nm that is independent of the imaging conditions and can be reproduced in four different laboratories using three different STM microscopes. This PSD [power spectral density; i.e. the modulus squared of the Fourier transform] analysis also shows…”

Note that the clear, and entirely misleading, implication here is that use of the power spectral density (PSD – a way of representing the Fourier space data) analysis employed by Biscarini et al. can identify “spatially correlated architecture”. Fig. 10 of our “Critical assessment…” paper demonstrates that this is not at all the case: the shoulders can equally well arise from random speckling.

This unconventional approach to Fourier analysis is not even internally consistent with measurements of stripe spacings as identified by Stellacci and co-workers. Anyone can show this using a pen, a ruler, and a print-out of the images of stripes shown in Fig. 3 of Ong et al. It’s essential to note that Ong et al. claim that they measure a spacing of 1.2 nm between the ‘stripes’; this 1.2 nm figure is very important in terms of consistency with the data in earlier papers. Indeed, over at PubPeer, unReg uses it as a central argument of the case for stripes:

“… the extracted characteristic length from the respective fittings results in a characteristic length for the stripes of 1.22 +/- 0.08. This is close to the 1.06 +/-0.13 length for the stripes of the images in 2004 (Figure 3a in Biscarini et al.). Instead, for the homoligand particles, the number is much lower: 0.76 +/- 0.5 [(sic). unReg means ‘+/- 0.05’ here. The unit is nm] , as expected. So the characteristic lengths of the high resolution striped nanoparticles of 2013 and the low resolution striped nanoparticles of 2004 match within statistical error, ***which is strong evidence that the stripe features are real.***”

Notwithstanding the issue that the PSD analysis is entirely insensitive to the morphology of the ligands (i.e. it cannot distinguish between stripes and a random morphology), and can be abused to give a wide range of results, there’s a rather simpler and even more damaging inconsistency here.

A number of researchers in the group here at Nottingham have repeated the ‘analysis’ in Ong et al. Take a look at the figure below. (Thanks to Adam Sweetman for putting this figure together). We have repeated the measurements of the stripe spacing for Fig. 3 of Ong et al. and we consistently find that, instead of a spacing of 1.2 nm, the separation of the ‘stripes’ using the arrows placed on the image by Ong et al. themselves has a mean value of 1.6 nm (± 0.1 nm). What is also interesting to note is that the placement of the arrows “to guide the eye” does not particularly agree with a placement based on the “centre of mass” of the features identified as stripes. In that case, the separation is far from regular.

We would ask that readers of Raphael’s blog – if you’ve got this far into this incredibly long post! – repeat the measurement to convince yourself that the quoted 1.2 nm value does not stand up to scrutiny.

[Figure: measuring the 'stripe' spacing in Fig. 3 of Ong et al. with a ruler]

So, not only does the PSD analysis carried out by Biscarini et al. not recover the real space value for the stripe spacing (leaving aside the question of just how those stripes were identified), but there is a significant difference between the stripe spacing claimed in the 2004 Nature Materials paper and that in the 2013 papers. Both of these points severely undermine the case for stripy nanoparticles. Moreover, the inability of Ong et al. to report the correct spacing for the stripes from simple measurements of their own STM images raises significant questions about the reliability of the other data in their paper.

As the title of this post says, whither stripes?

Reducing noise pollution

A very common technique in experimental science to increase the signal-to-noise ratio (SNR) is signal averaging. I have spent many long hours at synchrotron beamlines while we repeatedly scanned the same energy window, watching as a peak gradually appeared from out of the noise. But averaging is of course not restricted to synchrotron spectroscopy – practically every area of science, including SPM, can benefit from simply summing a signal over time.

A particularly frustrating aspect of the discussion at PubPeer, however, has been unReg’s continued assertion that even though summing of consecutive images of the same area gives rise to completely smooth particles (see Fig. 5(k) of “Critical assessment…”), this does not mean that there is no signal from stripes present in the scans. This claim has puzzled not just Julian and myself, but a number of other commenters at PubPeer, including Peer 7:

If a feature can not be reproduced in two successive equivalent experiments then the feature does not exist because the experiment is not reproducible. Otherwise how do you chose between two experiments with one showing the feature and the other not showing it? Which is the correct one ? Please explain to me.

Furthermore, if a too high noise is the cause of the lack of reproducibility than the signal to noise ratio is too low and once again the experiment has to be discarded and/or improved to increase this S/N. Repeating experiments is a good way to do this and if the signal does not come out of the noise when the number of experiment increases than it does not exist.

This is Experimental Science 101 and may (should) seem obvious to everyone here…”

I’ve put together a short video of a LabVIEW demo I wrote for my first year undergrad tutees to show how effective signal averaging can be. I thought it might help to clear up any misconceptions…
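
For anyone who would rather tinker than watch, the same point can be made in a few lines of Python (a crude stand-in for the LabVIEW demo, not the demo itself): bury a spectral peak in noise, average an increasing number of scans, and watch the residual baseline noise fall away roughly as one over the square root of the number of scans.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(-5, 5, 500)
peak = np.exp(-x**2)              # the 'true' spectral peak, height 1
noise_level = 5.0                 # a single scan buries it completely

def baseline_noise(n_scans):
    scans = peak + noise_level * rng.standard_normal((n_scans, x.size))
    averaged = scans.mean(axis=0)
    return averaged[x < -3].std()  # residual noise in a flat region away from the peak

for n in (1, 4, 16, 64, 256):
    print(n, round(baseline_noise(n), 2))
# The residual noise falls roughly as 1/sqrt(n); by a few hundred scans the
# height-1 peak stands well clear of the baseline. A feature that instead
# vanishes on averaging was never a feature in the first place.
```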

The Radon test

There is yet another problem, however, with the data from Ong et al. which we analysed in the previous section. This one is equally fundamental. While Ong et al. have drawn arrows to “guide the eye” to features they identify as stripes (and we’ve followed their guidelines when attempting to identify those ‘stripes’ ourselves), those stripes really do not stand up tall and proud like their counterparts ten years ago (compare images #1 and #4, or compare #4 and #11 in that montage above).

Julian and I have stressed to unReg a number of times that it is not enough to “eyeball” images and pull out what you think are patterns. Particularly when the images are as noisy as those in Stellacci et al.’s recent papers, it is essential to try to adopt a more quantitative, or at least less subjective, approach. In principle, Fourier transforms should be able to help with this, but only if they are applied robustly. If spacings identified in real space (as measured using a pen and ruler on a printout of an image) don’t agree with the spacings measured by Fourier analysis – as for the data of Ong et al. discussed above – then this really should sound warning bells.

One method of improving objectivity in stripe detection is to use a Radon transform (which for reasons I won’t go into here – but Julian may well in a future post! – is closely related to the Fourier transform). Without swamping you in mathematical detail, the Radon transform projects the intensity of an image along lines at each angular orientation. (It’s important in, for example, computerised tomography). In a nutshell, lines in an image will show up as peaks in the Radon transform.
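
A minimal Python sketch (using scikit-image; an illustration of the principle only, not Julian's analysis code) makes the point: for a disc decorated with genuine stripes, the variance of the Radon projections is sharply peaked at one angle, whereas for the same disc filled with noise no angle stands out.

```python
import numpy as np
from skimage.transform import radon

def orientation_strength(image):
    """Variance of each Radon projection: a line-like pattern makes one angle's
    projection far 'peakier' than the rest."""
    theta = np.arange(180.0)
    sinogram = radon(image, theta=theta, circle=True)
    strength = sinogram.var(axis=0)
    return theta[np.argmax(strength)], strength.max() / strength.mean()

rng = np.random.default_rng(5)
size = 128
y, x = np.mgrid[0:size, 0:size] - size // 2
disc = (x**2 + y**2) < (size // 2 - 4)**2              # a toy 'particle'

angle = np.radians(60)                                 # stripes at a known orientation
striped = np.sin(2 * np.pi * (x * np.cos(angle) + y * np.sin(angle)) / 10.0) * disc
noisy = rng.standard_normal((size, size)) * disc       # same disc, no stripes

print(orientation_strength(striped))   # one dominant angle; strength ratio well above 1
print(orientation_strength(noisy))     # no angle stands out; the ratio stays close to 1
```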

So what does it look like in practice, and when applied to stripy nanoparticle images? (All of the analysis and coding associated with the discussion below are courtesy of Julian yet again). Well, let’s start with a simulated stripy nanoparticle image where the stripes are clearly visible – that’s shown on the left below and its Radon transform is on the right.

[Figure: a simulated stripy nanoparticle image (left) and its Radon transform (right)]

Note the series of peaks appearing at an angle of ~ 160°. This corresponds to the angular orientation of the stripes. The Radon transform does a good job of detecting the presence of stripes and, moreover, objectively yields the angular orientation of the stripes.

What happens when we feed the purportedly striped image from Ong et al. (i.e. Image #4 in the montage) into the Radon transform? The data are below. Note the absence of any peaks at angles anywhere near the vicinity of the angular orientation which Ong et al. assigned to the stripes (i.e. ~ 60°; see image on lower left below)…

[Figure: Image #4 from Ong et al. (lower left) and its Radon transform]

Hyperempiricism

If anyone’s still left reading out there at this point, I’d like to close this exceptionally lengthy post by quoting from Neuroskeptic’s fascinating and extremely important “Science is Interpretation” piece over at the Discover magazine blogging site:

The idea that new science requires new data might be called hyperempiricism. This is a popular stance among journal editors (perhaps because it makes copyright disputes less likely). Hyperempiricism also appeals to scientists when their work is being critiqued; it allows them to say to critics, “go away until you get some data of your own”, even when the dispute is not about the data, but about how it should be interpreted.”

Meanwhile, back at PubPeer, unReg has suggested that we should “… go back to the lab and do more work”.

 

Where are the stripes?


This is a guest post by  Quanmin Guo, Senior Lecturer in the School of Physics and Astronomy at the University of Birmingham

The debate around stripy nano-particles was brought to my attention fairly recently. Since I have been working with alkanethiol monolayers on Au(111) for a number of years, I naturally became interested in the arguments put forward by both sides. Incidentally, I imaged thiol-passivated gold nanoparticles in 1999. I have to say that the quality of our images was poor (a few images can be found in PCCP Vol. 4, 2002), certainly not better than those of Stellacci. The poor quality of the images was partly due to the equipment we used being noisier than we would have liked, and partly due to the fact that STM does not cope well with non-flat surfaces. While the debate at the moment focuses on whether or not stripes have been observed on gold nano-particles, I would like to address another relevant question first: are we expected to see stripes at all? The answer is yes, if the nanoparticle presents a (111) facet to the STM tip. The figure (Surf. Sci. 2011, Vol. 605, 1016) below shows an STM image from an octanethiol-covered Au(111) surface.

[Figure: STM image of an octanethiol-covered Au(111) surface, from Surf. Sci. 605, 1016 (2011)]

In this image you can see a dense phase and a less dense “striped phase”. Stripes within alkanethiol SAMs on Au(111) corresponding to ≤ 80% of saturation coverage are frequently observed with single-component thiols. Also in the image, there are three islands. These islands are formed by post-deposition of Au onto the SAM. The Au atoms dive through the SAM and form islands one atomic layer tall. So the newly formed Au islands are “inserted” between the initial gold substrate and the SAM. If we call these islands “nano-particles”, there you are: we have striped nano-particles. The formation of stripes is always linked to a less-than-saturation coverage. At saturation coverage, the Au islands are capped by a layer of “close-packed” molecules. The nanoparticles used by Stellacci are not single-layer gold islands; they are more 3-D like. If you deposit such nano-particles onto a gold substrate, they may become flattened over time due to atomic diffusion. If the particles assume a plate-like structure, and at the same time allow some thiolate to migrate from the particle to the flat Au substrate, then stripes may appear on top of the plate.

However, the stripes reported by Stellacci seem to come from a different origin: phase separation of hydrophilic thiol from hydrophobic thiol, for example. It is my personal view that, because the particle is so small, phase separation would occur such that each facet is covered by a single type of thiol. Under non-equilibrium conditions, there might be some random mixing. Along the stripes shown in the above figure, the distance between the dots is 0.5 nm, which is also the distance reported by Stellacci. From the STM images of Stellacci and colleagues, one really has great difficulty in seeing any stripe features. If you examine the STM images in the figure above, you can see that the stripy features are not very regular and that between the rows there are disordered molecules. Nevertheless, the observation of stripes in this case is beyond any doubt, and there is no need to perform power spectrum analysis.

I am not trying to criticize anybody here, but in their recent Langmuir paper, Stellacci et al. included an STM image of a thiol-covered flat Au surface (Fig. 4). Even for a flat sample, their image showed no molecular resolution. This worries me. It is understandable that Stellacci is eager to show some new results to make his initial observation credible. I think he needs to produce better STM images. At this point, I would like to say that the theoretical modeling around the stripy particles is immature. The long-standing view that alkanethiol SAMs consist of thiolate (-SR) directly attached to a Au surface via the hollow, bridge or bridge/hollow site is on the way out. Thiolates are paired on the surface in the form of Au-adatom-dithiolate (RS-Au-SR). This linkage/pairing of the thiolates has an important influence on the structure of the SAMs.

The Emperor’s New Stripes


This is a guest post by  Philip Moriarty, Professor of Physics at the University of Nottingham

Since the publication of the ACS Nano and Langmuir papers to which Mathias Brust refers in the previous post, I have tried not to get drawn into posting comments on the extent to which the data reported in those papers ‘vindicates’ previous work on nanoparticle stripes by Francesco Stellacci’s group. (I did, however, post some criticism at ChemBar, which I note was subsequently uploaded, along with comments from Julian Stirling, at PubPeer).  This is because we are working on a series of experimental measurements and re-analyses of the evidence for stripes to date (including the results published in the ACS Nano and Langmuir papers) and would very much like to submit this work before the end of the year.

Mathias’ post, however, has prompted me to add a few comments in the blogosphere, courtesy of Rapha-z-Lab.

It is quite remarkable that the ACS Nano and Langmuir papers are seen by some to provide a vindication of previous work by the Stellacci group on stripes. I increasingly feel as if we’re participating in some strange new nanoscale ‘reimagining’ of The Emperor’s New Clothes! Mathias clearly and correctly points out that the ACS Nano and Langmuir papers published earlier this year provide no justification for the earlier work on stripes. Let’s compare and contrast an image from the seminal 2004 Nature Materials paper with Fig. S7 from the paper published in ACS Nano earlier this year…

[Figure: an image from the 2004 Nature Materials paper (left) alongside Fig. S7 from the ACS Nano paper (right)]

Note that the image on the right above is described in the ACS Nano paper as “reproducing” high resolution imaging of stripes acquired in other labs. What is particularly important about the image on the right is that it was acquired under ultrahigh vacuum conditions and at a temperature of 77K by Christoph Renner’s group at Geneva. UHV and 77 K operation should give rise to extremely good instrumental stability and provide exceptionally clear images of stripes. Moreover, Renner is a talented and highly experienced probe microscopist. And yet, nothing even vaguely resembling the types of stripes seen in the image on the left is observed in the STM data. It’s also worth noting that the image from Renner’s group features in the Supplementary Information and not the main paper.

Equally remarkable is that the control sample discussed in the ACS Nano paper (NP3) shows features which are, if anything, much more like stripes than the so-called stripy particles. But the authors don’t mention this. I’ve included a comparison below of Fig. 5(c) from the ACS Nano paper with a contrast-enhanced version. I’ll leave it to the reader to make up their own mind as to whether or not there is greater evidence for stripe formation in the image shown on the right above, or in the image shown on the right below…

[Figure: Fig. 5(c) from the ACS Nano paper (left) and a contrast-enhanced version (right)]

Finally, the authors neglect any consideration at all of convolution between the tip structure and the sample structure. One can’t just assume that the tip structure plays no role in the image formation mechanism – scanning probe microscopy is called scanning probe microscopy for a reason. This is particularly the case when the features being imaged are likely to have a comparable radius of curvature to the tip.
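
A useful zeroth-order way to picture tip convolution (a toy sketch of my own, not an analysis from any of the papers discussed here) is to treat the recorded topography as a morphological dilation of the true surface by the tip shape. When the tip radius is comparable to that of the feature, the feature comes out substantially broadened:

```python
import numpy as np
from scipy.ndimage import grey_dilation

px = 0.1                                   # nm per pixel (illustrative value)
n = 200
x = (np.arange(n) - n // 2) * px
X, Y = np.meshgrid(x, x)

r_feature = 2.5                            # nm: a hemispherical bump on a flat substrate
surface = np.sqrt(np.clip(r_feature**2 - X**2 - Y**2, 0.0, None))

r_tip = 2.0                                # nm: a tip of comparable radius of curvature
m = int(2 * r_tip / px) + 1
t = (np.arange(m) - m // 2) * px
TX, TY = np.meshgrid(t, t)
tip = np.sqrt(np.clip(r_tip**2 - TX**2 - TY**2, 0.0, None)) - r_tip   # apex at 0, shank below

# Zeroth-order imaging model: the 'image' is the dilation of the surface by the tip.
image = grey_dilation(surface, structure=tip)

def fwhm(profile):
    """Full width at half maximum of a 1D line profile, in nm."""
    return px * np.count_nonzero(profile > profile.max() / 2)

print(fwhm(surface[n // 2]), fwhm(image[n // 2]))   # ~4.3 nm vs ~6.2 nm: noticeably broadened
```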

I could spend quite a considerable amount of time discussing other deficiencies in the analyses in the Langmuir and ACS Nano papers but we’ll cover this at length in the paper we’re writing.

Browsing the archive


Update (11/06/2013): the link to the raw data provided by Francesco Stellacci does not work anymore; here is a mirror of the files that were released last month.

This is a guest post by  Philip Moriarty, Professor of Physics at the University of Nottingham

It is to Francesco Stellacci (FS)’s credit that he has now uploaded the data I requested some time ago. I appreciate that this will have been a time-consuming task – I sometimes struggle to find files I saved last week, let alone locate data from almost a decade ago! It’s just a shame that the provision of the data necessitated the involvement of the journal editors (and possibly required prompting from other sources).

It is also worth noting that, over the last week, FS has been very helpful in providing timely responses to my questions regarding the precise relationship of the data files in the archive to the figures published in the papers.

Unfortunately, the data in the archive leave an awful lot to be desired.

In the near future, we will write up a detailed analysis of the data in the archive, combining it with scanning probe data of nanoparticles we have acquired, to show that the conclusions reached by FS et al. on the basis of their STM data, and associated analyses, do not stand up to scrutiny. Like Raphaël, I do not agree with FS that the only appropriate forum for scientific debate is the primary literature. (I’ve rambled on in my usual loquacious style recently about the importance of embedding ‘Web 2.0’ debate within the literature). Nonetheless, given, for one, the extent to which the stripy nanoparticle papers have been cited, there is clearly significant scope in the primary literature for examining the reliability of FS’ experimental methodology and data analysis.

For now, I would simply like to highlight a number of the most important problems with the data in the archive.

The problems are multi-faceted and arise from a combination of: (i) feedback loop artifacts; (ii) image analysis based around exceptionally low pixel densities (and interpolation, applied apparently unknowingly, to achieve higher pixel densities); (iii) a highly selective choice/sampling of features for analysis in the STM images; and (iv) experimental uncertainties/error bars which are dramatically underestimated.

Predrag Djuranovic showed in 2005 how it was possible for stripes to appear on bare gold and ITO surfaces due to improper feedback loop settings. Julian Stirling, a final year PhD student in the Nanoscience Group here in Nottingham, has written a simulation code which shows that both stripes and “domains” can appear if care is not taken to choose the gains and scan speed appropriately. I’ll curtail my loquacity for now and let the images do the talking…

[Figure: simulated STM images showing how both stripes and 'domains' can arise from feedback loop artefacts]

It’s worth comparing these images against the 3D-rendered experimental data of Fig. 3 of Jackson et al., Nature Materials 2004:

[Figure: 3D-rendered experimental data from Fig. 3 of Jackson et al., Nature Materials 2004]

Much more information on the simulations will be provided in due course in the paper I mentioned above but I am sure that Julian will be more than happy to address questions in the comments section below.
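
In the meantime, here is a deliberately crude one-dimensional toy of my own (emphatically not Julian's simulation code): an integral feedback loop, with a few pixels of loop delay, tracking a single particle. With a sensible gain the recorded line simply follows the surface; crank the gain up and the loop overshoots and rings, decorating the trailing side of the particle with spurious oscillations of exactly the kind that end up looking like stripes.

```python
import numpy as np

def scan_line(gain, delay=3, n=600):
    """Toy STM scan line: an integral feedback loop (with 'delay' pixels of lag)
    tries to keep the tip tracking the surface as it crosses a 2 nm particle."""
    xs = np.linspace(0.0, 30.0, n)                         # nm along the fast scan direction
    surface = 2.0 * np.exp(-(xs - 10.0)**2 / 2.0)          # the particle
    z = np.full(n, surface[0])                             # recorded 'topography'
    for i in range(1, n):
        err = surface[i - delay] - z[i - delay] if i > delay else 0.0
        z[i] = z[i - 1] + gain * err                       # integral feedback step
    return xs, surface, z

xs, surface, gentle = scan_line(gain=0.10)                 # well-behaved loop
_, _, ringing = scan_line(gain=0.45)                       # gain cranked up

def spurious_bumps(trace, start=260):
    """Count local maxima after the particle: oscillations show up as extra bumps."""
    seg = trace[start:]
    return int(np.sum((seg[1:-1] > seg[:-2]) & (seg[1:-1] > seg[2:])))

print(spurious_bumps(gentle), spurious_bumps(ringing))
# The low-gain line decays smoothly back to the substrate (no extra bumps); the
# high-gain line keeps oscillating, i.e. it draws 'stripes' trailing the particle.
```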

Just in case anyone might think that generating stripes in a simulated STM doesn’t quite address the observation of stripes in an ‘ex silico’ environment, we have, of course, produced our own experimental images of stripy nanoparticles.

[Figure: (A) STM image of ligand-free Au nanoparticles with the feedback gain increased mid-scan; (B) low-pass-filtered zoom; (C) 3D rendering]

In (A) the feedback loop gain is increased dramatically at the scan line highlighted by the arrows. Stripes then appear in the upper 2/3 of the image. A low pass filter – which is, in essence, equivalent to an interpolation because it washes out high spatial frequencies – is then applied to a zoom of the image (shown in B), followed by a 3D rendering (C). The similarity with the Nature Materials figures above is striking.

The key aspect of the image above is that the nanoparticles do not have a ligand shell. These are the traditional citrate-stabilised colloidal Au nanoparticles known widely in the nanoscience community (and beyond). They were deposited onto a Au-on-mica sample from water. The stripes arise from feedback loop ringing.

Noise Pollution

Of course, FS’ argument has always been that his group can distinguish between stripes that arise from improper feedback loop settings and those that are “real”. Unfortunately, the data in the archive really do not support this claim. Again, we’ll provide a detailed analysis in the forthcoming paper, but a single image, entirely representative of the contents of the archive as a whole, is enough to show the limitations of Stellacci et al.’s analysis…

[Figure: uninterpolated digital zoom of an image from FS' archive]

The image above is an uninterpolated digital zoom of one of the images from FS’ archive (yingotmpa2to1011706.006; 196 x 196 nm2; 512 x 512 pixels). Images of this type were used to measure the “periodicity” of ripples in nanoparticles for the statistical analyses described in Jackson et al. JACS 2006 and Hu et al. J. SPM 2009. Quite how one reliably measures a periodicity/ripple spacing for the image above (and all the others like it) is, I’m afraid to say, entirely beyond me.

I’ve noted previously that taking very low resolution STM scans (pixel size = 0.38 nm for the image above) which are analysed via heavily interpolated digital zooms is not, let’s say, the norm in the scanning probe microscopy community. There’s a very good reason for this – why would we be satisfied with low resolution images, where the pixel size is comparable to the features of interest, when we can simply reduce the scan area, and/or increase the pixel density, and “up” the effective resolution?

It is not good experimental practice, to put it mildly, to set the imaging conditions so that the size of a pixel is of the same order as the scale of the features in which you’re interested. This is a bad enough problem for images where the ripple spacing is proposed to be of order 0.7 nm (i.e. ~ 2 pixels), as for the image above. In Jackson et al., JACS 2006, however, it is claimed that the spacing between head groups for homoligand particles is also measured and is stated to be 0.5 nm. (Unfortunately, these data are not included in the archive).

If this 0.5 nm measurement was reached on the basis of a similar type of imaging approach to that above, then there’s a fundamental problem (even if the image wasn’t generated due to feedback loop artifacts). The Nyquist-Shannon sampling theorem tells us that in order to measure the period of a wave without aliasing artefacts, our sampling frequency must be at least twice that of the highest frequency component. In other words, to measure something with a period of 0.5 nm, we should have, as an absolute maximum, a pixel size of 0.25 nm. I have asked FS whether the same imaging conditions (i.e. 196 nm x 196 nm, 512 x 512 pixel) were also used for the homoligand particles. I cannot tell from the archive because the data are not there. (To be fair to FS, I did not previously ask him to provide these particular data).
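
To put some numbers on this (using the figures quoted above, and a minimal Python sketch rather than anything from the papers): a 0.5 nm corrugation sampled with 0.38 nm pixels is undersampled, and what the scan records is an alias at a much lower spatial frequency.

```python
import numpy as np

period = 0.5                       # nm: the claimed head-group spacing
pixel = 0.38                       # nm: pixel size of a 196 nm, 512-pixel scan line

f_true = 1.0 / period              # 2.0 cycles per nm
f_sample = 1.0 / pixel             # ~2.63 samples per nm
f_nyquist = f_sample / 2.0         # ~1.32 cycles per nm: already below f_true, so aliasing is guaranteed
f_alias = abs(f_true - round(f_true / f_sample) * f_sample)
print(f_nyquist, f_alias, 1.0 / f_alias)   # the 0.5 nm period would masquerade as a ~1.6 nm one

# The same thing, seen through the FFT of a synthetic 0.5 nm corrugation sampled at 0.38 nm:
x = np.arange(0.0, 196.0, pixel)
signal = np.sin(2 * np.pi * x / period)
freqs = np.fft.rfftfreq(x.size, d=pixel)
print(freqs[np.argmax(np.abs(np.fft.rfft(signal)))])   # ~0.63 cycles per nm, not 2.0
```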

Compare and contrast

The contrast of the images in the original Nature Materials 2004 paper is quite saturated for some reason. The data archive allows us to look at the original image before contrast adjustment. This is illuminating…

[Figure: Fig. 1a of the Jackson et al. Nature Materials paper (left) compared with the corresponding raw data before contrast adjustment (right)]

The box in the image on the right delineates the area used in Fig. 1 of the Jackson et al. Nature Materials paper (which is shown on the left). As was pointed out to me by a researcher in the group here, what is intriguing is that the ripples extend beyond the edges of the particles. An explanation of this was offered by FS on the basis that the ripples arise from particles in a layer underneath the “brighter” particles seen in the image. That’s certainly one explanation. There are others that spring perhaps more readily to mind…

Three months of stripy nanoparticles controversy

In a post at MaterialsToday.com, David Bradley asked whether nanoparticles have lost their stripes, and concluded:

There are now at least a couple of dozen comments on the THE article itself. If only there were some centralised system for pulling all the arguments together and perhaps tying them to the original papers from Stellacci and from Lévy. Perhaps we will one day see such a development in web 3.0. Meanwhile, we still don’t know for sure whether those gold nanoparticles are stripy or not!

In the absence of web 3.0, here is an attempt at providing a current picture of the controversy, focusing on the scientific arguments. Ethical issues such as data re-use and refusal to provide raw data are covered elsewhere.

First a quick reminder if you have not been following: the stripy nanoparticle hypothesis was first proposed in Nature Materials in 2004 by the group of Professor Stellacci (then at MIT and now at the EPFL). This hypothesis now forms the basis of 26 articles by the same group, mostly published in high impact journals including Nature Materials, Nature Nanotechnology, Nature Communications, Science, Journal of the American Chemical Society, Small, etc: -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 13a, 14, 15, 16, 17, 18, 19, 20, 21, 22 and 23. {apologies for the strange numbering}

ENGAGE? 

A number of technical questions have been discussed not just at Times Higher Education (25 comments) but also here at rapha-z-lab (over 100 comments), on the blog of Doug Natelson (23 comments) and in a few other places. One notable fact is the very unfortunate refusal of Francesco Stellacci and his co-authors to engage in the online discussion (with the exception of one comment by Sharon Glotzer on the article at Chemistry World). This can be contrasted with the way Phil Baran engaged with the Blog Syn assessment of one of his papers as ‘difficult to reproduce’; that latter case nicely demonstrates how post-publication peer review, combined with engagement from the criticized authors, can lead to better science. Stuart Cantrill puts it succinctly (referring not to this blog but to Blog Syn): “ENGAGE (and do it nicely). This is not a witch hunt, it’s for the good of science.”

Scanning Tunneling Microscopy (STM)

The primary evidence for the existence of stripes is STM. In Stripy Nanoparticles Revisited (open access), we argue that the observed stripes are a scanning artefact rather than a feature of the particles. We base our conclusion on several arguments: 1) a simple geometric consideration about projection from a sphere (the particle) to 2D (the image), 2) the direction of the stripes (always perpendicular to the scanning direction), 3) the unlikely correlation between particles (see this video to understand the problem), and 4) Fast Fourier Transform analysis. Stellacci and Yu’s response was published in Small at the same time as our article.
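
The projection argument, at least, needs nothing more than school trigonometry. Here is a minimal Python sketch of the geometry (illustrative numbers only): bands spaced equally along the curved surface of a spherical particle get squeezed closer and closer together towards the particle's edge when projected into the 2D plane of an image, which is the kind of geometric consideration argument 1) refers to.

```python
import numpy as np

R = 3.0                # nm: hypothetical particle radius
arc_spacing = 1.0      # nm: bands equally spaced along the *surface* of the sphere

arcs = np.arange(0.0, np.pi / 2 * R, arc_spacing)   # arc length from the top of the particle
theta = arcs / R                                    # corresponding polar angles
x = R * np.sin(theta)                               # lateral positions in a top-down projection

print(np.round(np.diff(x), 2))
# [0.98 0.87 0.67 0.39] nm: the projected spacing shrinks sharply towards the
# particle's edge, not a constant spacing right across the projected particle.
```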

After the publications, Predrag Djuranovic contacted me. He had already demonstrated, several years earlier, that the stripes were an artefact. He published on my blog results which demonstrate that stripes could be obtained in the absence of nanoparticles, as well as Matlab simulations which show how the STM feedback mechanisms generate those patterns. Predrag was a graduate student in Stellacci’s group at MIT in 2005 when he generated these data… but they had not been published or communicated outside of MIT.

Philip Moriarty, Professor of Physics at Nottingham and a leading STM expert, confirmed our interpretation of the stripes as a scanning artefact and discussed the response of Stellacci and Yu, showing that the ‘stripes’ presented in the new images are ‘a fortuitous alignment of random noise’.

At Doug Natelson’s blog,  SPMer said:

As somebody who has worked on high resolution SPM for many years, the first instinct on seeing the images in the Nature Materials paper mentioned is “those are crappy scans”. […] it should never have been published. It is a disservice both to the field and to the authors of the paper themselves.”

After having looked at Stellacci and Yu’s response, SPMer was disheartened:

[…] as an experimentalist I looked closely at figures 3 and 4 […] Two images are shown, and the authors pick at random two particles and claim that they see lines that match on the two particles. However, if we look at other particles in the figure, there appears to be absolutely no correlation in the two rotated images for the dimples or stripes on the particles. What about an autocorrelation analysis? […]

Still at Doug Natelson’s blog, another STM expert said:

I am an STM-er. For 10 years. Atomic resolution spectroscopy is the only thing I do. Granted I hunt for flat areas to do my work, but I do encounter a lot of nm scale “mountains”. My judgment (without it being worth anything here as I prefer to remain anonymous) is that the STM images in the papers by Stellaci are not proof of ligand ordering. I believe (but can’t prove that without actually repeating these measurements myself) that they are indeed feedback-loop ringing. […]

It is unfortunate that SPMer and the above STM expert chose to remain anonymous (Rich Apodaca’s thoughts on the choice of anonymous science blogging here).

At Rapha-z-lab, AFMhelp, a.k.a Peter Eaton, was one of the first to post a comment; he said:

Good work. The first time I saw those images, I was very very doubtful about them. I think it would be very easy to produce such images, by having some periodic noise in a scanning image of “normal” nanoparticles.

and later (in a discussion of this article):

The AFM data in the Nature Materials paper are nowhere near to being “proof”. Phase imaging of heterogeneity at the (small) molecular level on non-flat surfaces is extremely difficult. There would need to be more images than that shown. The stripes are interesting, but occur only on one or two particles…they do seem to be digital zooms of larger images, since many of the “features” seem to be single pixel in size.The AFM data is basically hard to interpret and more data should have been got before publication. It is also rather confusing in its presentation, and I think it would have been more fair to show the data without the cartoons drawn on top of the data, or at least include this data in the SI. This is in contrast to the original STM data which as discussed previously (and as was pretty much proven by Predrag) is completely artifactual.
Peter Eaton

Pep

Some other aspects of the STM evidence were discussed in minute detail in a series of exchanges involving the anonymous commentator Pep. Predrag Djuranovic, Philip Moriarty, Li Jinfeng, I, and others spent a considerable amount of time answering carefully the 20 or so comments that Pep left here over a period of a few days. Aspects of that discussion were genuinely interesting and led, for example, to a refined understanding of the ‘projection argument’ (using a suggestion from a comment at Doug Natelson’s blog: positive post-publication peer review in action!), but, as Dmitry Baranov noted very early in the discussion (on Twitter), “@raphavisses and yeah, looks like Pep got more than just a quest for fairness there.” For this reason, I won’t attempt to summarize those long-winded arguments in this post (they can be found in particular under these two posts). It turned out that Pep was Pep Pàmies, Editor at Nature Materials, the journal which has published 4 of the stripy articles including the inaugural one (and which rejected a first version of Stripy Revisited; see my Letter to the Editors here). My comment identifying Pep is here, as well as the robust discussion that followed; see also Ben Goldacre’s take on this episode, as well as Pep Pàmies’ note entitled “On my comments on Lévy’s blog” [update 404: Pep has now removed the note from the web!]. Dave Fernig has responded to Pep Pàmies’ apologia for reuse of data on his blog, and Philip Moriarty has provided a comprehensive response as a guest post.

Transmission Electron Microscopy (TEM)

In addition to scanning probe microscopy, it is claimed in the original 2004 article that the existence of stripes is backed up by TEM (and XRD). In Stripy Nanoparticles Revisited (supporting information, section 2), we show that “no conclusion regarding the structure of the capping layer can be drawn from this image”. In their response (and at Chemistry World), the authors repeat that TEM backs up the existence of stripes but do not address our criticism. Post publication, the TEM evidence has also been brought up online by Pep (see above) and others. In a strongly worded comment (a response to Pep), Li Jinfeng wrote:

To claim the existence of stripes made of alkyl thiolate molecules based on a few dark spots in rather bad-quality TEM images…? Unheard of in the TEM community!

Elias wrote (in another thread):

Here is more TEM evidence for monolayer segregation that you seemed to have ignored: http://pubs.acs.org/doi/abs/10.1021/nn204078w

The reference here is to a really interesting paper, but it is not about stripes, nor does it claim to show the existence of stripes, as I responded to Elias here.

The last (?) word on the TEM evidence for stripes goes to David A. Muller, Professor of Physics at Cornell. He is a leading electron microscopy expert and the motto of his group is ‘Understanding Materials, Atom by Atom’. At Doug Natelson’s blog, he says:

Funny thing is the TEM image from their appendix, Fig S2a, that is cited as independent confirmation is also an instrumental artifact. The ring of black dots is the out of focus point spread function (basically a Fresnel fringe). This is a very common problem for casual users of a TEM who are looking for core-shell nanoparticle structures. By changing focus, either a dark or bright ring can be created. Going in to focus will make it go away. The focus of the image can be determined from a quick FFT of the amorphous background, and sure enough, the passband is about a factor of two off from the optimal defocus.

NMR

Just as the stripy controversy was about to become public, an article ‘confirming’ the existence of stripes by NMR was published in Nature Communications. There has been no independent post-publication peer review of it yet. Do you know an NMR expert who could comment on the Nature Communications paper? The invitation for a guest post is open.