Seven years of imaging artifacts: What gives?

This is a guest post by Predrag Djuranovic, formerly a graduate student at the MIT Department of Materials Science and Engineering.

In 2005, I was a graduate student in Francesco Stellacci’s lab at MIT. My project was to investigate a potential phase separation in the ligand shells of semiconductor nanoparticles. I explain below how, after months of strenuous STM imaging, I came to the conclusion that the “ripples” and “hexagonal packing” were nothing but common scanning artifacts, called feedback oscillations or “ringing”.

When I started to have doubts, I performed simple control experiments: STM imaging of bare conducting substrates, i.e. clean substrates without any ligands. I selected two conductive substrates: gold foil (surface roughness comparable to the size of gold nanoparticles) and ITO glass (a relatively flat surface with islands in the 20-50 nm range). STM scans of these surfaces, using the same instrument and settings similar to those used by Jackson et al., led to the images shown in Fig. 1. They are remarkably similar to the STM images that had just been reported in Nature Materials.

Figure 1. Feedback artifacts on bare gold foil (left) and plain ITO substrate (right); Predrag Djuranovic, unpublished results, MIT 2005

Obviously, these features are neither ligands nor any kind of nanostructure on the surface, since those surfaces were not functionalized. What are they?

The images are collected in constant-current mode: the tunneling current and the voltage across the tunneling junction are set to particular values. The tunneling junction is connected to a feedback loop that controls the movement of the STM tip: if the tunneling current is above the set value, the feedback loop retracts the tip away from the surface, thus decreasing the tunneling current, and vice versa.
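
To make the feedback logic concrete, here is a minimal sketch in Python; the exponential current model is the standard textbook one, and all numerical values are generic orders of magnitude rather than settings from any of the papers discussed:

```python
import numpy as np

# Tunneling current falls off roughly exponentially with the tip-sample gap:
# I(z) ~ I0 * exp(-2 * kappa * z), with kappa on the order of 1 per Angstrom.
I0, kappa = 10.0, 1.0   # nA and 1/Angstrom: generic magnitudes, not fitted values
I_set = 0.5             # nA, the set-point current

def tunneling_current(gap):
    return I0 * np.exp(-2.0 * kappa * gap)

gap = 3.0               # Angstrom, instantaneous tip height above the surface
error = tunneling_current(gap) - I_set
# Positive error = too much current = tip too close: the loop retracts the tip
# (increases the gap); negative error makes it approach. The gain (0.1 here)
# sets how aggressively the correction is applied.
gap += 0.1 * error
```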

It is normal to deviate from the set point by a few percent, but the scans reported in the Nature Materials 2004 article, as well as those presented in the images above, show deviations from the set point of a full order of magnitude.

There are three parameters fundamental to feedback circuits in STM:

1. Proportional gain (the error signal is multiplied by this gain and the result is sent to the piezo)

2. Integral gain (multiplies the error integrated over the scan)

3. Differential gain (multiplies the difference between the current and the previous tunneling current readings)

These parameters are extremely important because they directly control the movement of the STM tip. Setting the gains, scan rate and scan size correctly is crucial to obtaining a good STM image. Scanning larger areas (> 50 nm) immediately requires dropping the scan rate so that the feedback loop has enough response time to track the set current. Scanning too fast over large areas drives the feedback system into an unstable mode of operation and generates artificial features.
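
For readers less familiar with control loops, the three gains combine into the textbook discrete PID law sketched below; this is a generic controller, not the firmware of any particular STM:

```python
class PID:
    """Textbook discrete PID controller; kp, ki, kd are the proportional,
    integral and differential gains listed above."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        # 'error' is the measured tunneling current minus the set point.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # The output drives the z-piezo. Gains set too high for the chosen
        # scan rate make this output overshoot and ring instead of settling.
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```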

To better understand the tip response to sudden changes in topography, I turned to elementary control theory: I implemented in matlab a simple second-order feedback control system of the kind commonly found in STM electronics [2]. The image it generates (Fig. 2, left) is remarkably similar to those obtained on ITO and to those published in the Nature Materials 2004 article.

Figure 2: Feedback oscillations over a spherical surface; LEFT: generated by a second-order linear control feedback loop (for the matlab script, see [2]; Predrag Djuranovic, unpublished results, MIT 2005); CENTER: feedback oscillations on an ITO surface (zoom from Fig. 1); RIGHT: adapted from Jackson et al., Nature Materials 2004.
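
For readers without matlab, here is a rough Python analogue of the idea behind [2]: a single scan line over a hemispherical bump, tracked by a deliberately over-gained proportional-integral loop. Like [2], this is not a simulation of real STM dynamics; every number is invented purely to provoke ringing:

```python
import numpy as np

# One scan line over a 10-unit hemispherical bump (all quantities arbitrary).
x = np.linspace(-20.0, 20.0, 400)
topo = np.where(np.abs(x) < 10.0, np.sqrt(np.maximum(100.0 - x**2, 0.0)), 0.0)

kp, ki = 0.5, 0.2          # too aggressive for this step size: ringing
i_set = np.exp(-2.0)       # set point = current at a gap of 1 unit
z = topo[0] + 1.0          # start at the correct gap
integral = 0.0
trace = np.zeros_like(x)   # the recorded "image" is the tip height

for i, h in enumerate(topo):
    gap = z - h
    current = min(np.exp(-2.0 * gap), 50.0)  # crude preamp saturation guard
    error = current - i_set
    integral += error                        # integral term winds up at edges
    z += kp * error + ki * integral          # feedback moves the z-piezo
    trace[i] = z

# 'trace' overshoots and shows damped oscillations trailing the edges of the
# bump; stacking many such lines into an image yields stripe-like ripples.
```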

To conclude: in 2005, while in Francesco Stellacci’s group, I showed that the features in the STM images of the 2004 paper result from scanning probe artifacts. They are not representative of any surface ligands.

To obtain STM images that are less prone to scanning artifacts, I suggest imaging smaller areas, for example 20×20 nm, and zooming in on a few nanoparticles. Scanning large surface areas (100×100 nm) at 512×512 resolution, an approach commonly used by Francesco Stellacci, introduces an intrinsic lateral uncertainty of 0.2 nm per pixel and drives the STM feedback circuit into an unstable mode of operation. To prove the validity of STM images taken in constant-current mode, it would also be useful to share the error signal and show that it is neither oscillating nor significantly overshooting the set current value.
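
The 0.2 nm figure is simply the scan size divided by the number of pixels per line; a quick check for the two scan sizes mentioned above:

```python
for scan_nm in (100, 20):
    print(f"{scan_nm} nm / 512 px = {scan_nm / 512:.3f} nm per pixel")
# 100 nm / 512 px = 0.195 nm per pixel   (the ~0.2 nm quoted above)
# 20 nm / 512 px = 0.039 nm per pixel
```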

Seven years after these events, I am still wondering how so many high-impact papers based on immediately apparent scanning probe artifacts came to be published, and how these artifacts consistently made it through multiple rounds of peer review.

Comments are more than welcome.

[1] Predrag Djuranovic, Feedback Oscillation in STM Imaging, 2005. https://dl.dropbox.com/u/8856561/ring.pdf

[2] Predrag Djuranovic, matlab script used to generate the left panel of Fig. 2. https://dl.dropbox.com/u/8856561/sphere_PID_3D.m. Note that this is not a simulation of real STM dynamics, but a demonstration of the kind of topography that can be rendered under improper integral/proportional gain settings.

Jackson, A., Myerson, J., & Stellacci, F. (2004). Spontaneous assembly of subnanometre-ordered domains in the ligand shell of monolayer-protected nanoparticles. Nature Materials, 3(5), 330–336. DOI: 10.1038/nmat1116

30 comments

  1. Very good points.

    One point about the peer-review system: it is slow and not without problems.

    Just consider that the fraction of people in the broad community of nano-, bio- and chemistry researchers who are SPM professionals able to recognize artifacts is very, very small. This is even more true of editors. So the chance of a paper being reviewed by a proper expert is not very high, especially if it contains few references to professional SPM papers and inexperienced editors are excited about new discoveries. I have personally been involved in several cases (not these) in which flawed papers were published in very respectable journals even though I noted severe SPM artifacts in my reviews and recommended against publication.

    Vlad

  2. Is it correct to imply that these results, obtained in Francesco Stellacci’s group, were discussed with him? If so, I cannot understand why he has neither retracted the original paper nor published some call for caution, i.e. indicating that such stripy patterns can also be produced deliberately, as artifacts, in the absence of any ligands that could account for the observation. Instead, he based additional papers on something he must have known rested on shaky ground. I feel that I am not getting the full story here.

    1. Yes Mathias, it is correct to imply that these results, obtained in Francesco Stellacci’s group, were discussed with him.

  3. I’ve been trying to keep an open mind about this and to be as fair as possible to Francesco Stellacci, but this post is particularly damning. As Mathias (Brust) says above, if Predrag’s findings were indeed discussed with Francesco (as Raphael confirms, and as one would expect), then those papers should have been retracted.

    @Vladimir Tsukruk: The key issue here is that the artifacts Predrag points out are not some strange, rare, esoteric effect that appears only occasionally. Any scanning probe microscopist with even the most basic training in the technique is well aware of the influence of feedback loop instabilities and ringing. As I said in an e-mail to Francesco last weekend:

    “I will ‘trust’ an SPM image – in the sense that I can accept it is giving me reliable structural/electronic/chemical information about the sample – only if the images are reproducible in the following senses:
    (a) from scan to scan, (b) under a variety of scan conditions (e.g. appropriate variations in scan speeds, gain parameters), and (c) with a variety of tips.

    For the Nanoscience group here – and, I must say, the same holds true for the vast majority of other groups I know – these criteria represent ‘SPM 101’. These are the ‘ground rules’ and they’re drummed into every graduate student and postdoc because otherwise it’s possible to make inflated claims on the basis of experimental artefacts. I’ve lost count of the number of times I’ve had to send an excited 1st year PhD student or new postdoc (or myself!) back to the lab. to ask them to repeat measurements when they’ve burst into my office exclaiming that they’ve found a wonderful new structure that no one has seen before. What can be weeks of control experiments (particularly in a UHV/5 K experimental environment) are not fun, but they’re absolutely essential.”

    Interestingly, I also noted to Francesco in that e-mail that:

    “I am also willing to lay good odds that I could produce each of the images you show in that JACS paper from a layer of entirely unfunctionalised Au nanoparticles.”

    This is very much along the lines of what Predrag has done. He has acted with scientific rigour and carried out the appropriate control experiment where, as he clearly explains above, precisely the same features are indeed seen for surfaces which are not ligand-terminated.

    The broader problem here is that I cannot understand how supposedly high quality journals of the prestige of, for example, the Nature Publishing Group titles, could have published papers which are based around scanning artifacts which are clear to even novices in the field, let alone supposed experts.

    Scientists are continually striving to get their work into ‘those’ high impact factor journals. (In the UK, for example, there is a direct financial imperative for universities to ‘encourage’ researchers to publish in journals which are considered to be of 3* and 4* quality in the research excellence framework (REF)). One would expect/hope that those journals would have rigorous peer review processes. In this case, the level of peer – and, indeed, editorial – review is shockingly low. Do those NPG journals really deserve their prestige when work which is *so* riddled with obvious artifacts, as in this case, has been published not just once, but on multiple occasions?

    One might almost think that the ‘iconography’ of a striped nanoparticle (and the associated colourful images) might have played a role in encouraging editors to overlook any possible artifacts in the images. Surely not.

    Once again, *thank you* Predrag for your extremely important input to this debate.

    Philip

    P.S. On the ‘statistical illiteracy’ of journal impact factors, this will be of interest: I’m Sick Of Impact Factors

  4. The problem here is perhaps with the area of science: a new subdiscipline that combines aspects of many older subdisciplines (synthetic organic chemistry to make the ligands, inorganic chemistry to make the functionalised gold clusters, physical chemistry and surface science for SPM and other advanced characterisation methods, and that’s just the chemical aspects, without taking into account the life science aspects), each with its own unwritten code (‘ground rules’) that specialists would know, but not necessarily outsiders adapting its methods for their own work.

    This problem then extends to refereeing: I am at present refereeing a cross-disciplinary submission for one of the Nature ‘X’ journals and if I’m honest I can only fairly assess, at a deep technical level, about 20% of the content, and maybe have a rough idea of the validity of another 20-30%. The paper appears to be really interesting and novel- but should I trust that the other 80% of the measurements are conducted to the same excellent standard as the work I am competent to judge? Of course I can (and will!) tell the editorial team of my limitations, but let’s say they get three referees, each competent to assess about 20%, that they are lucky and none of the referees’ expertise overlaps, and each referee is positive. That’s still a recommendation to publish, with 40% of the paper that hasn’t really been assessed by an expert, and not all referees are willing to admit that they have ANY limitations (in my experience…). I imagine this scenario is quite common. Of course, once such a paper is published, perhaps with a glaring error that an expert could spot, it has the glow of ‘peer-reviewed publication in a high-impact journal’ about it and, as Raphaël has found, it is difficult to publish a subsequent paper taking issue with it.

    1. Hi, Simon.

      I agree entirely re. the difficulty of reviewing cross-disciplinary work. A very important aspect of the “striped nanoparticle” controversy, however, is that any scanning probe microscopist with even a modicum of experience with the technique should have questioned the validity of the images. Feedback loop ringing is very common if the gains are set too high and, with even the most rudimentary training, is something that should have been spotted.

      I find it absolutely remarkable that what is effectively a ‘schoolboy’-level artifact was not identified and questioned by the referees of the papers. I would very much hope that at least one of the editors would have selected a leading scanning probe microscopist to review the work. (If not, why not?) An SPM expert should have immediately questioned the images and asked to see consecutive scans showing the *same* stripe patterns for the same particle, and evidence that the stripes scaled appropriately with scan size. These basic checks for reproducibility have apparently not been carried out for any of the publications.

      Philip

    2. Again, I have to agree with Simon. I have long been ranting about the diversity of manuscripts I am asked to review merely because somebody has done something with gold nanoparticles, a field in which I am recognised as an expert. These can range from single-electron transistors to cancer therapy to the safety of large-scale hog farming operations (this is actually true). I normally do not review such manuscripts, but nobody would stop me if I did. Interestingly, journals whose submissions are handled by academic editors, for example all ACS and some RSC journals, almost without exception send me manuscripts to review that are spot on in my area of competence, or at least have a significant component that I can competently assess, and I would limit my comments to that component. Journals with appointed professional editors, who are not themselves leading academic experts, send me manuscripts for review that could have been picked by a computer (and probably have been) based on a list of keywords. If I start to review these, and perhaps some colleagues with delusions of grandeur do so, anything becomes possible.

      Unfortunately, the pressure to publish in the highest-impact-factor journals constantly increases, and few of us are gentleman scientists (I know none) who can afford to ignore it. For this reason, I believe we will increasingly see papers that should not have passed peer review published in top-tier journals. I have no constructive suggestions at the moment on what could be done to stop this trend.

      1. @Mathias

        Your interpretation of the growing complexity of peer review for multidisciplinary work could explain the consistent failures of Nature Materials to catch the errors propagating through Stellacci’s “stripy” publications. The most amazing thing is that Stellacci’s nanoparticle STM work is so obviously erroneous that anybody with the slightest modicum of SPM experience can catch it. Yet Nature Materials can’t. That simply implies that the STM work has either not been reviewed at all, or has been reviewed by somebody without the required working knowledge of STM.

        Now, what does one need to do to correct “bad” science? Let Adam Smith’s invisible hand rectify the issue, or do something more proactive, such as blogging? Well, we saw what the invisible hand does to the economy.

        Unfortunately, I too have to conclude that I don’t have any constructive suggestions, beyond voicing my opinion under my real name and hoping that somebody will listen…

    1. @Douglas: no, I am just reporting my findings from 2005. I thought it might be beneficial to weigh in and contribute to general awareness of the existence of scanning probe artifacts. They certainly tricked Stellacci, and perhaps the same thing is happening elsewhere even as we blog.

  5. It is a factual account of his experience and the argument he made at the time. What in the above do you think constitutes accusation of misconduct?

    1. Well, between this post and the comments, it’s established (1) that Stellacci was made aware of this issue several years ago; (2) Stellacci continued and continues to publish papers claiming that these stripes are real. One possibility is that Stellacci has an argument for why these particular imaging artifacts are not relevant (an argument that is not presented here, though I have not read through all of the associated material), and genuinely believes that Djuranovic is wrong. The other possibility is that Djuranovic thinks that Stellacci is deliberately publishing an interpretation of data that he (Stellacci) knows to be incorrect. I have no stake in this – I’m just trying to get to the question of whether you and Djuranovic think Stellacci is fooling himself, or whether you think he is deliberately, intentionally ignoring contrary evidence.

      1. Stellacci et al. acknowledge the presence of “noise” in their images, according to their JACS 2007 publication “From Homoligand- to Mixed-Ligand- Monolayer-Protected Metal Nanoparticles: A Scanning Tunneling Microscopy Investigation”. Their Figure 3 indicates that they are able to distinguish between “noise” (scan speed dependent) and “reality” (scan speed independent). So far so good. Until you realize that this is most likely another error: they hand-pick what ought to be “ripple” and demonstrate that it is scan speed independent. The text, of course, does not mention how they actually do this: is it by random inspection, or by some ripple recognition algorithm? Jackson et al. only say that “each point in the plots is the average of multiple measurements; the calculated standard deviations are shown as error bars in the plot”. It doesn’t say how many measurements they took, but let’s assume they did one hundred on every image to get the reported statistics. Each pair of “stripies” is about 10 pixels at most. So their averaging includes, at best, a total of about 1,000 pixels. Their STM images are collected at 512×512 resolution, or 262,144 pixels. Therefore, their scan-independent “reality” statistics are based on 1,000/262,144 ≈ 0.4% of the entire image. So it seems they have some miraculous method that can tell that 0.4% of the image is not an artifact, but 99.6% is? How likely is that? They also forgot to superimpose the lateral uncertainty of 0.2 nm per pixel (100 nm/512) on their error bars. So enlarge their error bars by ±0.2 nm, and keep in mind that it’s 0.4% of the image. As I said before, what gives?

        On top of this, note that they are performing the statistics on something that is probably not even a nanoparticle (cf. Figure 1, left, in my blog entry, which shows the imaging substrate, gold foil). Can you tell the difference between a nanoparticle in JACS Figure 3 and my Figure 1, left? Actually, you can: my image of the surface roughness of gold foil looks more like a nanoparticle.

        To conclude: according to his JACS publication, Stellacci recognizes that noise is present in 99.6% of the surface area of his images, while 0.4% is surface ligands. I think it’s rather 100% noise.
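
        For the record, the arithmetic can be checked in a couple of lines (the one hundred measurements and ~10 pixels per “stripy” pair are the generous assumptions stated above):

        ```python
        pixels_total = 512 * 512    # 262,144 pixels per image
        pixels_used = 100 * 10      # ~100 measurements x ~10 pixels per "stripy" pair
        print(f"{pixels_used / pixels_total:.2%}")   # 0.38%, the ~0.4% quoted above
        ```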

      2. Doug, you have employed the word “misconduct” and Philip has employed the word “retraction”. These are very serious issues and need great care. I will give my take on this in a future post. I don’t think we can or should look at it in the way you are suggesting above though.

        We cannot investigate what happens or happened in Stellacci’s head. We will never be able to distinguish between mistaken belief and deception, so we had better just assume that Stellacci has been writing his articles in good faith.

        Having said that, we can look at the evidence, and while there does not appear to be fabrication of data, there has been, among other things, duplication of data, claims not substantiated by evidence, and cherry picking.

  6. Pedja’s analysis of the data in the JACS paper is spot-on. The question of error bars in particular needs special attention. It is not at all clear how those error bars are defined when it’s practically impossible to distinguish even one period of the stripes/ripples, let alone accurately measure it to *sub-Angstrom* precision (!), as implied in Fig. 3 of that paper.

    I have asked Francesco to send me some of the raw data for this paper – I’d like to give it to a number of independent researchers and see what type of values they extract. I think that there is a great deal of observer bias inherent in the interpretation.

    Note in particular the clear ‘cherry picking’ of the results. In the lower left corner of Fig. 3(a) there are features on the gold “mounds” of the substrate that provide evidence for ‘stripes’ easily as compelling as that claimed for the nanoparticles. Yet those features on the ‘mounds’ are ignored…

  7. I would like to know more about Predrag’s project.

    What was the targeted semiconductor colloid (err..nanoparticle)?

    Was the intention to basically replicate/expand the work on Au-stripeys to semi-cond-stripeys?

    Was any experimental work done on functionalized semi-cond particles? Was feedback ringing seen? Or perhaps not seen (because of more careful scanning), and then FS pushed for finding them… and the grad student (naturally) responded by testing whether the Au-stripeys might be artifacts (thus it is not hard to understand why this could not be done in another system)?

    P.S. The comment from Jodie is very interesting from a human perspective. Predrag is not sharing all the info with us, but one has to suspect that his career was derailed by an instructor pushing a student to work on an area that was mistaken to begin with. This is one of the real damages of bad science and of bad advisors.

  8. Hi Nony. Indeed, I am not sharing the minutiae of my interpersonal conflict with Francesco and the associated co-workers on the first Nature paper. I don’t think it belongs in this forum, and it does not contribute in any constructive way to the scientific debate.

    My career was not derailed by my dispute with Francesco. From my interactions with him and other faculty members, I realized that an academic career was definitely ill-suited to my personality. So I would thank Francesco for making me aware of this misfit “early” in my PhD track. Instead, I moved on to molecular simulations and to work outside of academia, more specifically in the financial sector. Once my schedule eases up, I will publish my long-overdue simulation work.

    I would also like to thank the many graduate students, faculty and MIT administrators who were very supportive during and after my fallout with Francesco. More specifically, the Office of the Vice President for Research determined that my concerns regarding Francesco’s STM methodology, as presented in the Nature paper, were fully corroborated.

  9. I ain’t buying it. I think you’re smarter than Jackson (and the cavalier-looking FS) and more curious. I think you screwed up picking an advisor. Always, always, always go for the old ones. Never, never, never pick a tenure-track one. Power to the people and free food for grad students at receptions! 🙂

    If I were your advisor, I’d have kept you interested and squeezed work out of you. Left you with a few more years to discover what a bunch of weasels the professors are. Except for Phil and Raph, of course.. 😉

    P.S. Don’t be a finance slut. That derivatives stuff is way overrated. I don’t care if you are solving hard diff EQs. Most of the people in that field haven’t really read and thought through (really thought through, in terms of arbitrage, history, etc.) the lessons in a book like Brealey and Myers.

    If you don’t want to be in academia, what about getting into geology (US shale production is booming)? Truck drivers make 100,000. Lower-level managers can take home three to four hundred K. Plus it’s science, and really… wealth comes from the earth, not from trading derivatives. Yeah, it’s politically incorrect and not like chasing Boston skirt. But you could be where men are men… and sheep are scared.

    P.P.S. Leaving aside the drama (and thanks for putting up with my teasing), I’m still interested in whether you actually synthesized any ligand-semiconductor particles (what is the bonding, if not Au-S)? How does the STM work if the nanoparticles are insulators (semiconductors have orders of magnitude lower conductivity than gold)? Did you see ringing in another system (when scanning fast, etc.)? Oh… and if the system was ITO, that isn’t really a semiconductor. It’s a poor man’s shitty metal. The Fermi level is at [the bottom of] the conduction band.
