
Probes, Patterns, and (nano)Particles


Philip Moriarty

This is a guest post by Philip Moriarty, Professor of Physics at the University of Nottingham (and blogger).

“We shape our tools, and thereafter our tools shape us.”

Marshall McLuhan (1911-1980)

My previous posts for Raphael’s blog have focussed on critiquing poor methodology and over-enthusiastic data interpretation when it comes to imaging the surface structure of functionalised nanoparticles. This time round, however, I’m in the much happier position of being able to highlight an example of good practice in resolving (sub-)molecular structure where the authors have carefully and systematically used scanning probe microscopy (SPM), alongside image recognition techniques, to determine the molecular termination of Ag nanoparticles.

For those unfamiliar with SPM, the concept underpinning the operation of the technique is relatively straightforward. (The experimental implementation rather less so…) Unlike a conventional microscope, there are no lenses, no mirrors, and indeed no optics of any sort [1]. Instead, an atomically or molecularly sharp probe is scanned back and forth across a sample surface (which is preferably atomically flat), interacting with the atoms and molecules below. The probe-sample interaction can arise from the formation of a chemical bond between the atom terminating the probe and its counterpart on the sample surface, from an electrostatic or magnetic force, from dispersion (van der Waals) forces, or, as in scanning tunnelling microscopy (STM), from the quantum mechanical tunnelling of electrons. Or, as is generally the case, from a combination of several of those interactions. (And that's certainly not an exhaustive list.)
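For readers who prefer to see the idea in code, here is a toy, idealised sketch of a constant-current scan (mine, for illustration only, and certainly not how a real controller is implemented): a made-up corrugated surface, an assumed exponential dependence of the tunnelling current on the tip-sample gap, and a simple proportional feedback loop that rasters the tip while holding a current setpoint.

```python
import numpy as np

# A toy, idealised constant-current STM scan: NOT how a real controller is implemented,
# just an illustration of the principle. The surface, the current prefactor and the
# feedback gain below are all made up for the purposes of the sketch.

KAPPA = 1.0e10        # tunnelling decay constant (1/m); of order one inverse angstrom
I_SETPOINT = 1.0e-9   # current setpoint (A)
I_0 = 1.0e-6          # current prefactor (A), chosen so the gap settles at a few angstroms
GAIN = 0.1            # proportional feedback gain

def sample_height(x, y):
    """Hypothetical corrugated surface (m): a stand-in for real topography."""
    return 0.5e-10 * (np.sin(2 * np.pi * x / 1e-9) + np.cos(2 * np.pi * y / 1e-9))

def tunnel_current(gap):
    """Idealised exponential dependence of the tunnelling current on the tip-sample gap."""
    return I_0 * np.exp(-2 * KAPPA * gap)

def scan(nx=64, ny=64, size=5e-9):
    """Raster the tip over the surface, adjusting its height to hold the current setpoint."""
    xs, ys = np.linspace(0, size, nx), np.linspace(0, size, ny)
    tip_z = 5e-10                              # initial absolute tip height (m)
    topography = np.zeros((ny, nx))
    for j, y in enumerate(ys):
        for i, x in enumerate(xs):
            for _ in range(200):               # let the feedback settle at each pixel
                gap = tip_z - sample_height(x, y)
                error = np.log(tunnel_current(gap) / I_SETPOINT)
                tip_z += GAIN * error / (2 * KAPPA)   # raise/lower the tip to null the error
            topography[j, i] = tip_z           # the recorded "image" is the tip-height map
    return topography

image = scan()
```

Even this cartoon captures the essential point: because the current depends exponentially on the gap, angstrom-scale height changes produce roughly order-of-magnitude changes in the signal, which is both the source of the technique's exquisite resolution and a big part of why it is so easy to generate artefacts.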

Here’s an example of an STM in action, filmed in our lab at Nottingham for Brady Haran’s Sixty Symbols channel a few years back…

Scanning probe microscopy is my first love in research. The technique’s ability to image and manipulate matter at the single atom/molecule level (and now with individual chemical bond precision) is seen by many as representing the ‘genesis’ of nanoscience and nanotechnology back in the early eighties. But with all of that power to probe the nanoscopic, molecular, and quantum regimes come tremendous pitfalls. It is very easy to acquire artefact-ridden images that look convincing to a scientist with little or no SPM experience but that instead arise from a number of common failings in setting up the instrument, from noise sources, or from a hasty or poorly informed choice of imaging parameters. What’s worse is that even relatively seasoned SPM practitioners (including yours truly) can often be fooled. With SPM, it can look like a duck, waddle like a duck, and quack like a duck. But it can too often be a goose…

That's why I was delighted when Raphael forwarded me a link to “Real-space imaging with pattern recognition of a ligand-protected Ag374 nanocluster at sub-molecular resolution”, a paper published a few months ago by Qin Zhou and colleagues at Xiamen University (China), the Chinese Academy of Sciences, Dalian (China), the University of Jyväskylä (Finland), and the Southern University of Science and Technology, Guangdong (China). The authors have convincingly imaged the structure of the layer of thiol molecules (specifically, tert-butyl benzene thiol) terminating 5 nm diameter silver nanoparticles.

What distinguishes this work from the stripy nanoparticle oeuvre that has been discussed and dissected at length here at Raphael’s blog (and elsewhere) is the degree of care taken by the authors and, importantly, their focus on image reproducibility. Instead of using offline zooms to select individual particles post hoc for analysis (a significant issue with the ‘stripy’ nanoparticle work), Zhou et al. have zoomed in on individual particles in real time and have made certain that the features they see are stable and reproducible from image to image. The images below, taken from the supplementary information for their paper, show the same nanoparticle imaged four times over, with negligible changes in the sub-particle structure from image to image.


This is SPM 101. Actually, it’s Experimental Science 101. If features are not repeatable (or, worse, disappear when a number of consecutive images/spectra are averaged), then we should not make inflated claims, or indeed any claims at all, on the basis of a single measurement. Moreover, the data are free of the type of feedback artefacts that plagued the ‘classic’ stripy nanoparticle images, and Zhou et al. have worked hard to ensure that the influence of the tip was kept to a minimum.
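To make that concrete, here is a toy version of the most basic repeatability check (my own illustration, nothing to do with the authors' analysis): given consecutive images of the same particle, ask how strongly the frames correlate with one another and what their average looks like.

```python
import numpy as np

# A toy repeatability check (illustration only): given consecutive, already-aligned images
# of the same particle, how strongly do the frames correlate with one another, and what does
# their average look like? The threshold is an arbitrary illustrative cut-off, not a
# published criterion.

def pearson(a, b):
    """Pearson correlation coefficient between two images of the same shape."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def repeatability(frames, threshold=0.8):
    """Frame-to-frame correlations, the averaged image, and a pass/fail verdict."""
    frames = np.asarray(frames, dtype=float)
    correlations = [pearson(frames[k], frames[k + 1]) for k in range(len(frames) - 1)]
    return correlations, frames.mean(axis=0), all(c > threshold for c in correlations)
```

A real workflow would also have to correct for drift and for tip changes between frames, which this toy deliberately ignores.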

Given the complexity of the tip-sample interactions, however, I don’t quite share the authors’ confidence in the Tersoff-Hamann approach they use for STM image simulation [2]. I’m also not entirely convinced by their comparison with images of isolated molecular adsorption on single crystal (i.e. planar) gold surfaces because of exactly the convolution effects they point towards elsewhere in their paper. But these are relatively minor points. The imaging and associated analysis are carried out to a very high standard, and their (sub)molecular resolution images are compelling.

As Zhou et al. point out in their paper, STM (or atomic force microscopy) of nanoparticles, as compared to imaging a single crystal metal, semiconductor, or insulator surface, is not at all easy due to the challenging non-planar topography. A number of years back we worked with Marie-Paule Pileni’s group on dynamic force microscopy imaging (and force-distance analysis) of dodecanethiol-passivated Au nanoparticles. We found image instabilities somewhat similar to those observed by Zhou et al…


A-C above are STM data, while D-F are constant-height atomic force microscope images [3], of thiol-passivated nanoparticles (synthesised by Nicolas Goubet of Pileni’s group), acquired at 78 K. (Zhou et al. similarly acquired data at 77 K, but they also went down to liquid helium temperatures.) Note that while we could achieve sub-nanoparticle resolution in D-F (a sequence of images in which the tip height is systematically lowered), the images lacked the impressive reproducibility achieved by Zhou et al. In fact, we found that even though we were ostensibly in scanning tunnelling microscopy mode for images such as those shown in A-C (and thus, supposedly, not in direct contact with the nanoparticle), the tip was actually penetrating into the terminating molecular layer, as revealed by force-distance spectroscopy in atomic force microscopy mode.

The other exciting aspect of Zhou et al.’s paper is that they use pattern recognition to ‘cross-correlate’ experimental and simulated data. There is an increasingly exciting overlap between computer science and scanning probe microscopy in the area of image classification/recognition, and Zhou and co-workers have helped nudge nanoscience a little further in this direction. Here at Nottingham we’re particularly keen on the machine learning/AI-scanning probe interface, as discussed in a recent Computerphile video…
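As a rough illustration of what such a cross-correlation step can look like (a sketch of my own; the library of simulated orientations and the function names are placeholders, not the authors' pipeline), one can score an experimental image against each member of a set of simulated candidates and pick the best match:

```python
import numpy as np

# A rough sketch of cross-correlation pattern matching (not the authors' pipeline):
# score an experimental image against a library of simulated candidates, e.g. the simulated
# particle rendered at different orientations, allowing for lateral offsets via an FFT-based
# cross-correlation.

def xcorr_peak(experimental, simulated):
    """Peak of the (circular) normalised cross-correlation between two equally sized images."""
    a = (experimental - experimental.mean()) / experimental.std()
    b = (simulated - simulated.mean()) / simulated.std()
    corr = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))) / a.size
    return float(corr.max())

def best_orientation(experimental, simulated_library):
    """Index and score of the simulated image that best matches the experimental one."""
    scores = [xcorr_peak(experimental, sim) for sim in simulated_library]
    best = int(np.argmax(scores))
    return best, scores[best]
```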

Given the number of posts over the years at Raphael’s blog regarding a lack of rigour in scanning probe work, I am pleased, and very grateful, to have been invited to write this post to redress the balance just a little. SPM, when applied correctly, is an exceptionally powerful technique. It’s a cornerstone of nanoscience, and the only tool we have that allows both real space imaging and controlled modification right down to the single chemical bond limit. But every tool has its limitations. And the tool shouldn’t be held responsible if it’s misapplied…

[1] Unless we’re talking about scanning near field optical microscopy (SNOM). That’s a whole new universe of experimental pain…

[2] This is the zeroth-order approach to simulating STM images from a calculated density of states. It’s a good starting point (and, for complicated systems like a thiol-terminated Ag374 particle, probably also the end point due to computational resource limitations), but it is certainly a major approximation.
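For the curious, the heart of the approximation (an s-wave tip, low bias, low temperature) is that, roughly, the tunnelling current simply tracks the sample's local density of states (LDOS) at the tip position:

I(\mathbf{r}_0, V) \;\propto\; \int_0^{eV} \rho_s(\mathbf{r}_0, E_F + \varepsilon)\,\mathrm{d}\varepsilon, \qquad I \;\propto\; V\,\rho_s(\mathbf{r}_0, E_F) \ \text{as } eV \to 0,

so a constant-current image is, to this order, a contour of constant LDOS rather than a map of atomic positions.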

[3] Technically, dynamic force microscopy using a qPlus sensor. See this Sixty Symbols video for more information about this technique.

 

Drug Discovery 2017

This is a guest post by Marie Held reporting from the ELRIG conference held last week.

On 3rd-4th October I attended ELRIG’s flagship event, Drug Discovery 2017, in Liverpool. With around 250 participants, it was the largest of the ELRIG conferences yet. The spacious arrangement of the vendors and posters in the exhibition hall was a refreshing change. There was ample space to mingle, chat and discuss equipment on show.

On day one, I attended the Advances in Imaging stream (one of three parallel streams). The keynote lecture by Tony Ng covered a broad range of spatial scales, stressing the importance of whole-body imaging in cancer, in combination with investigation of the tumour microenvironment, right down to super-resolution imaging of individual molecules. He outlined their attempts at predicting tumour metastasis enabled by immune-system hijacking by the cancer cells. An important conclusion was that, with the wealth of imaging methods and tracers being developed, we need standardisation and validation across facilities to bring them closer to the clinic and ultimately improve the lives of patients. The imaging methods discussed in the following six talks ranged from man to molecule, focussing on ever smaller features as the day went on. A recurring theme was the generation of large amounts of data from different techniques and the associated challenge of deriving meaningful information; machine learning and artificial intelligence were mentioned time and again as being part of that quest. The last scientific presentation, by Charlotte Dodson, focussed on twinkling enzymes, studying the conformational changes of kinases in disease and after treatment via single-molecule spectroscopy. Throughout the imaging stream, twelve men and three women contributed to the presentations, vendor snapshots and poster tasters. The other streams were a bit more gender balanced, but only the workshop on Tuesday achieved a 50/50 split.

On the second day, I attended the Lab of the Future workshop presented by SiLA and ELRIG. The general consensus was that the lab of the future (whether you call it Lab 4.0, Industry 4.0 or something else) is an interconnected space in which smart machines communicate with each other, running fully automated cycles of fabrication, screening and/or testing. Machinery that can be monitored, if not controlled, remotely via mobile device apps was mentioned multiple times. Smart products are uniquely identifiable, may be located at any time and “know” their own history, current status and alternative routes to achieving their target state. It left some of the audience wondering where innovation is going to come from: a lot of innovation is based not on a “Eureka” moment but rather on lucky accidents, or on not quite sticking to the protocol and making mistakes. Such instances are all but excluded in an automated lab. Another doubt that was raised was: where is the space, if not the need, for the scientist in this fully automated lab? “He” has more time to think about the science and efficiency gains rather than processing the work. Unfortunately, the scientist was exclusively referred to as a “he” throughout the whole workshop, which irritated me and another female member of the audience to the extent that it seemed appropriate to clarify that the scientist can be a female scientist. Unconscious discrimination is one of the reasons why there are still so few leading women in science. There was a conspicuous lack of women, both in the audience and, in particular, in the selection of session leaders, who were all male. It would be nice to see some female panel members in the future. Also, this year only one out of 12 session chairs throughout the whole conference was female.

Nearly every panel member in the Lab of the Future workshop voiced that the interconnectivity should scale down to medium and small labs. As a member of the academic research community working in a small lab, I felt somewhat left out, though. We do not generally use automated machinery, never mind machinery connected to the internet of things. Often enough there is a piece of equipment that has to be taken off the network entirely because the software is so outdated (and no longer maintained by the supplier) that it has to run on an obsolete operating system, posing a risk to the University network. That means we are in fact taking a step away from the lab of the future. The audience saw it as the industrial sector’s responsibility to come up with a solution, and I am looking forward to seeing a change in the future. Also, electronic notebooks (find the same presentation here with audio comment) are already a standard in the industrial sector, but the academic sector is severely lagging behind. Not all universities have specific guidelines on how to keep a paper lab book, never mind having a system of electronic lab books in place. The responsibility to catch up here lies with the academic sector, but it might have to be a bottom-up approach that induces the change.

The high point of the second day, and probably of the conference as a whole, was the plenary keynote by Dr Nessa Carey asking whether we can fix big pharma. Her keynote was eloquent, inspiring and also entertaining. We can all do our bit to help fix big pharma. It is not the evil it is often made out to be: millions of lives have been saved by pharmacological advances, and still are being saved; however, the industry suffers from the worst PR there is.

Overall, I enjoyed ELRIG Drug Discovery 2017 and am looking forward to the next instalments, in London in 2018 and back in Liverpool in 2019.

 

Guest post: SmartFlares fail to reflect their target transcripts levels

This is a guest post by Maria Czarnek and Joanna Bereta, who have just published an article in Scientific Reports entitled “SmartFlares fail to reflect their target transcripts levels”.

We got the idea of using SmartFlare probes when working on generating knockout cells. In the era of CRISPR-Cas9 genome editing, the possibility of sorting out knockout cells based on their low target transcript content (mRNAs that contain premature stop codons are removed in a process called nonsense-mediated decay) instead of time-consuming testing of dozens or thousands of clones would be a great step forward. SmartFlare probes seemed to be just the ticket: no transfection, lysis or fixation needed; moreover, the probes were supposed to eventually leave the cells. We were full of hope as the first probes arrived.