Novelty, reproducibility, and data sharing in (nano)materials science

Half-random ranty post that might develop into something more structured at some point… Feedback very much welcome.

Andrew Maynard has blogged about the extent to which novelty should (or, in fact, should not) be the main consideration in the evaluation of nanomaterial risks (initially published as an editorial in Nature Nanotechnology). It’s entitled “Is novelty in nanomaterials overrated when it comes to risks?” and is well worth reading in full. A central point is that:

Novelty as a result is a subjective, transient, and consequently a rather unreliable indicator of potential risk. It tends to obscure the reality that conventional behaviour can sometimes lead to harm, and that mundane risks are still risks. And it favours the interesting (and possibly the headline-grabbing) over the important. But if novelty is an unreliable guide to potential risk, how can approaches be developed that help identify, understand and manage plausible risks associated with emerging materials and the products that use them?

Apparently unrelated (but wait for the next paragraphs), there are various initiatives to encourage or even mandate the sharing of data related to the characterization of (nano)materials. It is thought that this will boost innovation and facilitate the coming together of computational and experimental work. Maybe the most impressive and concerted effort comes from the White House Office of Science and Technology Policy (OSTP), as exemplified by this post, It’s Time to Open Materials Science Data. Publishers have smelled something and are moving into the business of providing services for data sharing and curation: NPG launched Scientific Data in partnership with FigShare; Elsevier has just launched an initiative specifically targeted at open data in materials science.

Now for the (arguably subtle and tenuous) link. Novelty is overrated not just when it comes to risk. It is overrated in materials science full stop. This seems counterintuitive; surely scientific endeavour in materials science is about discovering new materials. The problem here (and arguably the opportunity too) is that there is an immense combinatorial space of potential new materials. We work on peptide-capped gold nanoparticles. By varying the peptide sequences and making various mixed monolayers, we can potentially generate hundreds of novel materials every day (and we do make a fair number). The combinatorial space of potential nanomaterials vastly exceeds the number of potential molecules. Most of these materials are not interesting, but they are novel: nobody made them before.
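To make the combinatorial claim concrete, here is a back-of-envelope sketch. The numbers are illustrative assumptions, not measurements: it assumes the 20 proteinogenic amino acids and, for mixed monolayers, arbitrarily distinguishes a handful of mixing ratios per peptide pair.

```python
# Back-of-envelope count of peptide-capped nanoparticle variants.
# All parameters here are illustrative assumptions, not data from the post.
from math import comb

N_AMINO_ACIDS = 20  # proteinogenic amino acids

def peptide_sequences(length: int) -> int:
    """Distinct peptide sequences of a given length (20 choices per residue)."""
    return N_AMINO_ACIDS ** length

def binary_monolayers(num_peptides: int, ratio_steps: int) -> int:
    """Two-peptide mixed monolayers, counting a few distinguishable mixing ratios."""
    return comb(num_peptides, 2) * ratio_steps

hexapeptides = peptide_sequences(6)  # 20**6 = 64,000,000 sequences
print(f"hexapeptide sequences: {hexapeptides:,}")
print(f"two-peptide monolayers (5 ratios): {binary_monolayers(hexapeptides, 5):,}")
```

Even with hexapeptides only, the two-component monolayer count runs into the quadrillions, which is the point: exhaustive exploration is hopeless, so general principles matter more than individual novel instances.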

I see a lot of research articles which can be summarised as

  1. This is a novel nanomaterial (and it truly is: nobody has made this gold-nanorod-with-carbon-dots-at-the-tips-graphene-oxide-on-the-side-and-some-antibody-labelled-conductive-polymer-wrapped-around before [1])
  2. It could be used for [delete as appropriate] energy/biological imaging/curing cancer (and it never will be).

When it comes to safety, Andrew argues convincingly that the focus should be on plausible scenarios rather than on novelty. When it comes to what should be curiosity-driven science, there seem to be a lot of new materials generated for the sole purpose of highly improbable applications rather than in the pursuit of general principles that would help us explore the materials landscape. This has the very unfortunate consequence that the materials characterisation is often poor, limited to whatever is thought to enable the envisioned application. An extremely large proportion of these new materials are made by a single group for the purpose of a single paper, and the experiments are never reproduced independently. Capturing all of this data in platforms that are open and suitable for data mining is a noble and worthwhile purpose which I support, but it must be accompanied by a change of focus and higher standards of characterisation; otherwise I fear it will not help understanding much.

Thanks to those who chronicled the reaction of materials scientists to an OSTP presentation at the MRS conference in Boston in December 2014.

[1] Charles Spencer and Edna Purviance, “Novel Nano-Lychees for Theranostics of Cancer,” Nature Matters-to-all 7, 101–114 (2015).



  1. I agree that the community can sort the good from the bad. My gripe is when I get a paper for review and the journal asks me to comment on its novelty. Who cares if there is novelty if the novelty is trivial? – Scott

    1. Thanks for the comment. The difficulty (impossibility) of evaluating novelty and significance independently is maybe best illustrated by your follow-up question, “Is the novelty significant or trivial?”…

      For the record, I actually think that journals should only assess the soundness and quality of the data, and leave it to the community to do the sorting and evaluation of novelty and significance (which might evolve dramatically over time) through post-publication peer review. See this post for more on my views on publishing.

      My concern is that, in the current publishing system, an awful proportion of what gets published would not be suitable for building a robust database useful for data mining etc., not because these papers are not novel and significant, but because the standards of data quality and reproducibility are not good enough. And they are not good enough because the incentives are not in that direction (yet).

      I hope (but am not sure) that this clarifies things?


  2. What I hate is hype science. In some ways, it is dishonest. And at a minimum, it is confusing: just freaking noise drowning the signal.

    I don’t think there is anything wrong with data-point science, with stamp collecting, but BE HONEST about it. You can even discuss possible implications, but do it in a more understated fashion, at the end. Keep the real science at the front. And the importance discussion should not be about hyping yourselves, but about making sure someone in industry can get the “so what”. In other words, serve the reader. Don’t BS them. That needs to be the mindset.

    P.s. My most cited paper was a phase diagram. A little bit of properties data in there as well, and on relevant materials. But not a “one pot synthesis of magic foofoo powder”. So, if you want to up your h-index, here is a little secret: do foundational work. You might even help mankind and justify your salaries (don’t get me started on the diminishing returns of taxpayer-funded neoliberal “Big Science”!) 😉


  3. I think that one of the main problems we have at the moment in nanoscience is that many people are not fully acknowledging that reproducibility is a major problem. Before going into examples, I would like to say that this is not only a problem in nanoscience (there are a lot of similar debates in biology and medicine).
    One of the features of nanoscience is that it is not robust. What I mean is that a small difference in the starting conditions of a material synthesis can cause a large change in properties.
    One example I have a lot of personal experience with is nanoparticle synthesis. For many years, gold nanoparticle synthesis was very hard to reproduce. Fortunately, in that particular case, some good chemists took it upon themselves to try to understand why, and now we know that the synthesis is very dependent on the amount of iodine impurities in the CTAB used for capping. There are many other systems where synthesis conditions that are usually not reported in articles, like the stirring speed, can significantly affect particle size and size distribution (for example in silver nanoparticle synthesis). And not only in chemical synthesis: in top-down approaches too, impurities in the system can cause nanometre-thick layers to have totally different properties from those made without the impurities, and in many cases people were not aware of the existence of these impurities.
    Not only on the synthesis side, but also on the characterization side, a lot of properties of nanostructures are being measured close to the noise limit, which makes artifacts much more common than in other fields.
    I am sure that the community that reads this blog is much more aware of these problems than the general nanoscience community, but I feel not enough is done to educate “nanoscientists” at large about these problems and how to deal with them.


