Last year, Matthew Faria et al. published "Minimum information reporting in bio–nano experimental literature", introducing a checklist (MIRIBEL) of experimental characterisations that should accompany any new research paper. 12 months later, the same journal has published 22 (!!!) short opinion pieces in response. As I feel particularly generous (and a bit facetious) today, I shall summarise those 22 pieces in 2 sentences.
- There are authors who feel that MIRIBEL is great and should be implemented, although really colleagues should also consider using these other characterisation techniques (which they happen to be developing/proposing in their lab/European network [INSERT ACRONYM]).
- There are authors who think that there is a risk that MIRIBEL standardisation will stifle creativity and innovation (and they also regret that the MIRIBEL authors haven’t cited their editorials deploring irreproducible research).
Thankfully, there are more interesting takes from young researchers on Twitter (why do we need journals again?).
Wilson Poon remarks that the sheer number of acronyms for nano-bio related guidelines & databases is insane; he remains unconvinced that making new guidelines is the best way to address the current “significant barriers to progress in [nanomedicine]”, and, even more damningly, he notes the hypocrisy of many researchers in the field who “just talk the talk, and not walk the walk”.
A few thoughts on the new @NatureNano correspondence piece “On the issue of transparency and reproducibility in nanomedicine” & MIRIBEL (Minimum Information Reporting in Bio–Nano Experimental Literature) 🤔🤔🤔 https://t.co/Ak3H1MmzKV
— Wilson Poon (@wilsonpoon) July 4, 2019
Shrey Sindhwani demands quantification of what is happening to particles at a cellular and sub-cellular level, multiple lines of evidence, and the use of appropriate biological controls. He makes two other really important points: 1) he demands critical discussion of what is in the literature; 2) he says we need replication: multiple groups should try to reproduce core concepts of the field for their own systems. This involves mechanistic studies of what the body does to your specific formulation, which will define the scope of a broad concept and its applicability.
My 2 cents: 1. We need quantification of what is happening to particles at a cellular and sub-cellular level. This quantification cannot be lax and superficial. A certain detail level needs to be there!
— Shrey Sindhwani (@ShreySindhwani) July 5, 2019
I largely agree with Wilson and Shrey. MIRIBEL may be well intentioned (and so are most responses), but it is not digging in the right place, and that is because its authors might otherwise find skeletons that they’d rather not find. This is very explicit in the original MIRIBEL paper:
… our intention is not to criticize existing work or suggest a specific direction for future research. The absence of standards and consistency in experimental reporting is a systemic problem across the field, and our own work is no exception.
God forbid criticizing existing work. If we start there, people might even consider criticising our own work, and then where will it stop? We might have to answer difficult questions at conferences?! That would be scientific terrorism.
Better reporting guidelines are not the solution because they do not address the core of the problems we are facing. In his 2012 paper entitled “Why Science Is Not Necessarily Self-correcting”, John P. A. Ioannidis noted that
Checklists for reporting may promote spurious behaviors from authors who may write up spurious methods and design details simply to satisfy the requirements of having done a good study that is reported in full detail; flaws in the design and execution of the study may be buried under such normative responses.
This is exactly what will happen with MIRIBEL. Some will ignore it. Some will talk the talk, i.e. they will bury flaws in the design and execution of the study under a fully checked list of characterisations. Ben Ouyang makes a similar point when he asks, “what’s the point of reporting standards that might not relate to the problem?”:
Without understanding fundamental design parameters, how can we design? What’s the point of reporting standards that might not relate to the problem? Should I tell you if I had toast for breakfast? Should I tell you if it was raining or sunny during particle injection? 3/5 pic.twitter.com/vSv6Nf7rGt
— Ben Ouyang 👨🔬 (@ben_ouyang) July 5, 2019
So, what are the core issues? What needs to be done?
First, we need to look critically at the scientific record. We need to sort out our field. We need to know which concepts are solid and can be built on, and which are fantasies that were pushed at some point to get funding but have no underpinnings in the real world. This is important and necessary work. It may affect the evaluation of what is worth funding and what is not. It may affect the evaluation of risks, public perception of science and technology (badly needed), and even the approval of clinical trials. It may make all the difference for starting PhD students if they find a critical analysis of the paper their supervisor is asking them to base their PhD project on.
I have started here with 20 reviews of highly cited papers; we need more people joining this effort of critically annotating the literature. The tools are available via PubPeer (have you installed their browser plugin, which tells you when the paper you are reading has comments?). It is not accidental that such tools are not provided by the shiny journals such as Nature Nanotechnology, which are happy to publish some buzz about reproducibility but have very little interest in correcting the scientific record.
We need clarity and critical thinking. We need to evaluate what we have. Take one of the founding ideas of bionano, that nanoparticles are good at crossing biological barriers. Where does this idea come from? What does it actually mean (i.e. what % of particles do that? which barriers are we talking about? “good” compared to what?)? What is the evidence? Is it true? Can it be tested? Are we being good scientists when we make such statements in the introduction of our papers, in press releases, or in grant applications? I would argue, contrary to Ben, that the problem is not that things are complex, but rather that we have been burying simple facts under a ton of mud for about two decades.
In his 2012 paper cited above, Ioannidis describes science on Planet F345, Andromeda Galaxy, Year 3045268. Worryingly, it does not sound exotic at all. Let’s try not to emulate Planet F345.
Planet F345 in the Andromeda galaxy is inhabited by a highly intelligent humanoid species very similar to Homo sapiens sapiens. Here is the situation of science in the year 3045268 in that planet. Although there is considerable growth and diversity of scientific fields, the lion’s share of the research enterprise is conducted in a relatively limited number of very popular fields, each one of them attracting the efforts of tens of thousands of investigators and including hundreds of thousands of papers. Based on what we know from other civilizations in other galaxies, the majority of these fields are null fields—that is, fields where empirically it has been shown that there are very few or even no genuine nonnull effects to be discovered, thus whatever claims for discovery are made are mostly just the result of random error, bias, or both. The produced discoveries are just estimating the net bias operating in each of these null fields. Examples of such null fields are nutribogus epidemiology, pompompomics, social psychojunkology, and all the multifarious disciplines of brown cockroach research—brown cockroaches are considered to provide adequate models that can be readily extended to humanoids. Unfortunately, F345 scientists do not know that these are null fields and don’t even suspect that they are wasting their effort and their lives in these scientific bubbles.
Young investigators are taught early on that the only thing that matters is making new discoveries and finding statistically significant results at all cost. In a typical research team at any prestigious university in F345, dozens of pre-docs and post-docs sit day and night in front of their powerful computers in a common hall perpetually data dredging through huge databases. Whoever gets an extraordinary enough omega value (a number derived from some sort of statistical selection process) runs to the office of the senior investigator and proposes to write and submit a manuscript. The senior investigator gets all these glaring results and then allows only the manuscripts with the most extravagant results to move forward. The most prestigious journals do the same. Funding agencies do the same. Universities are practically run by financial officers that know nothing about science (and couldn’t care less about it), but are strong at maximizing financial gains. University presidents, provosts, and deans are mostly puppets good enough only for commencement speeches and other boring ceremonies and for making enthusiastic statements about new discoveries of that sort made at their institutions. Most of the financial officers of research institutions are recruited after successful careers as real estate agents, managers in supermarket chains, or employees in other corporate structures where they have proven that they can cut cost and make more money for their companies. Researchers advance if they make more extreme, extravagant claims and thus publish extravagant results, which get more funding even though almost all of them are wrong.
No one is interested in replicating anything in F345. Replication is considered a despicable exercise suitable only for idiots capable only of me-too mimicking, and it is definitely not serious science. The members of the royal and national academies of science are those who are most successful and prolific in the process of producing wrong results. Several types of research are conducted by industry, and in some fields such as clinical medicine this is almost always the case. The main motive is again to get extravagant results, so as to license new medical treatments, tests, and other technology and make more money, even though these treatments don’t really work. Studies are designed in a way so as to make sure that they will produce results with good enough omega values or at least allow some manipulation to produce nice-looking omega values.
Simple citizens are bombarded from the mass media on a daily basis with announcements about new discoveries, although no serious discovery has been made in F345 for many years now. Critical thinking and questioning is generally discredited in most countries in F345.
The example of the uptake of nanoparticles in cells is a case in point. Endocytosis was literally discovered and initially characterised using gold colloids as electron microscopy contrast agents in the 1950s and 1960s, yet half a century later, tens of thousands of articles write that the uptake of nanoparticles in cells is a mystery that urgently needs to be investigated.