Social and economic aspects of science

Nanomedicine on Planet F345

Last year, Matthew Faria et al. published Minimum information reporting in bio–nano experimental literature, introducing a checklist (MIRIBEL) of experimental characterisations that should accompany any new research paper. Twelve months later, the same journal has published 22 (!!!) short opinion pieces. As I feel particularly generous (and a bit facetious) today, I shall summarise those 22 pieces in two sentences.

  1. There are authors who feel that MIRIBEL is great and should be implemented, although really colleagues should also consider using these other characterisation techniques (that they happen to be developing/proposing in their lab/European network [INSERT ACRONYM]).
  2. There are authors who think that there is a risk that MIRIBEL standardisation will stifle creativity and innovation (and they also regret that the MIRIBEL authors haven’t cited their editorials deploring irreproducible research).

Thankfully, there are more interesting takes from young researchers on Twitter (why do we need journals again?).

Wilson Poon remarks that the sheer number of acronyms for nano-bio-related guidelines and databases is insane; he remains unconvinced that making new guidelines is the best way to address the current “significant barriers to progress in [nanomedicine]”, and, even more damningly, he notes the hypocrisy of many researchers in the field “[who] just talk the talk, and not walk the walk”.

Shrey Sindhwani demands quantification of what is happening to particles at a cellular and sub-cellular level, multiple lines of evidence and the use of appropriate biological controls. He makes two other really important points: 1) he demands critical discussion of what is in the literature; 2) he says we need replications: multiple groups should try to reproduce core concepts of the field for their systems. This involves mechanistic studies of what the body does to your specific formulation. This will define the scope of a broad concept and its applicability.

I largely agree with Wilson and Shrey. MIRIBEL may be well intentioned (and so are most of the responses), but its authors are not digging in the right place, and that is because they might otherwise find skeletons that they’d rather not find. This is very explicit in the original MIRIBEL paper:

… our intention is not to criticize existing work or suggest a specific direction for future research. The absence of standards and consistency in experimental reporting is a systemic problem across the field, and our own work is no exception.

God forbid we criticise existing work. If we start there, people might even consider criticising our own work, and then where will it stop? We might have to answer difficult questions at conferences?! That would be scientific terrorism.

Better reporting guidelines are not the solution because they do not address the core of the problems we are facing. In his 2012 paper entitled “Why Science Is Not Necessarily Self-Correcting”, John P. A. Ioannidis noted that

Checklists for reporting may promote spurious behaviors from authors who may write up spurious methods and design details simply to satisfy the requirements of having done a good study that is reported in full detail; flaws in the design and execution of the study may be buried under such normative responses.

This is exactly what will happen with MIRIBEL. Some will ignore it. Some will talk the talk, i.e. they will bury flaws in the design and execution of the study under a fully checked list of characterisations. Ben Ouyang makes a similar point when he asks, “what’s the point of reporting standards that might not relate to the problem?”

So, what are the core issues? What needs to be done?

First, we need to look critically at the scientific record. We need to sort out our field. We need to know which concepts are solid enough to build on and which are fantasies that were pushed at some point to get funding but have no underpinning in the real world. This is important and necessary work. It may affect evaluations of what is and is not worth funding. It may affect evaluations of risk, public perception of science and technology (badly needed) and even the approval of clinical trials. It may make all the difference for a starting PhD student if she finds a critical analysis of the paper her supervisor is asking her to base her PhD project on.

I have started here with 20 reviews of highly cited papers; we need more people joining in this effort of critically annotating the literature. The tools are available via PubPeer (have you installed their browser plugin that tells you when you are reading a paper which has comments available?). It is not accidental that such tools are not provided by shiny journals such as Nature Nanotechnology, which are happy to publish some buzz about reproducibility but have very little interest in correcting the scientific record.

We need clarity and critical thinking. We need to evaluate what we have. Take one of the founding ideas of bionano, that nanoparticles are good at crossing biological barriers. Where does this idea come from? What does it actually mean (i.e. what % of particles do that? which barriers are we talking about? “good” compared to what?)? What is the evidence? Is it true? Can it be tested? Are we being good scientists when we make such statements in the introduction of our papers, in press releases or in grant applications? I would argue, contrary to Ben, that the problem is not that things are complex, but rather that we have been burying simple facts under a ton of mud for about two decades [1].

In his 2012 paper already cited above, Ioannidis describes science on Planet F345, Andromeda Galaxy, Year 3045268. It sounds worryingly familiar. Let’s try not to emulate Planet F345.

Planet F345 in the Andromeda galaxy is inhabited by a highly intelligent humanoid species very similar to Homo sapiens sapiens. Here is the situation of science in the year 3045268 in that planet. Although there is considerable growth and diversity of scientific fields, the lion’s share of the research enterprise is conducted in a relatively limited number of very popular fields, each one of that attracting the efforts of tens of thousands of investigators and including hundreds of thousands of papers. Based on what we know from other civilizations in other galaxies, the majority of these fields are null fields—that is, fields where empirically it has been shown that there are very few or even no genuine nonnull effects to be discovered, thus whatever claims for discovery are made are mostly just the result of random error, bias, or both. The produced discoveries are just estimating the net bias operating in each of these null fields. Examples of such null fields are nutribogus epidemiology, pompompomics, social psychojunkology, and all the multifarious disciplines of brown cockroach research—brown cockroaches are considered to provide adequate models that can be readily extended to humanoids. Unfortunately, F345 scientists do not know that these are null fields and don’t even suspect that they are wasting their effort and their lives in these scientific bubbles.

Young investigators are taught early on that the only thing that matters is making new discoveries and finding statistically significant results at all cost. In a typical research team at any prestigious university in F345, dozens of pre-docs and post-docs sit day and night in front of their powerful computers in a common hall perpetually data dredging through huge databases. Whoever gets an extraordinary enough omega value (a number derived from some sort of statistical selection process) runs to the office of the senior investigator and proposes to write and submit a manuscript. The senior investigator gets all these glaring results and then allows only the manuscripts with the most extravagant results to move forward. The most prestigious journals do the same. Funding agencies do the same. Universities are practically run by financial officers that know nothing about science (and couldn’t care less about it), but are strong at maximizing financial gains. University presidents, provosts, and deans are mostly puppets good enough only for commencement speeches and other boring ceremonies and for making enthusiastic statements about new discoveries of that sort made at their institutions. Most of the financial officers of research institutions are recruited after successful careers as real estate agents, managers in supermarket chains, or employees in other corporate structures where they have proven that they can cut cost and make more money for their companies. Researchers advance if they make more extreme, extravagant claims and thus publish extravagant results, which get more funding even though almost all of them are wrong.

No one is interested in replicating anything in F345. Replication is considered a despicable exercise suitable only for idiots capable only of me-too mimicking, and it is definitely not serious science. The members of the royal and national academies of science are those who are most successful and prolific in the process of producing wrong results. Several types of research are conducted by industry, and in some fields such as clinical medicine this is almost always the case. The main motive is again to get extravagant results, so as to license new medical treatments, tests, and other technology and make more money, even though these treatments don’t really work. Studies are designed in a way so as to make sure that they will produce results with good enough omega values or at least allow some manipulation to produce nice-looking omega values.

Simple citizens are bombarded from the mass media on a daily basis with announcements about new discoveries, although no serious discovery has been made in F345 for many years now. Critical thinking and questioning is generally discredited in most countries in F345.

[1] The example of the uptake of nanoparticles in cells is a case in point. Endocytosis was literally discovered and initially characterized using gold colloids as electron microscopy contrast agents in the 1950s and 1960s, yet half a century later, tens of thousands of articles state that the uptake of nanoparticles in cells is a mystery that urgently needs to be investigated.

Time to reclaim the values of science

This post is dedicated to Paul Picard, my granddad, who was the oldest reader of my blog. He was 17 (and Jewish) in 1939, so he did not get the chance to go to university. He passed away on the first of October 2016. More on his life here (in French), and some of his paintings (and several that he inspired in his grandchildren and great-grandchildren). The header of my blog is from a painting he did for me.

A few recent events of vastly different importance eventually triggered this post.

A (non-scientist) friend asked my expert opinion about a campaign by a French environmental NGO seeking to raise money to challenge the use of nanoparticles such as E171 in foods. E171 receives episodic alarmist coverage, some of which was debunked by Andrew Maynard in 2014. The key dramatic science quote of the present campaign, “avec le dioxyde de titane, on se retrouve dans la même situation qu’avec l’amiante il y a 40 ans {with titanium dioxide, we are in the same situation as we were with asbestos 40 years ago}”, is from Professor Jürg Tschopp. It comes from an old media interview (2011, RTS) that followed a publication in PNAS. We cannot ask Professor Tschopp what he thinks of the use of this five-year-old quote: unfortunately, he died shortly after the PNAS publication. The interpretation of that article has since been questioned: it seems likely that the observed toxicity was due to endotoxin contamination rather than to the nanomaterials themselves. On the topic of nanoparticles, there is a high level of misinformation and fear that finds its origins (in part) in how the scientific enterprise is run today. The incentives are to publish dramatic results in high-impact-factor journals, which leads many scientists to vastly exaggerate both the risks and the potential of their nanomaterials of choice. The result is that we build myths instead of solid, reproducible foundations; we spread disproportionate fears and hopes instead of sharing questions and knowledge. When it comes to E171 additives in foods, the consequences of basing decisions on flawed evidence are limited. After all, even if the campaign is successful, it will only result in M&M’s not being quite as shiny.

I have been worried for some time that the crisis of the scientific enterprise illustrated by this anecdote may affect the confidence of the public in science. In a way, it should: the problems are real, they lead to a waste of public money, and they slow down progress. In another way, technological (including healthcare) progress based on scientific findings has been phenomenal, and there are so many critical issues where expertise and evidence are needed to face humanity’s pressing problems that such a loss of confidence would have grave detrimental effects. Last week, in the Spectator, Donna Laframboise published an article entitled “How many scientific papers just aren’t true? Enough that basing government policy on ‘peer-reviewed studies’ isn’t all it’s cracked up to be“. The article starts with a rather typical and justified critique of peer review, citing (peer-reviewed) evidence, and then moves swiftly to climate change, seeking to undermine the enormous and solid body of work on man-made climate change. It just happens that Donna Laframboise works for “a think-tank that has become the UK’s most prominent source of climate-change denial“.

One of the Brexit leaders famously declared that “people in this country have had enough of experts”. A Conservative MP declared on Twitter that he “Personally, never thought of academics as ‘experts’. No experience of the real world.” Yesterday, Donald Trump, a climate change denier, was elected president of the USA: “The stakes for the United States, and the world, are enormous” (Michael Greshko writing for National Geographic). These are attacks not just on experts, but on knowledge itself, and the attacks extend to other values dear to science, encapsulated in the “Principle of the Universality of Science“:

Implementation of the Principle of the Universality of Science is fundamental to scientific progress. This Principle embodies freedom of movement, association, expression and communication for scientists, as well as equitable access to data, information and research materials. These freedoms are highly valued by the scientific community and generally well accepted by governments and policy makers. Hence, scientists are normally able to travel to international meetings, associate with colleagues and freely express their opinions regardless of factors such as ethnic origin, religion, citizenship, language, political stance, gender, sex or age. However, this is not always the case and so it is important to have mechanisms in place at the local, national and international levels to monitor compliance with this principle and intervene when breaches occur. The International Council for Science (ICSU) and its global network of Members provide one such mechanism to which individual scientists can turn for assistance.

The Principle of the Universality of Science focuses on scientific rights and freedoms but implicit in these are a number of responsibilities. Individual scientists have a responsibility to conduct their work with honesty, integrity, openness and respect, and a collective responsibility to maximize the benefit and minimize the misuse of science for society as a whole. Balancing freedoms and responsibilities is not always a straightforward process. For example, openness and sharing of data and materials may be in conflict with a scientist’s desire to maintain a competitive edge or an employer’s requirements for protecting intellectual property. In some situations, for example during wars, or in specific areas of research, such as development of global surveillance technologies, the appropriate balance between freedoms and responsibilities can be extremely difficult to define and maintain.

The benefits of science for human well-being and development are widely accepted. The increased average human lifespan in most parts of the world over the past century can be attributed, more or less directly, to scientific progress. At the same time, it has to be acknowledged that technologies arising from science can inadvertently have adverse effects on people and the environment. Moreover, the deliberate misuse of science can potentially have catastrophic effects.

There is an increasing recognition by the scientific community that it needs to more fully engage societal stakeholders in explaining, developing and implementing research agendas. A central aspect of ensuring the freedoms of scientists and the longer term future of science is not only conducting science responsibly but being able to publicly demonstrate that science is being conducted responsibly. Individual scientists, their associated institutions, employers, funders and representative bodies, such as ICSU, have a shared role in both protecting the freedoms and propagating the responsibilities of scientists. This is a role that needs to be explicitly acknowledged and embraced. It is likely to be an increasingly demanding role in the future.

It is urgent that we, scientists, reclaim these values of humanity, integrity and openness and make them central (and visibly so) in our universities. To ensure this transformation occurs, we must act individually and as groups so that scientists are evaluated on their application of these principles. The absurd publication system whereby we (the taxpayer) pay millions of £$€ to commercial publishers to hide (rather than share) results that we (scientists) have acquired, evaluated and edited must end. There are some very encouraging and inspiring open science moves coming from the EU which aim explicitly at making “research more open, global, collaborative, creative and closer to society“. We must embrace and amplify these moves in our universities. And, as many, e.g. @sazzels19 and @Stephen_Curry, have said, now more than ever we need to do public engagement work, not with an advertising aim, but with a truly humanist agenda of encouraging curiosity, critical thinking, and debates around technological progress and the wonders of the world.

 

The Internet of NanoThings

“Nanosensors and the Internet of Nanothings” ranks first in a list of ten “technological innovations of 2016” established by no less than the World Economic Forum Meta-Council on Emerging Technologies [sic].

The World Economic Forum, best known for its meetings in Davos, is establishing this list because:

New technology is arriving faster than ever and holds the promise of solving many of the world’s most pressing challenges, such as food and water security, energy sustainability and personalized medicine. In the past year alone, 3D printing has been used for medical purposes; lighter, cheaper and flexible electronics made from organic materials have found practical applications; and drugs that use nanotechnology and can be delivered at the molecular level have been developed in medical labs.

However, uninformed public opinion, outdated government and intergovernmental regulations, and inadequate existing funding models for research and development are the greatest challenges in effectively moving new technologies from the research lab to people’s lives. At the same time, it has been observed that most of the global challenges of the 21st century are a direct consequence of the most important technological innovations of the 20th century.

Understanding the implications of new technologies are crucial both for the timely use of new and powerful tools and for their safe integration in our everyday lives. The objective of the Meta-council on Emerging Technologies is to create a structure that will be key in advising decision-makers, regulators, business leaders and the public globally on what to look forward to (and out for) when it comes to breakthrough developments in robotics, artificial intelligence, smart devices, neuroscience, nanotechnology and biotechnology.

Given the global reach and influence of the WEF, it is indeed perfectly believable that decision-makers, regulators, business leaders and the public could be influenced by this list.

Believable, and therefore rather worrying, for at least the first item is, to stay polite, complete and utter nonsense backed by zero evidence. The argument is so weak, disjointed and illogical that it is hard to challenge. Here are some of the claims made to support the idea that “Nanosensors and the Internet of Nanothings” is a transformative technological innovation of 2016.

Scientists have started shrinking sensors from millimeters or microns in size to the nanometer scale, small enough to circulate within living bodies and to mix directly into construction materials. This is a crucial first step toward an Internet of Nano Things (IoNT) that could take medicine, energy efficiency, and many other sectors to a whole new dimension.

Except that there is no nanoscale sensor that can circulate through the body and communicate with the internet (and does anyone know why sensors would have to be nanoscale to be mixed into construction materials?).

The next paragraph seizes on synthetic biology:

Some of the most advanced nanosensors to date have been crafted by using the tools of synthetic biology to modify single-celled organisms, such as bacteria. The goal here is to fashion simple biocomputers [Scientific American paywall] that use DNA and proteins to recognize specific chemical targets, store a few bits of information, and then report their status by changing color or emitting some other easily detectable signal. Synlogic, a start-up in Cambridge, Mass., is working to commercialize computationally enabled strains of probiotic bacteria to treat rare metabolic disorders.

What is the link between engineered bacteria and the internet? None. Zero. I am sorry to inform the experts of the WEF that bacteria, even genetically engineered ones, do not have iPhones: they won’t tweet how they are doing from inside your gut.

I could go on but will stop. Why is such nonsense presented as expert opinion?

Please read this leaflet carefully before taking to Twitter


1. Name of the medicinal product

TWITTIVIR 5% w/w cream

2. Qualitative and quantitative composition

TWITTIVIR 5% medical grade w/w cream (cis:trans isomer 95:5)

3. Pharmaceutical form

Cream for topical application (usually to the finger tips).

4. Clinical particulars

 4.1 Therapeutic indications

TWITTIVIR 5% w/w cream is indicated for the treatment of Anemic Network Infection, Grant Blood Clot, Publication Circulatory Virus and Altmetric Intestinal Flu.

4.2 Posology and method of administration

TWITTIVIR 5% w/w cream is suitable for adults, children of 13 years of age and above, and the elderly. TWITTIVIR 5% w/w cream is for external use only and should not be applied to broken skin, mucous membranes or near the eyes.

4.3 Contraindications

TWITTIVIR 5% w/w cream is contra-indicated in subjects with known hypersensitivity to the product and its components. (group 1)

TWITTIVIR 5% w/w cream is contra-indicated in highly obsessive subjects. (group 2)

TWITTIVIR 5% w/w cream is strongly contra-indicated in subjects who cannot resist a Twitter spat with Louise Mensch. (group 3)

4.9 Overdose

There are rare cases of overdosage of TWITTIVIR 5% w/w cream, usually in patients from group 3 above. The effects can be serious, leading to grumpiness and even, in extreme cases (in parents), child neglect. In such cases, the treatment should be immediately stopped.

 

9th of May is Open Access day at the IIB

I am delighted to announce that we will have two external speakers on the 9th of May:

Stephen Curry, Open Access for Academics – the problems and the potential

Michelle Brook,  The cost of academic publishing

 

The talks will take place from noon in LT2, Biological Sciences building. They will be followed by sandwiches and discussions in the common room. University Librarian Phil Sykes will join us too (see him talking about open access here).
At 2 pm, IIB academics will also be invited to a University of Liverpool Open Access roadshow* (SR6). See below for details.

Stephen Curry


Stephen Curry is Professor of Structural Biology at Imperial College London. He is an open access advocate. He blogs at Reciprocal Space and at the Guardian. You can find him on Twitter @Stephen_Curry.


Michelle Brook

Michelle Brook blogs at Quantumplations and works with the Open Knowledge Foundation, where she recently published a blog post about the cost of academic publishing. You can find her on Twitter @MLBrook.

 

 

* University of Liverpool Open Access roadshow

Presentations from the Library (Martin Wolf) and Research Policy (Jane Rees) will explain current funder requirements for OA and show how the new institutional repository can help make your publications OA-compliant. In addition, we will be highlighting some of the issues to be considered in developing an Institutional Policy on Open Access. There will be time for questions during the session.

The proof by Twitter – responses to Schekman’s move prove his point

A lot has already been written about Randy Schekman’s column in the Guardian. Its first paragraph criticizes the flawed incentive structure of the professional research world.

I am a scientist. Mine is a professional world that achieves great things for humanity. But it is disfigured by inappropriate incentives. The prevailing structures of personal reputation and career advancement mean the biggest rewards often follow the flashiest work, not the best. Those of us who follow these incentives are being entirely rational – I have followed them myself – but we do not always best serve our profession’s interests, let alone those of humanity and society. […]

The importance as well as the (many) shortcomings of his piece and the potential conflict of interest have been noted. One feature of the response on Twitter is that some of the critique ends up proving his point, i.e. that the pressure to publish in such journals is enormous and has a direct (and disproportionate) effect on careers.

 

At least, that is something everybody – critics or supporters of Schekman’s move – seems to agree upon…

The stripy controversy as a window into the scientific process*

* h/t Retraction Watch for the title

Stripy Nanoparticles Revisited was published a year ago.

Science is self-correcting? [part one: doing the right thing]

At the core of the scientific process as we imagine it, there is the principle that science is self-correcting. Errors are made, but then, if they have any bearing on future studies, they get corrected. That principle failed (and continues to fail) dramatically. Consider the following.

The first “stripy nanoparticles” article was published in 2004 in Nature Materials. The flaws in that article were many and obvious, but the colleagues most qualified to identify them, i.e. SPM experts, were not interested enough to react: they privately laughed but publicly did nothing. Correcting the flaws in the literature was “somebody else’s problem“.

In 2005, a graduate student in Francesco Stellacci’s lab at MIT also recognized that the observed stripes were a scanning artifact. He did control experiments and showed that the same stripes occur in the absence of particles on the same instrument with the same settings. He also did simulations that explain how those patterns can be generated by the STM feedback system.  At that point, retracting the Nature Materials paper would have been “doing the right thing“. It was not retracted.

In 2006, the first article was followed by a second article (in J Am Chem Soc), which acknowledges the presence of the artifact but desperately tries to demonstrate that the features on the particles are the “reality” (with no fewer than 18 figures). The article is surreal – it makes the case that the artifact is present everywhere except on the particles, where those stripes, which look just like the artifact, are instead the “reality”. An extraordinary case of confirmation bias.

What happened in 2006 is the exact opposite of self-correction. Predrag’s demonstration was simple and clear but remained confined within MIT [it only became public when Predrag contacted me after the publication of our article last year]. Instead, in the official scientific record, no doubts were raised and an impressive pile of stripy figures and articles started to accumulate. That pile continues to grow at an impressive rate, even after the publication of our article. The count is now at about 35 articles from the same group, with more striking examples of confirmation bias.

Science is self-correcting? [part two: crossing the line]

Universities, journals and funding agencies have proclaimed ethical rules which thankfully generally align. One of those shared rules is that re-use of text or figures (even your own text or figures) is not allowed without proper attribution. If this rule is broken, you would therefore expect rapid action to correct the scientific record.

The stripy controversy includes several examples of data re-use, and it is informative to look at how editors and institutions have responded to those cases and what the final (?) outcomes have been.

Nature Materials was informed of this issue as early as 2009, since my letter to the editor accompanying our submission did mention that “The same image of the same nanoparticle is used in both the ChemComm and the Nature Materials articles.” That did not lead to any action (and our submission to Nature Materials was rejected). At the end of 2012, with the increased scrutiny brought by the controversy, several other data re-use cases were discovered. Editors of the corresponding journals were contacted by Dave Fernig. MIT and EPFL were also informed of these concerns, as well as of other concerns regarding data sharing. The initial responses from the editors are reported by Dave here. Eventually, corrections were issued at both PNAS and Nature Materials. Both of these journals are members of the Committee on Publication Ethics (COPE).

The Journal of Scanning Probe Microscopy, however, is not a member of COPE, and its editor clearly indicated (within minutes of Dave’s email) that he had no intention of enforcing any ethical policies. Perhaps more worryingly, the EPFL investigation has closed and, while it has been successful in ensuring that Francesco Stellacci complies with his obligation to share data, it has not led to the correction of the scientific record in the case of the Journal of Scanning Probe Microscopy article. Three figures of that article contain image re-use (from two different articles) without attribution, and the analyses presented in other figures correspond to data from yet a third article. The extent of data re-use in this article would probably warrant a full retraction in a journal that follows COPE guidelines. However, the paper has not been retracted; instead, it is still cited in all recent publications as evidence of thorough and rigorous statistical analysis of the stripes [more on this here for the interested reader].

I guess that the conclusion for this part is somewhat nuanced: the system kind of worked for PNAS and Nature Materials; the editors eventually did the right thing and followed COPE guidelines. The system did not work for the more obscure Journal of Scanning Probe Microscopy, and EPFL/MIT/COPE rules were not enforced (that may still happen, or maybe there is an explanation/justification for these instances of data re-use, in which case I will happily publish a correction to this post and other relevant posts as soon as it is communicated to me). As an aside, I note that the Royal Society of Chemistry is a member of COPE and that, therefore, the Nanoscale Editor-in-Chief (European office) is in charge of applying COPE guidelines to Nanoscale authors.

Science is self-correcting? [part three: critiquing is not nice]

For science to be self-correcting, there should be an incentive for authors to publish failed replication studies and disseminate analyses which identify fundamental flaws in the peer-reviewed literature. Quite clearly, there is currently no such incentive. I do sometimes get asked why I engaged in this controversy. I have no personal conflict with Francesco Stellacci. I had not met him when I submitted a technical comment to Science, and I had still not met him when I submitted the first version of Stripy Nanoparticles Revisited to Nature Materials. My reason to engage was simple: I had identified flaws in articles which had a significant impact on my field of research, and I concluded that it was important to correct the record so that we could build on more solid foundations. I was conscious it would be difficult, but I certainly had not predicted that it would take three and a half years to publish our manuscript. I also had not predicted the extravagant events of the past 12 months, e.g. Predrag contacting me, a Nature editor trolling my blog anonymously, the data re-use mentioned above, etc.

We need a system where confrontation of ideas is encouraged and is part of the norm, not one where it takes enormous amounts of effort and an unreasonable length of time to publish a “revisited” paper. We need a system where engagement in scientific discussions is rewarded, not one where criticism (online or through the peer-reviewed literature) is seen as not nice “because we are all humans“. The latter quote is a reference to the now famous ACS Nano editorial (excellent responses at Chembark, Nature and PubPeer). Various reasons for this specific editorial at this specific time have been proposed, from ongoing discussions at PubPeer of papers from one of the ACS Nano editors to the uncovering by bloggers of recent cases of fraud. Paul Weiss, editor-in-chief of ACS Nano, has denied that any single blogger was targeted but, like Paul Bracher, I certainly took it as a personal attack, especially since the latest stripy article was published in the same journal issue.

The “critiquing is not nice” line is not very different from the initial response to Monica Byrne naming Bora Zivkovic as her harasser: this is inappropriate, not nice. It is a standard response to expressions of concern about sexism or racism: we can’t talk about this, it is too serious and too damaging (to the reputation of the offender). There is one additional reason to stay silent. This is from Priya Shetty’s (excellent) piece about sexual harassment and the “deafening silence” that surrounds it:

Other than in whispered conversations in coffee shops or quiet meeting rooms, I’ve never known any woman to name the man involved, for the same reason that most women don’t speak up about sexual harassment in the workplace in general – science and journalism are both fields in which funding and job opportunities are increasingly precarious, and women stay silent for fear that their careers will suffer.

The same peer pressure applies in science around cases of bad science or misconduct and the grey area in between.

Conclusion?

None. You write it.