Science and ethics

Time to reclaim the values of science

This post is dedicated to Paul Picard, my granddad, who was the oldest reader of my blog. He was 17 (and Jewish) in 1939, so he never got the chance to go to university. He passed away on the first of October 2016. More on his life here (in French), and some of his paintings (along with several he inspired his grandchildren and great-grandchildren to make). The header of my blog is from a painting he did for me.

A few recent events of vastly different importance eventually triggered this post.

A (non-scientist) friend asked my expert opinion about a campaign by a French environmental NGO seeking to raise money to challenge the use of nanoparticles such as E171 in foods. E171 receives episodic alarmist coverage, some of which was debunked by Andrew Maynard in 2014. The present campaign’s key dramatic science quote, “avec le dioxyde de titane, on se retrouve dans la même situation qu’avec l’amiante il y a 40 ans” {with titanium dioxide, we are in the same situation as we were with asbestos 40 years ago}, is from Professor Jürg Tschopp. It comes from an old media interview (2011, RTS) that followed a publication in PNAS. We cannot ask Professor Tschopp what he thinks of the use of this five-year-old quote: unfortunately, he died shortly after the PNAS publication. The interpretation of that article has since been questioned: it seems likely that the observed toxicity was due to endotoxin contamination rather than to the nanomaterials themselves. On the topic of nanoparticles, there is a high level of misinformation and fear that finds its origins (in part) in how the scientific enterprise is run today. The incentives are to publish dramatic results in high-impact-factor journals, which leads many scientists to vastly exaggerate both the risks and the potential of their nanomaterials of choice. The result is that we build myths instead of solid, reproducible foundations; we spread disproportionate fears and hopes instead of sharing questions and knowledge. When it comes to E171 additives in foods, the consequences of basing decisions on flawed evidence are limited. After all, even if the campaign is successful, it will only result in M&M’s not being quite as shiny.

I have been worried for some time that the crisis of the scientific enterprise illustrated by this anecdote may affect public confidence in science. In a way, it should: the problems are real, they waste public money, and they slow down progress. In another way, technological (including healthcare) progress based on scientific findings has been phenomenal, and there are so many critical issues where expertise and evidence are needed to face humanity’s pressing problems that such a loss of confidence would have grave detrimental effects. Last week, in the Spectator, Donna Laframboise published an article entitled “How many scientific papers just aren’t true? Enough that basing government policy on ‘peer-reviewed studies’ isn’t all it’s cracked up to be“. The article starts with a rather typical and justified critique of peer review, citing (peer-reviewed) evidence, and then moves swiftly to climate change, seeking to undermine the enormous, solid body of work on man-made climate change. It just so happens that Donna Laframboise works for “a think-tank that has become the UK’s most prominent source of climate-change denial“.

One of the Brexit leaders famously declared that “people in this country have had enough of experts”. A Conservative MP declared on Twitter: “Personally, never thought of academics as ‘experts’. No experience of the real world.” Yesterday, Donald Trump, a climate change denier, was elected president of the USA: “The stakes for the United States, and the world, are enormous” (Michael Greshko writing for National Geographic). These are attacks not just on experts, but on knowledge itself, and the attacks extend to other values dear to science, encapsulated in the “Principle of the Universality of Science“:

Implementation of the Principle of the Universality of Science is fundamental to scientific progress. This Principle embodies freedom of movement, association, expression and communication for scientists, as well as equitable access to data, information and research materials. These freedoms are highly valued by the scientific community and generally well accepted by governments and policy makers. Hence, scientists are normally able to travel to international meetings, associate with colleagues and freely express their opinions regardless of factors such as ethnic origin, religion, citizenship, language, political stance, gender, sex or age. However, this is not always the case and so it is important to have mechanisms in place at the local, national and international levels to monitor compliance with this principle and intervene when breaches occur. The International Council for Science (ICSU) and its global network of Members provide one such mechanism to which individual scientists can turn for assistance. The Principle of the Universality of Science focuses on scientific rights and freedoms but implicit in these are a number of responsibilities. Individual scientists have a responsibility to conduct their work with honesty, integrity, openness and respect, and a collective responsibility to maximize the benefit and minimize the misuse of science for society as a whole. Balancing freedoms and responsibilities is not always a straightforward process. For example, openness and sharing of data and materials may be in conflict with a scientist’s desire to maintain a competitive edge or an employer’s requirements for protecting intellectual property. In some situations, for example during wars, or in specific areas of research, such as development of global surveillance technologies, the appropriate balance between freedoms and responsibilities can be extremely difficult to define and maintain. 
The benefits of science for human well-being and development are widely accepted. The increased average human lifespan in most parts of the world over the past century can be attributed, more or less directly, to scientific progress. At the same time, it has to be acknowledged that technologies arising from science can inadvertently have adverse effects on people and the environment. Moreover, the deliberate misuse of science can potentially have catastrophic effects. There is an increasing recognition by the scientific community that it needs to more fully engage societal stakeholders in explaining, developing and implementing research agendas. A central aspect of ensuring the freedoms of scientists and the longer term future of science is not only conducting science responsibly but being able to publicly demonstrate that science is being conducted responsibly. Individual scientists, their associated institutions, employers, funders and representative bodies, such as ICSU, have a shared role in both protecting the freedoms and propagating the responsibilities of scientists. This is a role that needs to be explicitly acknowledged and embraced. It is likely to be an increasingly demanding role in the future.

It is urgent that we, scientists, reclaim these values of humanity, integrity and openness and make them central (and visibly so) in our universities. To ensure this transformation occurs, we must act individually and as groups so that scientists are evaluated on their application of these principles. The absurd publication system whereby we (the taxpayer) pay millions of £$€ to commercial publishers to hide results that we (scientists) have acquired, evaluated and edited must end. There are some very encouraging and inspiring open science moves coming from the EU which aim explicitly at making “research more open, global, collaborative, creative and closer to society“. We must embrace and amplify these moves in our universities. And, as many, e.g. @sazzels19 and @Stephen_curry, have said, now more than ever we need to do public engagement work, not with an advertising aim, but with a truly humanist agenda of encouraging curiosity, critical thinking, and debates around technological progress and the wonders of the world.


The Internet of NanoThings

“Nanosensors and the Internet of Nanothings” ranks 1st in a list of ten “technological innovations of 2016” established by no less than the World Economic Forum Meta-Council on Emerging Technologies [sic].

The World Economic Forum, best known for its meetings in Davos, is establishing this list because:

New technology is arriving faster than ever and holds the promise of solving many of the world’s most pressing challenges, such as food and water security, energy sustainability and personalized medicine. In the past year alone, 3D printing has been used for medical purposes; lighter, cheaper and flexible electronics made from organic materials have found practical applications; and drugs that use nanotechnology and can be delivered at the molecular level have been developed in medical labs.

However, uninformed public opinion, outdated government and intergovernmental regulations, and inadequate existing funding models for research and development are the greatest challenges in effectively moving new technologies from the research lab to people’s lives. At the same time, it has been observed that most of the global challenges of the 21st century are a direct consequence of the most important technological innovations of the 20th century.

Understanding the implications of new technologies is crucial both for the timely use of new and powerful tools and for their safe integration in our everyday lives. The objective of the Meta-council on Emerging Technologies is to create a structure that will be key in advising decision-makers, regulators, business leaders and the public globally on what to look forward to (and out for) when it comes to breakthrough developments in robotics, artificial intelligence, smart devices, neuroscience, nanotechnology and biotechnology.

Given the global reach and influence of the WEF, it is indeed perfectly believable that decision-makers, regulators, business leaders and the public could be influenced by this list.

Believable, and therefore rather worrying, for at least the first item is, to stay polite, complete and utter nonsense backed by zero evidence. The argument is so weak, disjointed and illogical that it is hard to challenge. Here are some of the claims made to support the idea that “Nanosensors and the Internet of Nanothings” is a transformative technological innovation of 2016.

Scientists have started shrinking sensors from millimeters or microns in size to the nanometer scale, small enough to circulate within living bodies and to mix directly into construction materials. This is a crucial first step toward an Internet of Nano Things (IoNT) that could take medicine, energy efficiency, and many other sectors to a whole new dimension.

Except that there is no nanoscale sensor that can circulate through the body and communicate with the internet (and does anyone know why sensors would have to be nanoscale to be mixed into construction materials?).

The next paragraph seizes on synthetic biology:

Some of the most advanced nanosensors to date have been crafted by using the tools of synthetic biology to modify single-celled organisms, such as bacteria. The goal here is to fashion simple biocomputers [Scientific American paywall] that use DNA and proteins to recognize specific chemical targets, store a few bits of information, and then report their status by changing color or emitting some other easily detectable signal. Synlogic, a start-up in Cambridge, Mass., is working to commercialize computationally enabled strains of probiotic bacteria to treat rare metabolic disorders.

What is the link between engineered bacteria and the internet? None. Zero. I am sorry to inform the experts of the WEF that bacteria, even genetically engineered ones, do not have iPhones: they won’t tweet how they are doing from inside your gut.

I could go on but will stop. Why is such nonsense presented as expert opinion?

Lab Times: “Flare up over SmartFlares”

Stephen Buckingham interviewed me for Lab Times

On the face of it, Millipore’s SmartFlares are meant to be a tool cell biologists dream of – a way of measuring levels of specific RNA in real time in living cells. But does it really work? Raphaël Lévy and Gal Haimovich are in doubt.

Raphaël Lévy, Senior Lecturer in Biochemistry at the University of Liverpool, UK, was so unconvinced about SmartFlares that he decided to put the technique directly to the test (The Spherical Nucleic Acids mRNA Detection Paradox, Mason et al. ScienceOpen Research). As a result, Lévy has found himself at the centre of a row; not only over whether the technique actually does the job but as to whether it can actually work, at all – even in principle. Lab Times asked Lévy why he is in doubt that SmartFlares really work.

Lab Times:  What’s all the fuss about SmartFlares?

Read it all here (pages 50-51).

I can’t resist also quoting this bit of the final paragraph…

In interview, Lévy is reasonable and measured in tone. But he is no stranger to controversy and can deliver fierce polemic with style.

If you have not yet done so, you should also check Leonid Schneider’s earlier and more complete investigation.

The F**** word

I am talking of course of the word fraud.

It is generally understood that the f**** word is best avoided in polite company, especially when talking about the work of colleagues published in peer-reviewed journals. If you really must (in which case, you’d better be critical but fair), you should instead simply point to the facts (the copy-and-paste similarities between bands, etc.; you name it) but avoid explicitly stating the implication that fraud has happened.

Hopefully, from that point, journal editors and scientific institutions whose main mission is service to science and its integrity will take over and will sort out the mess.

Except that it does not happen. Here is an ordinary example:

The same authors have at least two other articles with similar problems, i.e. multiple particles from the same electron microscopy picture that look strangely similar. Right. I am not going to beat around the bush. This is fraud. There is no innocent way by which such an image can be produced. It is therefore fraud (and poor-quality Photoshop).

François-Xavier Coudert reported his concerns to the editors of the respective journals. After this latest series of tweets, one editor finally responded that the authors could not provide original (primary, high-resolution) data due to a “flood” of their lab. End of story, says the editor. Microchimica Acta will not act because they “cannot prove image was manipulated”.

The best Twitter responses so far are by Chris Waldron and Sylvain Deville




Seriously though, if in a case like this, institutions and journals cannot act in a timely manner to fix the scientific record, there is no hope for cases which actually require thinking and investigations.

Here is the PubPeer thread with links to the other articles.

Update (20/12/2015): Editor of Microchimica Acta, Otto Wolfbeis, has been in touch. It is not the end of the story after all. From his email, we learn that the University of Manchester has been alerted and that a draft of a Retraction Note has been sent to the authors for comment.

Update (11/02/2016): Still no expression of concern nor retraction… and Elsevier and Springer are still selling these fabricated articles:



Update (01/03/2016): RETRACTION of the Microchimica Acta paper (Springer): “Following a balanced discussion of the allegations and after having consulted experts, the Editors of Microchimica Acta have come to the conclusion that there is striking evidence for manipulation.”



Disclaimer: This is a personal weblog. The opinions expressed here represent my own and not those of my employer.

What’s wrong with that CNRS press release?

Imagine an important public institution, say, for the sake of example, the police.

Imagine that serious and specific accusations of misconduct have been made against a high-ranking officer on a whistle-blower website. These have been picked up in the media. Although there is no suggestion that anybody has been physically harmed, those acts, if proved true, may have cost significant amounts of public money and may have had severe consequences for the well-being of many people and businesses. The media reports are also a concern because of the damage done to public trust, which is essential to the police’s mission.

Imagine then, that the press release announcing the investigation says nothing of the potential consequences of those putative acts, stresses that the serious and specific accusations are in fact only anonymous comments on a website, indicates that the investigation procedure will be completely opaque to public scrutiny with an undefined timeline, and, finally, concludes with an entire paragraph devoted to the glorification of the work of the accused (and indeed highly qualified and otherwise commendable) officer.

This is, of course, science-fiction. The police would not adopt such a course of action because they know full well that this would only disqualify the investigation and do nothing for the prestige of the (maybe wrongly) accused officer.

This is however very close to what two major scientific institutions have just done.

Last week, the CNRS and ETH Zurich published press releases announcing investigations into allegations of scientific misconduct. Retraction Watch, covering these press releases, “found some of the language in the announcements puzzling. Call us old-fashioned, but generally it’s a good idea to actually do an investigation before saying that ‘the studies’ findings are not in doubt.’”

True, especially in the current context. The scientific enterprise is suffering from a reproducibility crisis. One of the drivers of this crisis is the lack of publication of negative results which is itself a combined consequence of the publication system and of the methods of evaluation of researchers based on where they publish rather than what they publish [I got more (serious) congratulations for my April fool spoof paper in Nature Materials than for my PloS One paper published the day after].

Scientific institutions such as the CNRS and ETH Zurich should be leading the way in changing those practices. They should not, at the onset of an investigation, declare that “studies’ findings” (possibly) based on data manipulation are not in doubt. Instead, they should set firm plans to test how much of this body of work is solid and how much is not. Surely damage to human knowledge and to the integrity of the scientific record should be a major source of concern, yet it barely features in the press release. It would seem that the main (and almost exclusive) concern related to accusations of scientific misconduct is the damage done to the accused until proven guilty/innocent. That concern for individuals is warranted, but it should not stop at the accused. If the charges are proved correct, then there are probably a number of other individuals, less prominent and well known, who have directly suffered to different extents and for whom redress is unlikely ever to happen: the reviewers of papers and grants who wasted their time on a “diagram/chart” which had been “manipulated”; the competitors who may not have had access to such impressive data and therefore failed with their papers and grant applications; the PhD students who might have spent three years trying to reproduce some of these experiments without success [you would not have heard about this, since negative results are not published] and may have left science in disgust at the end of the process; etc.

If you’re interested, see also this conversation about the CNRS press release via Twitter (with critical contributions from @b_abk6 and others).

and the Lab Times editorial with the important open letter by Vicki Vance

and of course, PubPeer

An accountability problem

In a Times Higher Education article two weeks ago, Paul Jump discussed the current legal threats against post-publication peer review, highlighted by “the case of Fazlul Sarkar, a distinguished professor in cancer research at Wayne State University in Detroit [who] claims that anonymous comments posted on PubPeer this summer led to the withdrawal of a $350,000 (£220,000) a year job offer by the University of Mississippi.”

Rebecca Lawrence, Managing Director of F1000Research, responded to the above article with a letter entitled “An anonymity problem“, suggesting that anonymous commenting was not appropriate when “scientists’ livelihoods are at stake because of competition for funding and jobs“. Similarly, in an article at The Conversation, Andy Tattersall, Information Specialist at the University of Sheffield, presents the fact that post-publication peer review may have an impact on scientists as a potential cause for concern.

It should not be a cause for concern. It is normal (but not the norm) that what we publish (rather than the impact factor of the journal in which it is published), and how we respond to critiques of our work, should have an impact on our careers.

The problems in Sarkar’s papers are numerous. The fact that some scientists respond to reasonable criticism of their published work with abuse, legal threats or lawsuits is a clear demonstration of why anonymity is in some cases necessary. Are Rebecca and Andy really suggesting that scientists should not be accountable for what they publish?


Neuroskeptic: Postpublication “Cyberbullying” and the Professional Self

@Neuroskeptic writes:

The Science piece describes two controversies. Controversy #1 is the scientific question of the reality of those stripes. That is not the topic of this post.

Controversy #2 surrounds the way that Controversy #1 has been conducted. Stellacci’s critics say that they’re engaging in post-publication peer review of Stellacci et al’s claims. Stellacci, however, has described their criticisms as ‘cyberbullying‘:

Food for thought for anyone involved, or thinking about getting involved, in post-publication peer review. Read it here.