Science has retracted a 2005 paper about a technique called “MAGIC” which used magnetic nanoparticles to look at biomolecular interactions in real time in live cells.
To look at the interaction between two proteins, one was fluorescently labelled, e.g. as a GFP construct, and its potential binding partner was attached to a magnetic nanoparticle. A magnet was then used to move the magnetic nanoparticles in and out of the focus of a microscope objective. If the two proteins bind, the fluorescence moves in and out of focus with the particles; if not, the fluorescence is unaffected by the magnetic field.
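To make the principle concrete, here is a minimal toy simulation of the kind of readout the technique was supposed to produce. This is my own sketch, not anything from the paper: the function names, parameters and numbers are invented, and the magnet is idealised as a square-wave that pulls the particles in and out of the focal plane.

```python
import numpy as np

def fluorescence_trace(bound, n_steps=200, period=50, noise=0.02, seed=0):
    """Toy fluorescence intensity trace while a magnet periodically pulls
    magnetic nanoparticles in and out of the objective's focal plane.

    bound: True if the fluorescent protein is attached to the nanoparticle
           (its signal follows the particles); False for a non-interacting
           control (its signal ignores the magnetic field).
    All values are arbitrary illustrative units.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(n_steps)
    # Square-wave modulation: particles in focus (1) or pulled out of focus (0).
    in_focus = ((t // period) % 2 == 0).astype(float)
    if bound:
        # Bound fluorophore: intensity drops when the particles leave the focus.
        signal = 0.2 + 0.8 * in_focus
    else:
        # Unbound fluorophore: constant intensity, unaffected by the magnet.
        signal = np.ones_like(t, dtype=float)
    return signal + rng.normal(0, noise, n_steps)

if __name__ == "__main__":
    for label, bound in [("binding partner", True), ("non-interacting control", False)]:
        trace = fluorescence_trace(bound)
        print(f"{label}: modulation depth ≈ {trace.max() - trace.min():.2f}")
```

The bound trace shows a deep periodic modulation locked to the magnet, the control trace only noise; that contrast is the whole selling point of the scheme.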
This sounds nice… except that an investigation found that there were no experimental records to support the data presented in the paper.
As a scientist who sometimes reviews papers, one of the key questions for me is: was it possible to detect the fraud at the time of submission? Can we learn something from this story about how to better review scientific papers? I had a fresh look at the original paper with this question in mind, and the answer is not straightforward.
Reviewers do not have access to the lab books and primary evidence. In this case, the falsifications are not obvious: I don’t think that, as a reviewer, I would have suspected that the data were fabricated.
One thing, however, is striking: the paper reported extraordinary (magical?) results far beyond what was then the state of the art. To take just one example, the viability of the technique presupposed perfect control over the cytosolic delivery of functional nanomaterials: four years later, this is still a field of active investigation, and such control is far from routinely attained. Yet, in the 2005 paper, claims of such control were accepted without the backing of serious experimental investigation, e.g. no electron microscopy images of the nanoparticles inside the cells were presented.
The paper went several steps ahead of the state of the art (and that’s why it was interesting for Science)… and some of the first steps were poorly (or not) documented, while the last step was documented with fabricated data. Maybe questioning these first steps would have allowed the problem to be detected? Or maybe not…