This is Part 3 of a series of 3, which also includes Part 1: Plagiarism, and Part 2: Falsification.
In Part 1 and Part 2 of this series, I showed some examples of plagiarism and falsification in scientific papers, which the Office of Research Integrity (ORI) considers two of the three forms of Research Misconduct. Here, we will look at the third type of misconduct, fabrication. ORI defines fabrication as follows:
“Fabrication is making up data or results and recording or reporting them.” — Office of Research Integrity: Definition of Research Misconduct
As I wrote in the previous blog post, “falsification” and “fabrication” are not always easy to distinguish. In falsification, experimental measurements have been altered so that the research is not accurately represented. Fabrication is making up data entirely, i.e., reporting experiments that never happened or patients who never existed.
Many cases of falsification could be interpreted as fabrication as well. For example, an immunoblot obtained with antibody A in Figure 1 might look extremely similar to an immunoblot obtained with antibody B in Figure 2. Clearly, an experiment took place, but one of these experiments is made to look like another experiment, making it falsification. On the other hand, you could argue that one of the two experiments was fabricated, because the experiment with one of the antibodies never took place.
In many of the case summaries reported on ORI’s website, respondents are found guilty of fabrication and/or falsification, showing again that the distinction between the two is often not clear.
In other cases, plagiarism and fabrication can overlap. Take a case where Old Paper 1 shows a set of experiments and figures, and New Paper 2 from a different research group shows exactly the same measurements and figures. That is data plagiarism on the one hand, but probably also fabrication, because the data in Paper 2 appear to have been made up.
Here, I will focus on cases where complete sets of experiments appear to have been made up.
Fabrication cases in the news
- Diederik Stapel, a former professor of Social Psychology in the Netherlands, who made up most of his data and currently has over 58 retracted papers.
- Yoshitaka Fujii, a Japanese researcher with over 180 retractions, 126 of which were “totally fabricated”. Interestingly, only one of these is currently listed on PubPeer.
- Yoshihiro Sato, a Japanese bone researcher who passed away in 2017, and was found to have fabricated at least 20 studies.
Unlike image duplication or textual similarity, data fabrication is hard to detect by just looking at a published paper. These cases are often brought to light by whistleblowers who work closely with the person in question and suspect that data is being made up.
However, data fabrication can sometimes be detected by careful statistical analysis. For example, British anesthesiologist John Carlisle reported statistical flaws in Fujii’s papers, showing that the reported experiments were unlikely to have actually taken place. He later wrote a larger study about fabrication as one of the causes of non-random data in clinical trials. Similar findings were obtained by Bolland et al. in 2016 in a paper in Neurology.
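The core intuition behind this kind of screening can be sketched in a few lines: in a genuinely randomised trial, p values from baseline comparisons should be roughly uniformly distributed between 0 and 1, so a batch of baseline p values that clusters suspiciously can be flagged with a goodness-of-fit statistic. The sketch below is only a toy illustration of that general idea, not Carlisle’s actual procedure; the function name and example values are mine.

```python
def ks_uniform_stat(p_values):
    """One-sample Kolmogorov-Smirnov statistic of a set of p values
    against the Uniform(0, 1) distribution. Values near 0 mean the
    p values look uniform; large values mean they cluster."""
    xs = sorted(p_values)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        # Largest gap between the empirical CDF (a step function)
        # and the uniform CDF, checked on both sides of each step.
        d = max(d, abs((i + 1) / n - x), abs(x - i / n))
    return d

# Evenly spread p values, as expected from real randomisation:
spread = [0.1, 0.3, 0.5, 0.7, 0.9]
# p values all crowded near 1.0, as if groups were made "too similar":
clustered = [0.90, 0.92, 0.94, 0.96, 0.98]

print(ks_uniform_stat(spread))     # small (≈ 0.1)
print(ks_uniform_stat(clustered))  # large (≈ 0.9)
```

In practice one would convert the statistic to a p value and, as Carlisle did, combine evidence across many trials; but the basic signal is simply departure from uniformity.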
Chris Hartgerink described the analysis of p values to detect data fabrication in his 2016 paper, and James Heathers and Nick Brown developed the GRIM test to detect numerical anomalies in psychology papers.
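The GRIM test itself rests on simple arithmetic: if n participants each give an integer-valued response (e.g. a Likert score), the true mean must be some integer divided by n, so many reported means are mathematically impossible for the stated sample size. A minimal sketch of that check (the function name and example numbers are mine, not taken from the GRIM paper):

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """GRIM-style check: could a mean reported to `decimals` places
    arise from n integer-valued responses?"""
    # The true mean is k/n for some integer sum k; take the k whose
    # mean is closest to the reported value and compare after rounding.
    k = round(reported_mean * n)
    return round(k / n, decimals) == round(reported_mean, decimals)

# With n = 28 integer responses, the possible means near 5.2 are
# 145/28 ≈ 5.18 and 146/28 ≈ 5.21, so a reported mean of 5.19
# cannot occur:
print(grim_consistent(5.18, 28))  # True
print(grim_consistent(5.19, 28))  # False
```

The real test also has to account for ambiguous rounding conventions and multi-item scales, but the core idea is exactly this granularity check.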
Further reading

- Nature and Science Retractions Connected to Research Misconduct – Chia-Yi Hou – The Scientist – 2019
- “Evidence of fabricated data” leads to retraction of paper on software engineering – Adam Marcus – Retraction Watch – 2019
- Data Fabrication & Reproducibility: How Triangulation Offers Novel Solutions – Enago Academy – 2018
- Meet the ‘data thugs’ out to expose shoddy and questionable research – Adam Marcus and Ivan Oransky – Science – 2018
- Data in biofuel paper “had either been grossly misinterpreted or fabricated” – Victoria Stern – Retraction Watch – 2018
- Data fabrication and other reasons for non‐random sampling in 5087 randomised, controlled trials in anaesthetic and general medical journals – Carlisle JB – Anaesthesia – 2017
- Exploring John Carlisle’s “bombshell” article about fabricated data in RCTs – Nick Brown – 2017
- Carlisle’s statistics bombshell names and shames rigged clinical trials – Leonid Schneider – For Better Science – 2017
- There’s a way to spot data fakery. All journals should be using it – Ivan Oransky and Adam Marcus – STAT News – 2016
- The value of statistical tools to detect data fabrication – Hartgerink et al. RIO Journal 2: e8860 – 2016
- MIT Terminates Researcher Over Data Fabrication – Jennifer Couzin – Science – 2005
3 thoughts on “What is Research Misconduct? Part 3: Fabrication”
The 3 categories you name are all “active” forms of misconduct, meaning the perpetrators themselves commit it. Therein lies the problem with the ORI cases you cite: the junior (usually first) author gets thrown under the bus, while the PIs who instigated the data fabrication or falsification get portrayed as victims of other people’s fraud. All that counts is who personally faked the data, not who oversaw the data manipulation and profited from it.
This is why I think research misconduct also includes “passive” acts, like condoning, rewarding, and obfuscating questionable research practices in one’s lab, which in particular includes the removal or destruction of raw data and retaliation against whistleblowers and critics.
Because I shall shamelessly use this opportunity to promote my site, here are recent examples: