It started with a tweet by a neuroscientist called Leigh on January 7th, and since then the genie has been out of the bottle. Leigh tweeted: “We did experiment 2 because we didn’t know what the fuck to make of experiment 1.” After that, thousands of scientists confessed, under the hashtag #overlyhonestmethods, the kind of small white lies that would normally never see the light of day, let alone be published.
First, a few examples.
“I cited this paper because everyone else has cited it, though none has ever seen an actual copy.”
“Blood samples were spun at 1500 rpm because the centrifuge made a scary noise at higher speeds.”
“Incubation lasted three days because this is how long the undergrad forgot the experiment in the fridge.”
“The control cohort was made up of anyone in the building we could bribe with freddo frogs (chocolate frogs, ed.). This is true.”
The revelations – some sincere, others probably made up – show that science is carried out by humans, says Jochen Cals, a researcher in general practice medicine. As far as he is concerned, the ‘overlyhonest’ tweets – to which he also contributed – provide insight into how science really works. “As a researcher you hope for extraordinary findings, but often your findings are not extraordinary. You still want them to be published, so you have to ‘sell’ them to a journal. If the results are partly due to accidental circumstances, for instance, most researchers will not put it that way. They will turn it into a credible story.”
Cals doesn’t think it is a bad thing that such white lies are glossed over. “Let it be clear that there is a line between these confessions and the type of fraud that Diederik Stapel committed. Research is, after all, also the result of a series of human decisions: about test subjects, about the type of study, about the setting, whether in the lab or in a practical situation. These are not ironclad rules but choices that determine the results to a certain degree.”
What did Cals tweet? With a wink: “We did another systematic review because doing trials ourselves is just too much for us lazy buzzers.” A systematic review, a kind of analysis of all relevant studies on a question, is increasingly regarded as the gold standard, says Cals. “But if only three studies have been carried out, a meta-analysis is of little use.”
In the aftermath of #overlyhonestmethods, the hashtag #overlyhonestreviews appeared, about peer review, the assessment of articles by fellow scientists. “It shows, among other things, that some reviewers are driven by political motives and may tear an article from a competing research group to pieces. That is why it is a good thing that leading journals these days require reviewers to be named.”