The way we fund and publish science encourages fraud. A forum about academic misconduct aims to find practical solutions
Science is broken. Psychology was rocked recently by stories of academics making up data, deceptions that in some cases overshadowed whole careers. And it isn't the only discipline with problems – the current record for fraudulent papers is held by anaesthesiologist Yoshitaka Fujii, with 172 faked articles.
These scandals highlight deeper cultural problems in academia. Pressure to turn out lots of high-quality publications not only promotes extreme behaviours, it normalises the little things, like the selective publication of positive, novel findings – which leads to "non-significant" but possibly true findings sitting unpublished on shelves, and a lack of much-needed replication studies.
Why does this matter? Science is about furthering our collective knowledge, and it happens in increments. Successive generations of scientists build upon theoretical foundations set by their predecessors. If those foundations are made of sand, though, then time and money will be wasted in the pursuit of ideas that simply aren't right.
A recent paper in the journal Proceedings of the National Academy of Sciences shows that since 1973, nearly a thousand biomedical papers have been retracted because someone cheated the system. That's a massive 67% of all biomedical retractions. And the situation is getting worse – last year, Nature reported that the rise in retraction rates has overtaken the rise in the number of papers being published.
This is happening because the entire way that we go about funding, researching and publishing science is flawed. As Chris Chambers and Petroc Sumner point out, the reasons are numerous and interconnecting:
• Pressure to publish in "high impact" journals, at all research career levels;
• Universities treat successful grant applications as outputs, upon which continued careers depend;
• Statistical analyses are hard, and sometimes researchers get it wrong;
• Journals favour positive results over null findings, even though null findings from a well-conducted study are just as informative;
• The way journal articles are assessed is inconsistent and secretive, and allows statistical errors to creep through.
Problems occur at all levels in the system, and we need to stop stubbornly arguing that "it's not that bad" or that talking about it somehow damages science. The damage has already been done – now we need to start fixing it.
Chambers and Sumner argue that replication is critical to keeping science honest, and they are right. Replication is a great way to verify the results of a given study, and its widespread adoption would, in time, act as a deterrent for dodgy practices. The nature of statistics means that sometimes positive findings arise by chance, and if replications aren't published, we can't be sure that a finding wasn't simply a statistical anomaly.
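To make that concrete, here is a minimal simulation sketch (Python with scipy; the study count and sample sizes are hypothetical choices, not figures from any real study) of how testing a non-existent effect at the conventional p < 0.05 threshold still produces a "significant" result in roughly one study in 20:

```python
import random
from scipy import stats

# Simulate 1,000 two-group studies of an effect that does not exist,
# and count how many cross the p < 0.05 "significance" line by chance.
# (Illustrative sketch: study count and sample size are arbitrary.)
random.seed(42)
n_studies = 1000
false_positives = 0

for _ in range(n_studies):
    group_a = [random.gauss(0, 1) for _ in range(30)]
    group_b = [random.gauss(0, 1) for _ in range(30)]  # same distribution: no real effect
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_studies} null studies came out 'significant'")
# Expect roughly 50 (about 5%). If only these get published, the
# literature looks like evidence for an effect that replications
# would have refuted.
```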
But replication isn't enough: we need to enact practical changes at all levels in the system. The scientific process must be as open to scrutiny as possible – that means enforcing study pre-registration to deter inappropriate post-hoc statistical testing, archiving and sharing data online for others to scrutinise, and incentivising these practices (for example, by guaranteeing publication regardless of findings).
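The case for pre-registration can also be put in numbers. The sketch below (again Python with scipy; all figures hypothetical) shows what happens when a study measures ten outcomes on null data and reports whichever one happens to cross p < 0.05: around 40% of such studies "find" something, which is exactly the post-hoc flexibility that pre-registering a single primary outcome removes.

```python
import random
from scipy import stats

# Sketch of undisclosed multiple testing ("p-hacking"): each study
# measures several outcomes and claims success if ANY is significant.
# (Hypothetical numbers: 1,000 studies, 10 outcomes, no real effects.)
random.seed(42)
n_studies, n_outcomes = 1000, 10
studies_claiming_success = 0

for _ in range(n_studies):
    for _ in range(n_outcomes):
        group_a = [random.gauss(0, 1) for _ in range(30)]
        group_b = [random.gauss(0, 1) for _ in range(30)]
        _, p_value = stats.ttest_ind(group_a, group_b)
        if p_value < 0.05:
            studies_claiming_success += 1
            break  # report the lucky outcome, quietly drop the rest

print(f"{studies_claiming_success / n_studies:.0%} of null studies 'worked'")
# Expect about 1 - 0.95**10, i.e. roughly 40%, against an advertised 5%
# error rate. Pre-registering one primary outcome keeps the rate honest.
```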
The peer-review process needs to be overhauled. Currently, it happens behind closed doors, with anonymous reviews only seen by journal editors and manuscript authors. This means we have no real idea how effective peer review is – though we know it can easily be gamed. Extreme examples of fake reviewers, fake journal articles, and even fake journals have been uncovered.
More often, shoddy science and dodgy statistics are accepted for publication by reviewers with inadequate levels of expertise. Peer review must become more transparent. Journals like Frontiers already use an interactive reviewing format, with reviewers and authors discussing a paper in a real-time, forum-like setting.
A simple next step would be to make this system open and viewable by everyone, while maintaining the anonymity of the reviewers themselves. This would allow young researchers to be critical of a senior academic's paper without fear of career suicide.
On 12 November, we are hosting a session on academic misconduct at SpotOn London, Nature's conference about all things science online.
The aim of the session is to find practical solutions to these problems that science faces. It will involve scientific researchers, journalists and journal editors. We've made some suggestions here, but we want more from you. What would you like to see discussed? Do you have any ideas, opinions or solutions?
We'll take the best points and air them at the session, so speak up now! Let's stop burying our heads in the sand and stand up for good science.
Pete Etchells is a biological psychologist and Suzi Gage is a translational epidemiology PhD student. Both are at the University of Bristol