April 4, 2012

A lot of science is just plain wrong

Suddenly, everybody’s saying it: the scientific and medical literature is riddled with poor studies, irreproducible results, concealed data and sloppy mistakes.
Since these studies underpin a huge number of government policies, from health to the environment, that’s a serious charge.
Let’s start with Stan Young, Assistant Director of Bioinformatics at the US National Institute of Statistical Sciences. He recently gave evidence to the US House of Representatives Committee on Science, Space and Technology about the quality of science used by the US Environmental Protection Agency.
Some might think, he said, that peer review is enough to assure the quality of the work, but it isn’t. “Peer review only says that the work meets the common standards of the discipline and, on the face of it, the claims are plausible. Scientists doing peer review essentially never ask for data sets and subject the paper to the level of examination that is possible by making data electronically available.”
He called for the EPA to make available the data underlying key regulations, such as the studies linking air pollution to mortality. Without that data, he said, those papers are “trust me” science. Authors of research reports funded by the EPA should provide three things at the time of publication: the study protocol, the statistical analysis code, and an electronic copy of the data used in the publication.
Further, he called for data collection and analysis to be funded separately, since they demand different skills; when the same team both builds and analyses the data, there is a natural tendency not to share it until the last ounce of information has been extracted. “It would be better to open up the analysis to multiple teams of scientists.”
The problem of data access is not unique to the EPA, or to the US. Despite the UK Government’s claims about open data, many social-science datasets gathered at government expense are not routinely available to scholars – a point made at a conference last month at the British Academy under the auspices of its Languages and Quantitative Skills programme.
Often this is data too detailed, sensitive or confidential for general release, but it can be made available to researchers through organisations such as the Secure Data Service, which is funded by the Economic and Social Research Council. Complaints were made at the conference, however, that SDS data is three years late in being released.
Accessibility of data was also among the points made in a damning survey of cancer research published last week in Nature (1). Glenn Begley, who spent ten years as head of global cancer research at the biotech firm Amgen, paints a dismal picture of the quality of much academic cancer research. He set a team of 100 scientists to follow up papers that appeared to suggest new targets for cancer drugs, and found that the vast majority – all but six out of 53 “landmark” publications – could not be reproduced.
That meant that money spent trying to develop drugs on the basis of these papers would have been wasted, and patients might have been put at risk in trials that were never going to result in useful medicines. “It was shocking,” Dr Begley told Reuters. “These are the studies that the pharmaceutical industry relies on to identify new targets for drug development. But if you’re going to place a $1 million or $2 million or $5 million bet on an observation, you need to be sure it’s true. As we tried to reproduce these papers we became convinced that you can’t take anything at face value.”
He suggests that researchers should, as in clinical research, be blinded to the control and treatment arms, and that they should be obliged to report all data, negative as well as positive. He also recounted to Reuters a telling story of a meeting with the lead author of one of these irreproducible studies at a conference. Begley took him through the paper line by line, explaining that his own team had repeated the experiment 50 times without getting the reported result. “He said they’d done it six times and got this result once, but put it in the paper because it made the best story. It’s very disillusioning.”
Intense academic pressure to publish, ideally in prestige journals, and the failure of those journals to make proper checks have both contributed to the problem. Journal editors – even those at Nature, where Begley’s study was published – seem reluctant to acknowledge it. Nature published an editorial that seemed to place the blame on sloppy mistakes and carelessness, but I read Begley’s warning as much more fundamental than that, as did many of those who commented on the editorial.
This website has identified a few examples of implausible results published in distinguished journals, but the editors of those journals do not seem much bothered. In an era of online publishing, with instant feedback and an essentially limitless capacity to publish data, the journals are too eager to sustain their mystique, and too reluctant to admit to error. That said, retractions have risen ten-fold over the past decade, while the literature itself has grown by only 44 per cent, according to evidence given to a US National Academy of Sciences committee last month.
Stan Young, however, does not blame the editors. In an article in last September’s issue of Significance (2), he and his colleague Alan Karr argue that quality control cannot be exercised solely at the end of the process, by throwing out defective studies or waiting for replications to fail; it must be exercised at every stage, by scientists, funders and academic institutions.
“At present researchers – and, just as important, the public at large – are being deceived, and are being deceived in the name of science. This should not be allowed to continue”, Young and Karr conclude.

References
1. Raise standards for preclinical cancer research, by C. Glenn Begley and Lee M. Ellis, Nature 483, pp 531-533, 29 March 2012
2. Deming, data and observational studies, by S. Stanley Young and Alan Karr, Significance, September 2011, pp 116-120
