April 4, 2012

A lot of science is just plain wrong

Suddenly, everybody’s saying it: the scientific and medical literature is riddled with poor studies, irreproducible results, concealed data and sloppy mistakes.
Since these studies underpin a huge number of government policies, from health to the environment, that’s a serious charge.
Let’s start with Stan Young, Assistant Director of Bioinformatics at the US National Institute of Statistical Sciences. He recently gave evidence to the US Congress Committee on Science, Space and Technology about the quality of science used by the US Environmental Protection Agency.
Some might think, he said, that peer review is enough to assure the quality of the work, but it isn’t. “Peer review only says that the work meets the common standards of the discipline and, on the face of it, the claims are plausible. Scientists doing peer review essentially never ask for data sets and subject the paper to the level of examination that is possible by making data electronically available.”
He called for the EPA to make the data underlying key regulations, such as those on air pollution and mortality, available. Without it, he said, those papers are “trust me” science. Authors of research reports funded by the EPA should provide, at the time of publication, three things: the study protocol, the statistical analysis code, and an electronic copy of the data used in the publication.
Further, he called for data collection and analysis to be funded separately, since they demand different skills. When data gathering and analysis are funded together, there is a natural tendency for authors not to share the data until the last ounce of information has been extracted. “It would be better to open up the analysis to multiple teams of scientists.”
The problem of data access is not unique to the EPA, or the US. Despite the open data claims made by the UK Government, many sets of data in the social sciences gathered at government expense are not routinely available to scholars, a point made at a conference last month at the British Academy under the auspices of its Languages and Quantitative Skills programme.
Often this is data too detailed, sensitive or confidential for general release, but it can be made available to researchers through organisations such as the Secure Data Service, which is funded by the Economic and Social Research Council. Complaints were made at the conference, however, that SDS data is three years late in being released.
Accessibility of data was also among the points made in a damning survey of cancer research published last week in Nature (1). Glenn Begley spent ten years as head of global cancer research at the biotech firm Amgen, and paints a dismal picture of the quality of much academic cancer research. He set a team of 100 scientists to follow up papers that appeared to suggest new targets for cancer drugs, and found that the vast majority – 47 of 53 “landmark” publications – could not be reproduced.
That meant that money spent trying to develop drugs on the basis of these papers would have been wasted, and patients might have been put at risk in trials that were never going to result in useful medicines. “It was shocking,” Dr Begley told Reuters. “These are the studies that the pharmaceutical industry relies on to identify new targets for drug development. But if you’re going to place a $1 million or $2 million or $5 million bet on an observation, you need to be sure it’s true. As we tried to reproduce these papers we became convinced that you can’t take anything at face value.”
He suggests that researchers should, as in clinical research, be blinded to the control and treatment arms, and that they should be obliged to report all data, negative as well as positive. He recounted to Reuters a shocking story of a meeting, at a conference, with the lead author of one of these irreproducible studies. Begley took the author through the paper line by line, explaining that his team had repeated the experiment 50 times without getting the result reported. “He said they’d done it six times and got this result once, but put it in the paper because it made the best story. It’s very disillusioning.”
Intense academic pressure to publish, ideally in prestige journals, and the failure of those journals to make proper checks have both contributed to the problem. Journal editors – even those at Nature, where Begley’s study was published – seem reluctant to acknowledge it. Nature published an editorial that seemed to place the blame on sloppy mistakes and carelessness, but I read Begley’s warning as much more fundamental than that, as did many of those who commented on the editorial.
This website has identified a few examples of implausible results published in distinguished journals, but the editors of those journals don’t seem very bothered. In an era of online publishing, with instant feedback and an essentially limitless capacity to publish data, the journals are too eager to sustain their mystique, and too reluctant to admit to error. That said, retractions have increased tenfold over the past decade, while the literature itself has grown by only 44 per cent, according to evidence given to a US National Academy of Sciences committee last month.
Stan Young, however, does not blame the editors. In an article in last September’s issue of Significance (2), he and his colleague Alan Karr argue that quality control cannot be exercised solely at the end of the process, by throwing out defective studies, let alone at the replicative stage. It must be exercised at every stage, by scientists, funders, and academic institutions.
“At present researchers – and, just as important, the public at large – are being deceived, and are being deceived in the name of science. This should not be allowed to continue”, Young and Karr conclude.

References
1. Raise standards for preclinical cancer research, by C. Glenn Begley and Lee M. Ellis, Nature 483, pp 531–533, 29 March 2012
2. Deming, data and observational studies, by S. Stanley Young and Alan Karr, Significance, September 2011, pp 116–120
