Showing posts with label The Guardian. Show all posts

August 8, 2014

Yoshiki Sasai: A tribute to an outstanding scientist - The Guardian

The scientific community was shocked to hear of the death earlier this week of stem cell researcher Yoshiki Sasai, who apparently committed suicide in the wake of a high profile case of scientific fraud at the RIKEN Center for Developmental Biology (CDB) in Kobe, Japan, where he had worked.
Two papers from the RIKEN CDB, co-authored by Sasai and published in the journal Nature in late January, described a simple method for converting mature cells into embryonic stem cells, called stimulus-triggered acquisition of pluripotency (STAP).
It seemed too good to be true – and it was. The findings were challenged, and other labs tried but failed to replicate the method. Lead researcher Haruko Obokata was found guilty of scientific misconduct, and in July both papers were retracted. Sasai himself was cleared of any involvement in the misconduct, but Obokata did the work under his supervision, and so he was criticised for oversights while the papers were being written up.
I had been working on a feature article about Sasai’s own work for Mosaic, and travelled to Japan earlier this year to visit his lab, as part of my reporting for the article. By coincidence, I arrived the day the STAP method hit the news - the Daily Telegraph had accidentally published their story about it too early - and so found myself competing with several film crews for his attention.
As a result, my visit to the lab was cut short, and I spent far less time there than had been planned, but nevertheless I managed to interview Sasai and two of his colleagues and take a look around.
The story was originally scheduled for publication on 26th August, and my editors at the Wellcome Trust have decided to go ahead and publish it on the scheduled date. They felt that it should mention these tragic events without letting them overshadow the real focus of the story, and so, apart from several small changes to the main story and the addition of a brief epilogue, it is unchanged.
I spent very little time with Sasai but he struck me as a very proud man, and the remarkable work being done in his lab gave him every reason to be, so I do not doubt reports that he had felt “deeply ashamed” about the STAP cell papers and the disrepute they had brought to RIKEN, in the weeks leading up to his death. During this time, an independent committee had recommended that the CDB be dismantled, and Sasai’s mental and physical health had by then suffered considerably, so I feel doubly honoured to have visited him there when I did.
Sadly, many of the news stories about his death have focused on the unfortunate circumstances that marred the last few months of his life. We would like to send our deepest condolences to Sasai’s family and friends, and hope that the Mosaic story will serve as a sensitive and timely tribute to the pioneering work of an outstanding scientist.
This is an unedited version of an article I wrote for the Mosaic blog.

November 2, 2012

Scientific fraud is rife: it's time to stand up for good science - The Guardian

The way we fund and publish science encourages fraud. A forum about academic misconduct aims to find practical solutions
Peer review happens behind closed doors, with anonymous reviews only seen by editors and authors. This means we have no idea how effective it is. Photo: Alamy
Science is broken. Psychology was rocked recently by stories of academics making up data, sometimes overshadowing whole careers. And it isn't the only discipline with problems - the current record for fraudulent papers is held by anaesthesiologist Yoshitaka Fujii, with 172 faked articles.
These scandals highlight deeper cultural problems in academia. Pressure to turn out lots of high-quality publications not only promotes extreme behaviours, it normalises the little things, like the selective publication of positive novel findings – which leads to "non-significant" but possibly true findings sitting unpublished on shelves, and a lack of much needed replication studies.
Why does this matter? Science is about furthering our collective knowledge, and it happens in increments. Successive generations of scientists build upon theoretical foundations set by their predecessors. If those foundations are made of sand, though, then time and money will be wasted in the pursuit of ideas that simply aren't right.
A recent paper in the journal Proceedings of the National Academy of Sciences shows that since 1973, nearly a thousand biomedical papers have been retracted because someone cheated the system. That's a massive 67% of all biomedical retractions. And the situation is getting worse - last year, Nature reported that the rise in retraction rates has overtaken the rise in the number of papers being published.
This is happening because the entire way that we go about funding, researching and publishing science is flawed. As Chris Chambers and Petroc Sumner point out, the reasons are numerous and interconnecting:
• Pressure to publish in "high impact" journals, at all research career levels;
• Universities treat successful grant applications as outputs, upon which continued careers depend;
• Statistical analyses are hard, and sometimes researchers get it wrong;
• Journals favour positive results over null findings, even though null findings from a well conducted study are just as informative;
• The way journal articles are assessed is inconsistent and secretive, and allows statistical errors to creep through.
Problems occur at all levels in the system, and we need to stop stubbornly arguing that "it's not that bad" or that talking about it somehow damages science. The damage has already been done – now we need to start fixing it.
Chambers and Sumner argue that replication is critical to keeping science honest, and they are right. Replication is a great way to verify the results of a given study, and its widespread adoption would, in time, act as a deterrent for dodgy practices. The nature of statistics means that sometimes positive findings arise by chance, and if replications aren't published, we can't be sure that a finding wasn't simply a statistical anomaly.
But replication isn't enough: we need to enact practical changes at all levels in the system. The scientific process must be as open to scrutiny as possible – that means enforcing study pre-registration to deter inappropriate post-hoc statistical testing, archiving and sharing data online for others to scrutinise, and incentivising these practices (such as guaranteeing publications, regardless of findings).
The peer-review process needs to be overhauled. Currently, it happens behind closed doors, with anonymous reviews only seen by journal editors and manuscript authors. This means we have no real idea how effective peer review is – though we know it can easily be gamed. Extreme examples of fake reviewers, fake journal articles, and even fake journals have been uncovered.
More often, shoddy science and dodgy statistics are accepted for publication by reviewers with inadequate levels of expertise. Peer review must become more transparent. Journals like Frontiers already use an interactive reviewing format, with reviewers and authors discussing a paper in a real-time, forum-like setting.
A simple next step would be to make this system open and viewable by everyone, while maintaining the anonymity of the reviewers themselves. This would allow young researchers to be critical of a senior academic's paper without fear of career suicide.
On 12 November, we are hosting a session on academic misconduct at SpotOn London, Nature's conference about all things science online.
The aim of the session is to find practical solutions to these problems that science faces. It will involve scientific researchers, journalists and journal editors. We've made some suggestions here, but we want more from you. What would you like to see discussed? Do you have any ideas, opinions or solutions?
We'll take the best points and air them at the session, so speak up now! Let's stop burying our heads in the sand and stand up for good science.
Pete Etchells is a biological psychologist and Suzi Gage is a translational epidemiology PhD student. Both are at the University of Bristol

October 12, 2012

Scientific fraud: a sign of the times? - The Guardian

If you read about scientific fraud in the recent news, it would seem that there is much to worry about. It's on the rise, apparently! There has been a 10-fold increase in the number of retracted papers since the 1970s, and a number of these are due to fraud or suspected fraud.

An investigation of retractions from the biomedical scientific literature database PubMed, published in the prestigious Proceedings of the National Academy of Sciences USA (PNAS), found that a whopping 63.2% of health- and life-science related retractions were due to fraud, suspected fraud or plagiarism, with good old honest error retractions in the minority. This sounds scary – especially the 'suspected fraud'. Is this just the tip of the scientific deceit iceberg? Just how many lies are lurking in the scientific literature?

Then there are the stories. Professor Marc Hauser, formerly of Harvard, was accused by the U.S. Department of Health and Human Services' Office of Research Integrity of inventing results to support his idea of a biological foundation for cognition in monkeys – specifically, whether they could recognise changes in sound patterns the way human babies can. Hauser was a popular scientist, too; he even has a best-selling book, Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong, in which he somewhat ironically argued that "policy wonks and politicians should listen more closely to our intuitions and write policy that effectively takes into account the moral voice of our species." Which worked out in his case: he was busted for scientific misconduct. His book also tells us that "our ability to detect cheaters who violate social norms is one of nature's gifts". Nature's gifts or not, it was his students and research assistants who blew the whistle.

And this isn't just in life science, it's everywhere. Physics has its high-profile cheaters too! There is Jan Hendrik Schön, the physicist who made up his data – 26 of his papers have been retracted and he has been stripped of his doctoral degree. And then there are the cold fusion boys who, to be fair, are probably more victims of faulty equipment and of sticking to a beloved theory despite the facts than perpetrators of actual fraud. Psychology is not immune either; Dirk Smeesters, whose results seemed too good to be true, has also been caught just making stuff up.

Is no scientific discipline safe? Are scientists just incapable of keeping their modern houses clean? It has been argued that because of recent pressure on scientists to publish groundbreaking results that change the world, the temptation to commit fraud is perhaps bound to increase – implying that there was a simpler, more honest time for science. Dewy-eyed, there is a temptation to believe that scientists back in the day were only of high moral character and were purely duty-bound to pursue the truth. But this isn't really true. Fraud in science isn't new, just like fraud in anything isn't new.

September 13, 2012

False positives: fraud and misconduct are threatening scientific research - The Guardian

Alok Jha
Dirk Smeesters had spent several years of his career as a social psychologist at Erasmus University in Rotterdam studying how consumers behaved in different situations. Did colour have an effect on what they bought? How did death-related stories in the media affect how people picked products? And was it better to use supermodels in cosmetics adverts than average-looking women?

The questions are certainly intriguing, but unfortunately for anyone wanting truthful answers, some of Smeesters' work turned out to be fraudulent. The psychologist, who admitted "massaging" the data in some of his papers, resigned from his position in June after being investigated by his university, which had been tipped off by Uri Simonsohn from the University of Pennsylvania in Philadelphia. Simonsohn carried out an independent analysis of the data and was suspicious of how perfect many of Smeesters' results seemed when, statistically speaking, there should have been more variation in his measurements.

The case, which led to two scientific papers being retracted, came on the heels of an even bigger fraud, uncovered last year, perpetrated by the Dutch psychologist Diederik Stapel. He was found to have fabricated data for years and published it in at least 30 peer-reviewed papers, including a report in the journal Science about how untidy environments may encourage discrimination.

The cases have sent shockwaves through a discipline that was already facing serious questions about plagiarism.

"In many respects, psychology is at a crossroads – the decisions we take now will determine whether or not it remains a serious, credible, scientific discipline along with the harder sciences," says Chris Chambers, a psychologist at Cardiff University.

"We have to be open about the problems that exist in psychology and understand that, though they're not unique to psychology, that doesn't mean we shouldn't be addressing them. If we do that, we can end up leading the other sciences rather than following them."

Cases of scientific misconduct tend to hit the headlines precisely because scientists are supposed to occupy a moral high ground when it comes to the search for truth about nature. The scientific method developed as a way to weed out human bias. But scientists, like anyone else, can be prone to bias in their bid for a place in the history books.

Increasing competition for shrinking government budgets for research and the disproportionately large rewards for publishing in the best journals have exacerbated the temptation to fudge results or ignore inconvenient data.

Massaged results can send other researchers down the wrong track, wasting time and money trying to replicate them. Worse, in medicine, it can delay the development of life-saving treatments or prolong the use of therapies that are ineffective or dangerous. Malpractice comes to light rarely, perhaps because scientific fraud is often easy to perpetrate but hard to uncover.

The field of psychology has come under particular scrutiny because many results in the scientific literature defy replication by other researchers. Critics say it is too easy to publish psychology papers which rely on sample sizes that are too small, for example, or to publish only those results that support a favoured hypothesis. Outright fraud is almost certainly just a small part of that problem, but high-profile examples have exposed a greyer area of bad or lazy scientific practice that many had preferred to brush under the carpet.

Many scientists, aided by software and statistical techniques to catch cheats, are now speaking up, calling on colleagues to put their houses in order.

Those who document misconduct in scientific research talk of a spectrum of bad practices. At the sharp end are plagiarism, fabrication and falsification of research. At the other end are questionable practices such as adding an author's name to a paper when they have not contributed to the work, sloppiness in methods or not disclosing conflicts of interest.

"Outright fraud is somewhat impossible to estimate, because if you're really good at it you wouldn't be detectable," said Simonsohn, a social psychologist. "It's like asking how much of our money is fake money – we only catch the really bad fakers, the good fakers we never catch."

If things go wrong, the responsibility to investigate and punish misconduct rests with the scientists' employers, the academic institution. But these organisations face something of a conflict of interest. "Some of the big institutions … were really in denial and wanted to say that it didn't happen under their roof," says Liz Wager of the Committee on Publication Ethics (Cope). "They're gradually realising that it's better to admit that it could happen and tell us what you're doing about it, rather than to say, 'It could never happen.'"

There are indications that bad practice – particularly at the less serious end of the scale – is rife. In 2009, Daniele Fanelli of the University of Edinburgh carried out a meta-analysis that pooled the results of 21 surveys of researchers who were asked whether they or their colleagues had fabricated or falsified research.

Publishing his results in the journal PLoS One, he found that an average of 1.97% of scientists admitted to having "fabricated, falsified or modified data or results at least once – a serious form of misconduct by any standard – and up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% for falsification, and up to 72% for other questionable research practices."

A 2006 analysis of the images published in the Journal of Cell Biology found that about 1% had been deliberately falsified.  

Rise in retractions
According to a report in the journal Nature, published retractions in scientific journals have increased around 1,200% over the past decade, even though the number of published papers had gone up by only 44%. Around half of these retractions are suspected cases of misconduct.

Wager says these numbers make it difficult for a large research-intensive university, which might employ thousands of researchers, to maintain the line that misconduct is vanishingly rare.

New tools, such as text-matching software, have also increased the detection rates of fraud and plagiarism. Journals routinely use these to check papers as they are submitted or undergoing peer review. "Just the fact that the software is out there and there are people who can look at stuff, that has really alerted the world to the fact that plagiarism and redundant publication are probably way more common than we realised," says Wager. "That probably explains, to a big extent, this increase we've seen in retractions."

Ferric Fang, a professor at the University of Washington School of Medicine and editor in chief of the journal Infection and Immunity, thinks increased scrutiny is not the only factor and that the rate of retractions is indicative of some deeper problem.

He was alerted to concerns about the work of a Japanese scientist who had published in his journal. A reviewer for another journal noticed that Naoki Mori of the University of the Ryukyus in Japan had duplicated images in some of his papers and had given them different labels, as if they represented different measurements. An investigation revealed evidence of widespread data manipulation and this led Fang to retract six of Mori's papers from his journal. Other journals followed suit.  

Self-correction
The refrain from many scientists is that the scientific method is meant to be self-correcting. Bad results, corrupt data or fraud will get found out – either when they cannot be replicated or when they are proved incorrect in subsequent studies – and public retractions are a sign of strength.

That works up to a point, says Fang. "It ended up that there were 31 papers from the [Mori] laboratory that were retracted; many of those papers had been in the literature for five to 10 years," he says. "I realised that 'scientific literature is self-correcting' is a little bit simplistic. These papers had been read many times, downloaded, cited and reviewed by peers, and it was just a chance observation by a very attentive reviewer that opened this whole case of serious misconduct."

Extraordinary claims that change the paradigm for a field will elicit lots of attention, and people will look at the results very carefully. But in cases such as Dr Mori's – where the work is flawed and falsified but the results themselves are not particularly surprising or sensational, and may even be corroborated by others who perform their experiments legitimately – the misconduct is difficult to detect. "It's not that the results are wrong, it's that the data are false," says Fang.

And, often, research studies are very difficult to replicate. "If someone says they did a 15-year clinical study with 9,000 subjects and they publish their results, you may have to take their word for it because you're not going to be able to run out and recruit 9,000 patients of your own and do a 15-year study just to try to corroborate something that somebody else has done," says Fang. "A number of cases recently have come to light only because the investigators didn't have institutional review board approval for their studies. Upon digging deeper, the institutions questioned whether any of the studies were done at all. This kind of misconduct is very difficult to detect otherwise."

Selective publishing
In psychology research, there is a particular problem with researchers who selectively publish some of their experiments to guarantee a positive result. "Let's say you have this theory that, when you play Mozart, people want to pay more for musical instruments," says Simonsohn. "So you do a study and you play Mozart (or not) and you ask people, 'How much would you pay for a piano, a flute and five other instruments?'"

If it turned out that only the price of a single type of instrument, violins, say, went up after people had listened to Mozart, it would be possible to publish a research paper that omitted the fact that the researchers had ever asked about any other instruments. This would not allow the reader to make a proper assessment of the strength of the effect that Mozart may (or may not) have on how much a person would pay for musical instruments.

Fanelli has examined this positive result bias. He looked at 4,600 studies across all disciplines between 1990 and 2007, and counted the number of papers that, after declaring an intent to test a particular hypothesis, reported a positive support for it. The overall frequency of positive supports had grown by more than 22% over this time period. In a separate study, Fanelli found that "the odds of reporting a positive result were around five times higher among papers in the disciplines of psychology and psychiatry and economics and business compared with space science".

Culture of neophilia
This issue is exacerbated in psychological research by the "file-drawer" problem, a situation when scientists who try to replicate and confirm previous studies find it difficult to get their research published. Scientific journals want to highlight novel, often surprising, findings. Negative results are unattractive to journal editors and lie in the bottom of researchers' filing cabinets, destined never to see the light of day.

"We have a culture which values novelty above all else, neophilia really, and that creates a strong publication bias," says Chambers. "To get into a good journal, you have to be publishing something novel, it helps if it's counter-intuitive and it also has to be a positive finding. You put those things together and you create a dangerous problem for the field."

When Daryl Bem, a psychologist at Cornell University in New York, published sensational findings in 2011 that seemed to show evidence for psychic effects in people, many scientists were unsurprisingly sceptical. But when psychologists later tried to publish their (failed) attempts to replicate Bem's work, they found journals refused to give them space. After repeated attempts elsewhere, a team of psychologists led by Chris French at Goldsmiths, University of London, eventually placed their negative results in the journal PLoS One this year.

There is no suggestion of misconduct in Bem's research but the lack of an avenue in which to publish failed attempts at replication suggests self-correction can be compromised and people such as Smeesters and Stapel can remain undetected for a long time.

In some cases, misconduct (or fraud) has grave implications. In 2006, Anil Potti and colleagues at Duke University reported in the New England Journal of Medicine that they had developed a way to track the progression of a patient's lung cancer with a device, called an expression array, that could monitor the activity of thousands of different genes.

In a subsequent report in Nature Medicine, the same scientists wrote about a way to use their expression array to work out which drugs would work best for individual patients with lung, breast or ovarian cancer, depending on their patterns of gene activity. Within months of that publication, the biostatisticians Keith Baggerly and Kevin Coombes of the MD Anderson Cancer Centre in Houston had their doubts, and began uncovering major flaws in the work.

"It looked so promising that they actually started to do trials of cancer patients, they chose the chemotherapy depending on this test," says Wager. "The test has turned out to be completely invalid, so people were getting the wrong therapy, because the paper was not retracted quickly enough."

Blowing the whistle
Despite Baggerly and Coombes raising the alarm several times with the institutions involved, it was not until 2010 that Potti resigned from Duke University and several of the papers referring to his work on the expression array were retracted.

"Usually there is no official mechanism for a whistleblower to take if they suspect fraud," says Chambers. "You often hear of cases where junior members of a department, such as PhD students, will be the ones that are closest to the coalface and will be the ones to identify suspicious cases. But what kind of support do they have? ... That's a big issue that needs to be addressed."

In July this year, a group of the UK's main research funders and university groups published a Concordat to Support Research Integrity. "I don't think anyone would want to see a command-control direct regulation approach here," says Christopher Hale, deputy director of policy at Universities UK. "The concordat ... outlines a framework and then identifies how people fit within that and what actions they will take forward to strengthen it." The concordat requires institutions to have a process in place for dealing with misconduct, which includes appointing a senior person at the institution who can provide the necessary leadership and oversight during investigations.

Michael Farthing, vice-chair of the UK Research Integrity Office and vice-chancellor of the University of Sussex, has been a long-time campaigner on getting institutions and funders to take research misconduct seriously. In a recent article for Times Higher Education, Farthing said he supported the concordat but that it would not be enough. He stopped short of suggesting a statutory regulator for research but wrote: "Government and research leaders should take action to support and encourage excellence in research integrity, not sit on their hands until – as has happened in other countries – a scandal drives them towards legislation."

Statements of principle are one thing – every university and research council probably already has one applauding honourable research and deploring fraud – the key is the steps institutions take in understanding and de-incentivising misconduct.

The economics of science
The pressure to commit misconduct is complex. Arturo Casadevall of the Albert Einstein College of Medicine in New York and editor in chief of the journal mBio, places a large part of the blame on the economics of science. "What is happening in recent years is that the rewards have become too high, for example, for publishing in certain journals. Just like we see the problem in sports that, if you compete and you get a reward, it translates into everything from money and endorsements and things like that. People begin to take risks because the rewards are disproportionate."

As a PhD student in the 1980s, Casadevall says he published research in a few different journals depending on what his research was about. "Within 10 years, all you heard was, 'Where is the paper going to be published?' not 'What's in it?'. Scientists have got into this idea that where you publish determines the value of the work and that's crazy. What's important is what's in the paper."

Casadevall and Fang are aware that their spotlight on misconduct has the potential to show up scientists in a disproportionately bad light – as yet another public institution that cannot be trusted beyond its own self-interest. But they say staying quiet about the issue is not an option.

"Science has the potential to address some of the most important problems in society and for that to happen, scientists have to be trusted by society and they have to be able to trust each others' work," Fang says. "If we are seen as just another special interest group that are doing whatever it takes to advance our careers and that the work is not necessarily reliable, it's tremendously damaging for all of society because we need to be able to rely on science."

For Simonsohn, the biggest issue with outright fraud is not that the bad scientist gets caught but the corrupting effect the work can have on the scientific literature. To reduce the potential negative effects dramatically, Simonsohn suggests requiring scientists to post their data online. "That's very minimal cost and it has many benefits beyond reduction of fraud. It allows other people to learn things from your data which you were not able to learn about, it allows calibration of other models, it allows people to, three years later, reanalyse your data with new techniques."

Ivan Oransky, editor of the Retraction Watch blog that collects examples of retracted papers, argues: "The reason the public stops trusting institutions is when [its members] say things like, 'There's nothing to see here, let us handle it,' and then they find out about something bad that happened that nobody handled. That's when mistrust builds."

The big challenges that face humanity, says Casadevall, are scientific ones – climate change, a new pandemic, the fact that most of our calories are coming from a very few plants, which are susceptible to new pests. "These are the big problems and humanity's defence against them is science. We need to make the enterprise work better."

Malpractice and misconduct 
The South Korean scientist Hwang Woo-suk rose to international acclaim in 2004 when he announced, in the journal Science, that he had extracted stem cells from cloned human embryos. The following year, Hwang published results showing he had made stem cell lines from the skin of patients – a technique that could help create personalised cures for people with degenerative diseases. By 2006, however, Hwang's career was in tatters when it emerged that he had fabricated material for his research papers. Seoul National University sacked him and, after an investigation in 2009, he was convicted of embezzling research funds.

Around the same time, a Norwegian researcher, Jon Sudbø, admitted to fabricating and falsifying data. Over many years of malpractice, he perpetrated one of the biggest scientific frauds ever carried out by a single researcher – the fabrication of an entire 900-patient study, which was published in the Lancet in 2005.

Marc Hauser, a psychologist at Harvard University whose research interests included the evolution of morality and cognition in non-human primates, resigned in August 2011 after a three-year investigation by his institution found he was responsible for eight counts of scientific misconduct. The alarm was raised by some of his students, who disagreed with Hauser's interpretations of experiments that involved the, somewhat subjective, procedure of working out a monkey's thoughts based on its response to some sight or sound.

Hauser last week admitted to making "mistakes" that led to the findings of research misconduct. "I let important details get away from my control, and as head of the lab, I take responsibility for all errors made within the lab, whether or not I was directly involved," says Hauser in a statement sent to Nature. The doubts over Hauser's work affect a whole field of scientific work that uses the same research technique.

This article was amended on 14 September 2012. The original referred to Liz Wager of the Committee on Public Ethics rather than Publication Ethics. This has been corrected.

June 19, 2012

Romanian prime minister accused of plagiarism - The Guardian

Victor Ponta copied parts of his doctoral thesis from two law scholars, claims scientific journal
A scientific journal has claimed that Romania's new prime minister has copied large swaths of his doctoral thesis without proper attribution.
Nature said in a press release it had seen documents that indicated that more than half of Victor Ponta's 432-page thesis on the international criminal court was plagiarised from a work of two Romanian law scholars.
The state news agency, Agerpres, said Ponta denies the allegation and is offering to submit his work to "any kind of test". Ponta accused the Romanian president, Traian Basescu, a bitter political rival, of orchestrating the attack.
Ponta became prime minister on 7 May after the previous government was ousted by a lost confidence vote. Two of his appointments for education minister stepped down after they were accused of plagiarism.
