December 30, 2011

The Year of the Retraction: A look back at 2011

If Retraction Watch were actually a business, as opposed — for the moment, anyway — to a labor of love for two guys with day jobs, 2011 would have been a very good year for business.

It was a year that will probably see close to 400 retractions, including a number of high-profile ones, once the dust settles. Those high numbers caught the attention of a lot of major media outlets, from Nature to NPR to the Wall Street Journal. Science publications, including LiveScience and The Scientist, have done their own end-of-year retraction lists.
It was also a good year for us at Retraction Watch. Many news outlets featured us in their coverage, either picking up stories we’d broken or asking us for comment on big-picture issues. Three national NPR programs — Science Friday, On the Media, and All Things Considered — had us on air. We launched a column in LabTimes, and Nature asked us to write a year-end commentary. We even earned a Wikipedia entry.
All of that has contributed to the fact that sometime today, we’ll surpass 1.5 million pageviews. We’ve tapped into a passionate and helpful community of readers, without whom much of Retraction Watch wouldn’t be possible. You send us great tips, add valuable commentary, keep us honest by correcting our errors, and encourage us at every step.
So: Thank you.
Now, which journals had the most retractions?
By our unofficial count, the winner would appear to be the Journal of Biological Chemistry, with 15. Those tended to come in groups, with four from Silvia Bulfone-Paus’s lab, four from the late Maria Diverse-Pierlussi’s group, three from Jesse Roman’s lab, and two from Zhiguo Wang’s. One of the other two was from Harvard cancer researcher — and New Yorker staff writer — Jerome Groopman’s lab. There are four more on the way from Ashok Kumar’s University of Ottawa lab, although it looks as though those will actually appear in 2012.
We covered seven in Blood, eight in the Proceedings of the National Academy of Sciences, and eight in the Journal of Immunology.
Science had five, including two very high-profile ones (the longevity-genes paper and the chronic fatigue syndrome-XMRV paper), compared to just two last year.
Last year, Nature did some soul-searching when they got to November and had published four retractions. This year, they published just one.
Rounding out the “glamour journals,” Cell had just one, compared to four last year.
Personal best?
Of course, the person with the most retractions was Joachim Boldt, with 89. Naoki Mori was a distant second, with 32, although a few of those ran last year. Claudio Airoldi’s group retracted 11. And most of the journals listed above were touched by other big names in Retractionville, including Bulfone-Paus, who has now retracted 13, and Anil Potti, who has now retracted 7.
Jatinder Ahluwalia was only forced to retract one paper this year, after retracting one last year. But the revelations that followed cost him a job at the University of East London, and may cause Imperial to strip him of his PhD. His case is a good reminder of why it’s a good idea to poke at what lies beneath retractions.
Among our top five posts by traffic:
  1. The retraction of an editorial about why semen is a good Valentine’s Day gift, by eminent surgeon Lazar Greenfield
  2. The retraction of a bizarre paper claiming that science and spirituality both came from space
  3. Our coverage of the Claudio Airoldi case
  4. Our scoop on two retractions by Zhiguo Wang, who later resigned from the Montreal Heart Institute
Looking forward to 2012
Our wish list is much the same as it was for 2011, particularly better explanations of why particular papers are being retracted, and better publicity for retractions. We’ll add one item to that list: Journals, please stop letting researchers make claims in retraction notices and corrections, unless you peer-review them. Why should we trust the word of researchers who’ve demonstrated they make errors, intentional or not?
And we’ll be keeping an eye on what may be an emerging trend: The mega-correction. We’ve seen errata notices that correct so many different errors, it’s hard to believe the paper shouldn’t have been retracted. It’s unclear what this means yet, but watch this space for coverage of more examples.
We also may be coming to your town. See our list of upcoming appearances, which will be regularly updated. Get in touch if you’d like to host us; we love engaging with readers in person. And don’t forget the Retraction Watch Store.
Happy New Year.

December 6, 2011

10 Academic Frauds Who Had Everyone Fooled

Admit it. We’ve all had that moment, deep into a school research project, where the realization hits that the neat hypothesis we had when we started working is not going to be borne out by the data. At that point, we are faced with two options: a) start over, instantly making all those hours already spent a complete waste of time; or, b) fudge the data and transform that stinker into a sexy little piece of academia. For a student, the consequences of such fakery could be as severe as expulsion from school. But for professional academics, the stakes are much higher. Millions of dollars and professional and personal reputations hang in the balance.
Here are ten of the worst frauds, fakers, and phonies ever to pull the wool over the bespectacled eyes of the academic world:

December 5, 2011

Fraud in the ivory tower (and a big one too)

The fraud of Diederik Stapel – professor of social psychology at Tilburg University in the Netherlands – was enormous. His list of publications was truly impressive, in terms of the content of the articles as well as their sheer number and the prestige of the journals in which they appeared: dozens of articles in all the top psychology journals, with a number of them in famous general-science outlets such as Science. His seemingly careful research was very thorough in its design, and was thought to reveal many intriguing insights into fundamental human nature. The problem was, he had made it all up…
For years – so we know now – Diederik Stapel made up all his data. He would carefully review the literature, design all the studies (with his various co-authors), set up the experiments, print out all the questionnaires, and then, instead of actually running the experiments and distributing the questionnaires, simply make it all up. Just like that.
He finally got caught because, eventually, he did not even bother to fake new data. He reused the same (fake) numbers for different experiments and gave them to his various PhD students to analyze. Slaving away in their adjacent cubicles, the students discovered in disbelief that their very different experiments produced exactly the same statistical values (a near impossibility). When they compared their databases, there was substantial overlap. There was no denying it any longer: Diederik Stapel was making it up. He was immediately fired by the university, admitted to his lengthy fraud, and handed back his PhD degree.
In an open letter, sent to Dutch newspapers to try to explain his actions, he cited the huge pressure to come up with interesting findings in the publish-or-perish culture of the academic world, pressure he had been unable to resist and which drove him to his extreme actions.
There are various things I find truly remarkable and puzzling about the case of Diederik Stapel.
  • The first is the sheer scale and (eventually) outright clumsiness of his fraud. It makes me realize that there must be dozens, maybe hundreds, of others just like him. They just do it a little less often and less extremely, and are probably a bit more sophisticated about it, but they are subject to the exact same pressures and temptations as Diederik Stapel. Surely others give in to them as well. He got caught because he was flying so high, doing it so much and so clumsily. But I am guessing that for every fraud who gets caught through hubris, there are at least ten others who don’t.
  • The second is that he did it at all. Not only because it is fraud, unethical, and unacceptable, but also because he did not really seem to need it. You have to realize that “getting the data” is just a small proportion of all the skills and capabilities one needs to get published. You have to really know and understand the literature; you have to be able to carefully design an experiment, ruling out potential statistical biases, alternative explanations, and other pitfalls; you have to be able to write it up so that it catches people’s interest and imagination; and you have to be able to see the article through the various reviewers and steps in the publication process that every prestigious academic journal operates. Those are substantial and difficult skills, all of which Diederik Stapel possessed. All he did was make up the data, which is just a small part of the total set of skills required, and something he could easily have outsourced to one of his many PhD students. Sure, you then would not have had the guarantee that the experiments would come out the way you wanted, but who knows, they might have.
  • That’s what I find puzzling as well: at no point does he seem to have become curious whether his experiments might actually work without his making it all up. They were interesting experiments; wouldn’t you at some point be tempted to see whether they might work…?
  • I also find it truly amazing that he never stopped. He seems to have much in common with Bernard Madoff and his Ponzi scheme, or with the notorious rogue traders at investment banks: Nick Leeson, who brought down Barings Bank with £827 million in fraudulent trades; Société Générale’s Jérôme Kerviel (€4.9 billion); and UBS’s Kweku Adoboli ($2.3 billion). The difference: Stapel could have stopped. For people like Madoff or the rogue traders, there was no way back; once they had started the fraud, there was no stopping it. But Stapel could have stopped at any point. Surely at some point he must at least have considered this? I guess he was addicted: addicted to the status and aura of continued success.
  • Finally, what I find truly amazing is that he was teaching the Ethics course at Tilburg University. You just don’t make that one up; that’s Dutch irony at its best.

November 20, 2011

Journal Editors' Reactions to Word of Plagiarism? Largely Silence - THE CHRONICLE of HIGHER EDUCATION

Tom Bartlett
Lior Shamir was surprised to learn that one of his papers had been plagiarized. He was even more surprised to learn that it had been plagiarized, by his count, 21 times.
But what really astonished him is that no one seemed to care.
In July, Mr. Shamir, an assistant professor of computer science at Lawrence Technological University, near Detroit, received an anonymous e-mail signed "Prof. Against Plagiarism." That's how he found out that multiple paragraphs from a paper he had presented at a 2006 conference, titled "Human Perception-Based Color Segmentation Using Fuzzy Logic," also appeared in a 2010 paper by two professors in Iran. There was no question of coincidence—the wording was identical—and his paper wasn't even cited.
Curious, he started to poke around some more. One of the Iranian professors, Ali Moghani, a professor at the Institute for Color Science and Technology, in Tehran, appeared to have copied parts of the paper in eight different publications. (Mr. Moghani did not respond to a request for comment.) But he wasn't the only one. The more Mr. Shamir looked, the more he found. Those 21 papers had 26 authors, all of whom had published Mr. Shamir's work under their names, without credit.
It's not as if the paper was a central part of his academic work. In fact, he had forgotten about it until he got the anonymous e-mail.
Now, though, he was intrigued, and more than a little annoyed. So he started contacting journals, indexing services, conference organizers. He sent, by his estimate, about 30 e-mails. He expected that the papers, once it was shown that they had been plagiarized, would be retracted. Maybe he would get an explanation, or an apology, or a response of some kind.
In fact, he received only a couple of replies.
Among those he did receive was a reply from Mohammad Reza Darafsheh, the other Iranian academic. Mr. Darafsheh, a professor of mathematics at the University of Tehran, wrote that "[a]bout the overlap of some sentences in chapter 4 of our paper with yours we feel sorry." But he added that it was "only about one page." The e-mail ended with an offer to collaborate with Mr. Shamir in the future.
When contacted by The Chronicle, Mr. Darafsheh wrote in an e-mail that only one paragraph was identical to the original, and that it had "no scientific value." After it was pointed out to Mr. Darafsheh that, in truth, about 400 words of the eight-page paper appeared to have been copied directly from Mr. Shamir's paper, he insisted that there had been no copying, and that it was merely a "co-accident."
Mr. Darafsheh and Mr. Moghani's paper was published in the Italian Journal of Pure and Applied Mathematics. The Chronicle contacted the editor, Piergiulio Corsini, who in turn asked Violeta Leoreanu Fotea, a professor of mathematics at Alexandru Ioan Cuza University, in Romania, to investigate. After reviewing both papers, she wrote that she could "not say that Darafsheh and Moghani have plagiarized the work of Shamir."
After The Chronicle e-mailed her multiple examples of just such copying from the paper, Ms. Leoreanu Fotea acknowledged that it was "a lot of identical text," and said Mr. Corsini would decide how to handle the matter. But he wrote in an e-mail to The Chronicle that he was not sure what decision he was supposed to make. "The paper has been already published, and I cannot cancel it," he wrote. "I'm sorry for what happened."
Later, Ms. Leoreanu Fotea wrote to say that "two lines on this unpleasant episode of plagiarism" would appear in a future edition of the journal.
'Deny the Undeniable'
In 2009, another paper that borrowed heavily from Mr. Shamir's without credit was published in the Proceedings of the Second International Conference on Emerging Trends in Engineering & Technology. One of the co-authors was Preeti Bajaj, president of the G.H. Raisoni College of Engineering, in India, who was also chair of the conference where the plagiarized paper was presented.
That plagiarism was first reported this past September by the journal Nature India, in which Ms. Bajaj acknowledged that portions were copied but blamed a graduate assistant who was a coauthor of the paper. She told Nature India that the assistant had been fired. What she did not mention was that the paper was published again this year in the Journal of Information Hiding and Multimedia Signal Processing. In an e-mail to The Chronicle, she wrote in uncertain English that as a co-author, "I'm guilty but I didn't knew my student will do so." In a follow-up message, she asserted that the "research truth can be known to only those who understands and work on the technology." Ms. Bajaj did not respond to a request for further explanation.
"Surreal" was how Mr. Shamir characterized Ms. Bajaj's defense. Indeed, the response in general has bewildered him. He says he's been greeted either by silence or by attempts to "ridiculously deny the undeniable."
That reaction is echoed by Gerald Koocher, editor of the journal Ethics & Behavior and co-author of Responding to Research Wrongdoing: A User-Friendly Guide. He found Mr. Darafsheh's argument that only one page had been copied laughable. As for Ms. Bajaj's insisting that she didn't know what her graduate assistant had done, Mr. Koocher was unpersuaded: "What does it say about your scholarly integrity that you don't vet what your students write?"
Regarding the behavior of journal editors, he wonders whether there is a reluctance to investigate because doing so might reflect poorly on them. "If you admit that your journal published plagiarized material, you might feel that you have not adequately protected the journal," he says. Of course, Mr. Koocher says, that's no excuse.
At least one investigation continues. The American Institute of Physics, which published a paper co-written by Mr. Moghani in its conference proceedings, says it's looking into allegations of plagiarism. (In this case, Mr. Moghani's abstract is nearly identical to Mr. Shamir's.)
The institute did not respond to two e-mails Mr. Shamir wrote, but it did respond to an inquiry from The Chronicle. Mark Cassar, publisher of journals and technical publications at the institute, wrote that it "regrets not responding to Prof. Shamir in a timely fashion."
Mr. Shamir sees a larger danger here: "Science is based on sharing, and the sharing of results and ideas is protected by strict and well-defined ethics guidelines. If editors allow violating these guidelines, this whole sensitive structure might collapse."
So why did this one rather minor publication attract so many plagiarists? Mr. Shamir finds that yet another mystery. "It wasn't even such a good paper," he says.

November 17, 2011

Breaking news: Prolific Dutch heart researcher fired over misconduct concerns - Retraction Watch

Don Poldermans, a leading heart specialist, has been fired over concerns that he committed research misconduct. According to a report on the website DutchNews.nl:
"Erasmus University in Rotterdam has sacked a professor in cardio-vascular medicine for damaging the institution’s academic integrity and for ‘scientific misconduct’, the NRC reports on Thursday.
The professor is accused of faking academic data and compromising patient trust, the paper says. In particular, he failed to obtain patient consent for carrying out research and recorded results ‘which cannot be resolved to patient information,’ the university said.
Don Poldermans has spent years researching the risk of complications during cardio-vascular surgery and has some 500 publications to his name.
A spokesman for Poldermans told the paper he admitted not keeping to research protocols but denied faking data."
One of Poldermans’ most widely known areas of research involved the effects of beta-blockers on surgery patients, for which he conducted some of the foundational trials. A search of Medline revealed at least 75 publications on that subject alone.
So far, we have no indication about which, if any, of Poldermans’ publications will be retracted. Sixteen of his papers have been cited at least 100 times, according to Thomson Scientific’s Web of Knowledge, and one, in the European Heart Journal, has been cited more than 700.
Steven Shafer, editor of Anesthesia & Analgesia, which published one of Poldermans’ articles in 2009, as well as an editorial, called the news “mindboggling.”
"We’ll write a note to the university and ask them, is this paper fraudulent or not. When this happens you have to consider every paper suspect."
The case comes just weeks after officials at Tilburg University in the Netherlands fired Diederik Stapel, a noted social psychologist, for fabricating data in at least 30 papers.
Update 1:30 pm Eastern, 11/17/11: According to a statement from Poldermans’ institution, Erasmus MC, the researcher was fired earlier this week after questions surfaced about a study involving outcomes of surgery patients.
"Erasmus MC dismissed Prof. D. Poldermans on 16 November because of violation of academic integrity. Research carried out under his leadership was not always performed in accordance with current scientific standards.
An inquiry committee on Academic Integrity concluded that the professor was careless in collecting the data for his research. In one study it was found that he used patient data without written permission, used fictitious data and that two reports were submitted to conferences which included knowingly unreliable data.
Regret
The professor agrees with the committee’s conclusions and expressed his regret for his actions. Poldermans feels that as an experienced researcher he should have been more accurate, but states that his actions were unintentional.
Action
The study that gave rise to the inquiry committee’s action concerned the health of patients who had to undergo surgery. The aim of the study was to identify which factors can contribute to better estimating the risks of complications. There were no medical implications for the patients who took part in the studies.
Apologize
Erasmus MC will, however, endeavor to inform the patients concerned personally and apologize to them."
Here’s a link to a press release, in Dutch, from Erasmus MC about the matter.
All this suggests that the vast bulk of Poldermans’ 500-odd publications won’t require retraction. That should be a relief to editors and researchers — and patients — alike, given his outsized influence on the field. However, it’s still too soon to tell, and we’ll be watching this case closely as it unfolds.
Hat tip for press release: Larry Husten

November 3, 2011

Real scientists never report fraud

Diederik Stapel has been a psychology professor at major universities for the last ten years. He published well over 100 research papers in prestigious journals such as Science. Some of his research papers have been highly cited. He trained nearly 20 Ph.D. students.
He was recently fired when it was finally determined that he had been making up all of his research data, including the data he was providing to students. He made up research assistants and experiments. He wasn’t even particularly careful, as the data had significant statistical anomalies (such as identical averages for different data sets).
Managers, colleagues, journals, collaborators and competitors failed to openly report him. It took outsiders (students) to report him. The best journals, and correspondingly, the best scientists were repeatedly fooled by Stapel. Judging by his numerous citations, people built on his work…
People who want to believe that “peer reviewed work” means “correct work” will object that this is just one case. But what about the recently dismissed Harvard professor Marc Hauser? We find exactly the same story. Marc Hauser published over 200 papers in the best journals, making up data as he went. Again colleagues, journals and collaborators failed to openly challenge him: it took naive students, that is, outsiders, to report the fraud.
The real scientists, the peers of the researchers, don’t report fraud. Questioning someone’s results is a dangerous adventure.
Some point out to me that this does not apply to fields such as Computer Science. Really? Have you ever tried to reproduce the experimental results from popular papers? Quite often, it is very difficult or even impossible. It does not help that Computer Science researchers almost never post their software or data. (Almost all my software is already online.)
But what is critical is that traditional peer review does not protect against fraud. It is merely a check that the work appears superficially correct and interesting. A reviewer who would go out of his way to check whether a paper reports truthful results should not expect accolades. That is not how the game is played.
Further reading: How reliable is science?

The Fraud Who Fooled (Almost) Everyone - THE CHRONICLE of HIGHER EDUCATION

It’s now known that Diederik Stapel, the Dutch social psychologist who was suspended by Tilburg University in September, faked dozens of studies and managed not to get caught for years despite his outrageous fabrications. But how, exactly, did he do it?
That question won’t be fully answered for a while—the investigation into the vast fraud is continuing. But a just-released English version of Tilburg’s interim report on Stapel’s deception begins to fill in some of the details of how he manipulated those who worked with him.
This was, according to the report, his modus operandi:
  • Pretending to help fellow researchers
Stapel would chat with colleagues about what they were working on. Nothing unusual there. But then, as luck would have it, he would reveal that he had an old data set that he’d never gotten around to using that “matched the colleague’s needs perfectly.” He turned that data set over, the paper was published, and Stapel was listed as a co-author. None of those colleagues, according to the report, knew that the data were made up.
  • Making it seem plausible
Stapel was savvy enough to create convincing cover stories. The fictitious research he was doing would take “many weeks, or even months” to finish. When asked why other researchers couldn’t contact the high schools where he was conducting some of his research, Stapel explained that it was to “prevent the schools being overrun with similar requests, which would hamper [his] access to them.”
  • Mixing fact and fiction
While apparently Stapel could be “vague” at times about how his research was conducted, he threw in just enough actual details to create some verisimilitude. For instance, details about the curriculum and location of a particular high school were true, though the studies were never conducted and the research assistants who helped him were imaginary.
  • Intimidation
The report describes Stapel as charismatic and well respected. His research papers—like the one about how meat-eaters are supposedly selfish—made a splash with the news media. His success seemed to insulate him from criticism. When a young researcher asked for access to raw data, Stapel accused the researcher of “calling his capacities and experience as a renowned professor into question.” He also made those around him feel lucky to be working with him and bragged about his data. “Be aware that you have gold in your hands,” he told one researcher.
  • Controlling the data
This is probably the most important part of Stapel’s deception. He and he alone was in charge of his data. Others were not allowed access to it. He handled the processing and coding of the data. Graduate students who worked with him were told that “they could make better use of their time for the real scientific work (analyzing and writing).” Likewise, when working with more-senior researchers, Stapel “took personal charge of the ‘data collection’ and provided the outcomes, but not the raw data.” According to the report, “probing questions were usually cut short with an appeal to the trust that Mr. Stapel was entitled to.”
But he didn’t successfully bluff everybody. Three young researchers blew the whistle on Stapel in August, bringing their concerns to the head of the department. In addition, three other young researchers “had previously raised the alarm.” Two professors had suspicions but apparently didn’t come forward. From the report: “The committee concludes that the six young whistle-blowers showed more courage, vigilance, and inquisitiveness than incumbent full professors.”
While it is becoming clearer how Stapel committed his fraud, the larger question is why. In separate statements, he explained that “I was not able to withstand the pressure to score points, to publish, to always have to be better,” and that he felt “a sense of dismay and shame” but that he was “sincerely committed to the field of social psychology, young researchers, and other colleagues.”
Apparently, he saw no contradiction between that commitment and systematically manufacturing results for years, harming his graduate students and co-authors along the way, and staring down anyone who would dare question him.

November 1, 2011

Diederik Stapel: The Lying Dutchman - The Washington Post

Big science news today out of the Netherlands: A top social scientist, Diederik Stapel, of Tilburg University, has been suspended after an investigation showed that he’s been fabricating his data for years. This may seem far away and esoteric in the extreme, but there’s collateral damage here in DC, home base of the AAAS journal Science, which published one of Diederik Stapel’s papers in April.

That paper, “Coping With Chaos: How Disordered Contexts Promote Stereotyping and Discrimination,” claimed that people were more likely to be prejudicial toward others when in the presence of litter, a broken sidewalk, an abandoned bicycle, etc.

The problem is, there may not have been any experiment upon which this conclusion was based. Stapel apparently invented his raw data and then handed it to his graduate students to interpret. Read the story at Science Insider:

“The panel reported that he would discuss in detail experimental designs, including drafting questionnaires, and would then claim to conduct the experiments at high schools and universities with which he had special arrangements. The experiments, however, never took place, the universities concluded. Stapel made up the data sets, which he then gave the student or collaborator for analysis, investigators allege. In other instances, the report says, he told colleagues that he had an old data set lying around that he hadn’t yet had a chance to analyze. When Stapel did conduct actual experiments, the committee found evidence that he manipulated the results.

“Many of Stapel’s students graduated without having ever run an experiment, the report says. Stapel told them that their time was better spent analyzing data and writing. The commission writes that Stapel was ’lord of the data’ in his collaborations. It says colleagues or students who asked to see raw data were given excuses or even threatened and insulted.”

It’s not known yet if the Science paper in April was one of the ones with fabricated data, but Science spokeswoman Kathy Wren said this afternoon, “It seems highly likely that this Science paper is involved.” She said the Dutch investigators alerted the journal in September that the April paper might be tainted.

The journal’s editor-in-chief, Bruce Alberts, issued a brief statement today, called an “Editorial Expression of Concern,” in which he noted the findings released Monday by the Dutch investigators. He said the report “indicates that the extent of the fraud by Stapel is substantial.”

His students were victims, too — and ultimately realized that they were being taken for a ride. According to Science Insider, 14 of 21 of the theses published by Stapel’s students were affected by the tainted data.

Should the journal Science have known that this was a bogus paper? There’s a peer review process, but it’s one that isn’t designed to detect outright, bald-faced fraud.

Wren said today, “Science is not an investigative body, and so if a scientist is intentionally trying to deceive, the peer review system is not really set up to investigate that sort of thing.”

[Good report here by Ewen Callaway of Nature, republished by Scientific American’s website. Describes Stapel as a wunderkind. The investigative report has a statement from Stapel: “I have made mistakes, but I was and am honestly concerned with the field of social psychology. I therefore regret the pain that I have caused others.”]

October 21, 2011

Summation Letter on Plagiarism Case: Psychopathy requires legal entitlement, not logic or shame

This is to summarize the situation since my discovery, on Sept. 28, 2011, of a masterpiece of plagiarism within a tortuous system of willful aiding and abetting of the theft of intellectual property, amid lip-service to high piety.
Over a hundred senior academics and senior civil servants of Pakistan occupying the position of Vice Chancellor/Rector were informed of this system-wide collusion on October 07, 2011 in a detailed Letter to the Editor.

October 6, 2011

Foreign student rule-breaking: culture clash or survival skills? -TIMES HIGHER EDUCATION

North American administrators call high rates of plagiarism 'tip of the iceberg'. Jon Marcus reports
Gary Pavela remembers being surprised by the defiant reaction of a visiting student from China whom he confronted over a clear-cut incident of plagiarism.
"But in my culture, we view it as honouring someone to use their words," the student told Mr Pavela, who is the director of academic integrity at Syracuse University in the US.
He thought about that for a moment before responding.
No, Mr Pavela told the student, there really was no cultural difference in that regard.
"All we're asking is that you honour them a little bit more by giving them the credit," he said.
Such conversations are becoming increasingly commonplace for administrators in the US and Canada, as North American universities aggressively recruit international students - and find that a disproportionate number of them break the academic rules.
In one study, the University of Windsor in the Canadian province of Ontario tracked how many foreign students were being cited for academic dishonesty compared with their Canadian classmates. It found that one in 53 international students had been charged versus one in 1,122 Canadians.
Even that, said Danielle Istl, Windsor's academic integrity officer, "is only the tip of the iceberg. We don't know how much goes on behind the scenes."
Most of the international students who wound up in the disciplinary process were accused of plagiarism, she added.
"To me, that isn't that surprising because you have students whose first language isn't English and they may struggle writing papers in English."
However, other studies have found that the most common offence perpetrated by foreign students is cheating in examinations.
But many of the misdemeanours are not deliberate, said Florida Doci, a student from Albania and an officer of Windsor's International Student Society.
"Most of the international students have not had to write a paper and follow the rules of referencing (before)," she said. "They happen to cheat or make mistakes like this because they don't know they're doing it. They're used to writing down whatever they read.
"I see it more as a problem that affects international students because of where they come from, rather than something they're doing intentionally." 

'Survival mechanism'
While administrators are hesitant to generalise further about what may be driving students from abroad to cheat, they acknowledge that cultural differences play a major role - although not the kind claimed by Mr Pavela's unrepentant student.
Twenty per cent of international students in the US come from China (up 30 per cent on last year alone) and 15 per cent are from India, the largest groups of foreign students in the country (the numbers are similar in Canada). Experienced administrators suggest that this has a lot to do with the rise in cheating.
In some countries - China and India included - "the climate for academic integrity is not strong", said Mr Pavela, a lawyer by training who has served as a consultant to the US State Department.
"It is not simply an issue of the deficiencies of students, but includes faculty who cut corners or who do not share any more of a commitment to academic integrity than students do," he added.
Cheating for such students, he said, "is a survival mechanism. They are part of cultures where you have to do what you have to do."
Compounding this is the pressure heaped on Chinese and Indian students by relatives and sponsors.
"Those pressures include the potential embarrassment of having to go home (having not) succeeded here," said Don McCabe, professor of management and global business at Rutgers Business School and founding president of the Center for Academic Integrity.
But Professor McCabe added that US and Canadian universities had to take their share of the blame, too.
"It's the fault of the institutions in the sense that they aggressively recruit these students and don't adequately orient them in the different traditions of academic integrity," he argued.
At Windsor, international undergraduates do receive orientation, including a separate programme for engineering and management students, and yet another focused on academic integrity and managing exams tailored to foreign graduate teaching assistants.
International students in master's programmes for management and engineering are also required to sign "academic honesty agreements".
There are plans for even more comprehensive measures to be introduced next year.
Mr Pavela said this was welcome, but cautioned that highlighting concerns about international students' honesty could cause further problems.
"The debate here includes whether there is a 'spotlighting effect' going on, that we are more likely to scrutinise people from a different culture," he said.

October 5, 2011

Retracted Science and the Retraction Index

Ferric C. Fang, Editor in Chief
Arturo Casadevall, Editor in Chief
R. P. Morrison, Editor

Articles may be retracted when their findings are no longer considered trustworthy due to scientific misconduct or error, they plagiarize previously published work, or they are found to violate ethical guidelines. Using a novel measure that we call the “retraction index,” we found that the frequency of retraction varies among journals and shows a strong correlation with the journal impact factor. Although retractions are relatively rare, the retraction process is essential for correcting the literature and maintaining trust in the scientific process.>>>
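As we understand the editorial, the retraction index for a journal is the number of retractions over 2001–2010, multiplied by 1,000 and divided by the number of articles with abstracts the journal published in the same window. A minimal sketch of that calculation (the journal figures used below are hypothetical, for illustration only):

```python
def retraction_index(retractions: int, articles_published: int) -> float:
    """Retraction index as described by Fang and Casadevall (2011):
    retractions in a given window (they used 2001-2010), times 1,000,
    divided by the number of articles with abstracts published in it."""
    if articles_published <= 0:
        raise ValueError("articles_published must be positive")
    return retractions * 1000 / articles_published

# Hypothetical journal: 10 retractions out of 25,000 articles over the decade
print(retraction_index(10, 25_000))  # 0.4
```

Scaling by 1,000 keeps the index in a readable range, since even retraction-prone journals withdraw only a tiny fraction of what they publish.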

Science publishing: The trouble with retractions - NATURE

A surge in withdrawn papers is highlighting weaknesses in the system for handling them.
This week, some 27,000 freshly published research articles will pour into the Web of Science, Thomson Reuters' vast online database of scientific publications. Almost all of these papers will stay there forever, a fixed contribution to the research literature. But 200 or so will eventually be flagged with a note of alteration such as a correction. And a handful — maybe five or six — will one day receive science's ultimate post-publication punishment: retraction, the official declaration that a paper is so flawed that it must be withdrawn from the literature.

It is reassuring that retractions are so rare, for behind at least half of them lies some shocking tale of scientific misconduct — plagiarism, altered images or faked data — and the other half are admissions of embarrassing mistakes. But retraction notices are increasing rapidly. In the early 2000s, only about 30 retraction notices appeared annually. This year, the Web of Science is on track to index more than 400 (see 'Rise of the retractions') — even though the total number of papers published has risen by only 44% over the past decade. 
Perhaps surprisingly, scientists and editors broadly welcome the trend. "I don't think there's any doubt that we're detecting more fraud, and that systems are more responsive to misconduct. It's become more acceptable for journals to step in," says Nicholas Steneck, a research ethicist at the University of Michigan in Ann Arbor. But as retractions become more commonplace, stresses that have always existed in the system are starting to show more vividly.
When the UK-based Committee on Publication Ethics (COPE) surveyed editors' attitudes to retraction two years ago, it found huge inconsistencies in policies and practices between journals, says Elizabeth Wager, a medical writer in Princes Risborough, UK, who is chair of COPE. That survey led to retraction guidelines that COPE published in 2009. But it's still the case, says Wager, that "editors often have to be pushed to retract".
Other frustrations include opaque retraction notices that don't explain why a paper has been withdrawn, a tendency for authors to keep citing retracted papers long after they've been red-flagged (see 'Withdrawn papers live on') and the fact that many scientists hear 'retraction' and immediately think 'misconduct' — a stigma that may keep researchers from coming forward to admit honest errors.
Perfection may be too much to expect from any system that has to deal with human error in all its messiness. As one journal editor told Wager, each retraction is "painfully unique".
But as more retractions hit the headlines, some researchers are calling for ways to improve their handling. Suggested reforms include better systems for linking papers to their retraction notices or revisions, more responsibility on the part of journal editors and, most of all, greater transparency and clarity about mistakes in research.
The reasons behind the rise in retractions are still unclear. "I don't think that there is suddenly a boom in the production of fraudulent or erroneous work," says John Ioannidis, a professor of health policy at Stanford University School of Medicine in California, who has spent much of his career tracking how medical science produces flawed results.
In surveys, around 1–2% of scientists admit to having fabricated, falsified or modified data or results at least once (D. Fanelli PLoS ONE 4, e5738; 2009). But over the past decade, retraction notices for published papers have increased from 0.001% of the total to only about 0.02%. And, Ioannidis says, that subset of papers is "the tip of the iceberg" — too small and fragmentary for any useful conclusions to be drawn about the overall rates of sloppiness or misconduct.
Instead, it is more probable that the growth in retractions has come from an increased awareness of research misconduct, says Steneck. That's thanks in part to the setting up of regulatory bodies such as the US Office of Research Integrity in the Department of Health and Human Services. These ensure greater accountability for the research institutions, which, along with researchers, are responsible for detecting mistakes.
The growth also owes a lot to the emergence of software for easily detecting plagiarism and image manipulation, combined with the greater number of readers that the Internet brings to research papers. In the future, wider use of such software could cause the rate of retraction notices to dip as fast as it spiked, simply because more of the problematic papers will be screened out before they reach publication. On the other hand, editors' newfound comfort with talking about retraction may lead to notices coming at an even greater rate.
"Norms are changing all the time," says Steven Shafer, editor-in-chief of the journal Anesthesia & Analgesia, who has participated in two major misconduct investigations — one of which involved 11 journals and led to the retraction of some 90 papers.

It's none of your damn business!
But willingness to talk about retractions is hardly universal. "There are a lot of publishers and a lot of journal editors who really don't want people to know about what's going on at their publications," says New York City-based writer Ivan Oransky, executive editor at Reuters Health. In August 2010, Oransky co-founded the blog Retraction Watch with Adam Marcus, managing editor at Anesthesiology News. Since its launch, Oransky says, the site has logged 1.1 million page views and has covered more than 200 retractions.
In one memorable post, the reporters describe ringing up one editor, L. Henry Edmunds at the Annals of Thoracic Surgery, to ask about a paper withdrawn from his journal (see go.nature.com/ubv261). "It's none of your damn business!" he told them. Edmunds did not respond to Nature's request to talk for this article.
The posts on Retraction Watch show how wildly inconsistent retraction practices are from one journal to the next. Notices range from informative and transparent to deeply obscure. A typically unhelpful example of the genre would be: "This article has been withdrawn at the request of the authors in order to eliminate incorrect information." Oransky argues that such obscurity leads readers to assume misconduct, as scientists making an honest retraction would, presumably, try to explain what was at fault.
To Drummond Rennie, deputy editor of the Journal of the American Medical Association, there are two obvious reasons for obscure retraction notices: "fear and work."
The fear factor, says Wager, is because publishers are very frightened of being sued. "They are incredibly twitchy about publishing anything that could be defamatory," she says.
'Work' refers to the phenomenal effort required to sort through authorship disputes, concerns about human or animal subjects, accusations of data fabrication and all the other ways a paper can go wrong. "It takes dozens or hundreds of hours of work to get to the bottom of what's going on and really understand it," says Shafer. Because most journal editors are scientists or physicians working on a voluntary basis, he says, that effort comes out of their research and clinical time.
But the effort has to be made, says Steneck. "If you don't have enough time to do a reasonable job of ensuring the integrity of your journal, do you deserve to be in business as a journal publisher?" he asks. Oransky and Marcus have taken a similar stance. This summer, for example, Retraction Watch criticized the Journal of Neuroscience for a pair of identical retraction notices it published on 8 June: "At the request of the authors, the following manuscript has been retracted."
But the journal's editor-in-chief, neuroscientist John Maunsell of Harvard Medical School in Boston, Massachusetts, argues that such obscurity is often the most responsible course to take. "My feeling is that there are far fewer retractions than there should be," says Maunsell, who adds that he has conducted 79 ethics investigations in more than 3 years at the journal — 1 every 2–3 weeks. But "authors are reluctant to retract papers", he says, "and anything we put up in the way of a barrier or disincentive is a bad thing. If authors are happier posting retractions without extra information, I'd rather see that retraction go through than provide any discouragement."
At the heart of these arguments, says Steneck, lie shifting norms of how responsible journal editors should be for the integrity of the research process. In the past, he says, "they felt that institutions and scientists ought to do it". More and more journal editors today are starting to embrace the gatekeeper role. But even now, Shafer points out, they have only limited authority to challenge institutions that are refusing to cooperate. "I have had institutions, where I felt there was very clear misconduct, come back and tell me there was none," Shafer says. "And I have had a US institution tell me that they would look into allegations of misconduct only if I agreed to keep the results confidential."

The blame game
Discussions on Retraction Watch make it clear that many scientists would like to separate two aspects of retraction that seem to have become tangled together: cleaning up the literature, and signalling misconduct. After all, many retractions are straightforward and honourable. In July, for example, Derek Stein, a physicist at Brown University in Providence, Rhode Island, retracted a paper in Physical Review Letters on DNA in nanofluidic channels when he found that a key part of the analysis had been performed incorrectly. His thoroughness and speed — the retraction came just four months after publication — were singled out for praise on Retraction Watch.
But because almost all of the retractions that hit the headlines are dramatic examples of misconduct, many researchers assume that any retraction indicates that something shady has occurred. And that stigma may dissuade honest scientists from doing the right thing. One American researcher who talked to Nature about his own early-career retraction said he hoped that his decision would be seen as a badge of honour. But, even years later and with his career established, he still did not want Nature to use his name or give any details of the case.
There is no general agreement about how to reduce this stigma. Rennie suggests reserving the retraction mechanism exclusively for misconduct, but that would require the creation of a new term for withdrawals owing to honest mistakes. At the other extreme, Thomas DeCoursey, a biologist at Rush University Medical Center in Chicago, argues for retraction of any paper that publishes results that are not reproducible. "It does not matter whether the error was due to outright fraud, honest mistakes or reasons that simply cannot be determined," he says.
A better vocabulary for talking about retractions is needed, says Steneck — one acknowledging that retractions are just as often due to mistakes as to misconduct. Also useful would be a database for classifying retractions. "The risk for the research community is that if it doesn't take these problems more seriously, then the public — journalists, outsiders — will come in and start to poke at them," he points out.
The only near-term solution comes back to transparency. "If journals told readers why a paper was retracted, it wouldn't matter if one journal retracted papers for misconduct while another retracted for almost anything," says Zen Faulkes, a biologist at the University of Texas–Pan American in Edinburg, Texas.
Oransky agrees. "I think that what we're advocating is part of a much larger phenomenon in public life and on the Web right now," he says. "What scientists should be doing is saying, 'In the course of what we do are errors, and among us are also people that commit misconduct or fraud. Look how small that number is! And here's what we're doing to root that out.'"
Richard Van Noorden is an assistant news editor for Nature in London.

September 13, 2011

Paper mill websites increase in Turkey

Çağla Pınar Tunçel - Hürriyet Daily News
Academics have decried the rise in the number of Turkish “paper mill” websites offering to write theses for students, yet company officials have defended their business, saying they are legal even as scholars warn of the ramifications.
“Our company, which is run by academics, provides translation service and ensures that the text conforms with linguistic terminology while writing the thesis,” one company official told the Hürriyet Daily News on condition of anonymity.
The official said the business was legal because his company only provided thesis “consultancy” and paid tax, but added that many other disrespectful companies were becoming involved in plagiarism as they wrote the theses.
But Bertil Emrah Odar, the dean of Koç University’s Law Department, told the Daily News that the businesses, which offer to produce unique, personalized content, were entirely illegal, and that a student who bought a paper could become the subject of an investigation by either the Higher Education Board (YÖK) or the university administration.
“An academic may lose his title in the event of plagiarism,” she said. “If the person is a member of an association, for instance a doctors’ or lawyers’ association, then the group may decide to ban the person from the occupation forever.”
Ali Çarkoğlu, an academic at Koç University, said that even if an investigation did not result in any punishment, a scholar thought to have bought an article would likely be ostracized by the academic community.
The cost of purchasing such material ranges between 5,000 and 20,000 Turkish Liras, depending on criteria such as whether the work is a master’s or a Ph.D. thesis, whether any surveys were conducted and whether any foreign sources were used, Doğan news agency, or DHA, reported Monday.
Requests to write theses are usually rejected if fewer than eight weeks are left to finish the work, while clients are also required to transmit all interviews with their thesis advisers to the consultancy businesses, according to reports.
Most writers refuse to do any work for less than 1,000 liras, or pen any theses for less than 3,000 liras. Reports indicate that many customers are attendees of private universities, while students who are employed in a job are also more likely to use such services.
Following negotiations, some 20 percent of the price is paid by the clients at the beginning of the work, while another 20 percent is paid after the draft theses are finished. The rest of the amount is paid after the work is finally completed, according to reports.
Many of the enterprises work with around 300 to 400 expert personnel who specialize in about 200 different topics, reports said.

September 5, 2011

Publish-or-perish: Peer review and the corruption of science - The Guardian

David Colquhoun   
Pressure on scientists to publish has led to a situation where any paper, however bad, can now be printed in a journal that claims to be peer-reviewed.
Peer review is the process that decides whether your work gets published in an academic journal. It doesn't work very well any more, mainly as a result of the enormous number of papers that are being published (an estimated 1.3 million papers in 23,750 journals in 2006). There simply aren't enough competent people to do the job. The overwhelming effect of the huge (and unpaid) effort that is put into reviewing papers is to maintain a status hierarchy of journals. Any paper, however bad, can now get published in a journal that claims to be peer-reviewed.
The blame for this sad situation lies with the people who have imposed a publish-or-perish culture, namely research funders and senior people in universities. To have "written" 800 papers is regarded as something to boast about rather than being rather shameful. University PR departments encourage exaggerated claims, and hard-pressed authors go along with them. >>>

August 11, 2011

Q&A: The Impact of Retractions - TheScientist

Is the pressure of the publish-or-perish mentality driving more researchers to commit misconduct? By Tia Ghose 
After six articles from a single research group—the laboratory of Naoki Mori at the University of the Ryukyus in Japan—were retracted from Infection and Immunity earlier this year, Editor-in-Chief Ferric Fang did some soul searching. He and Arturo Casadevall, editor-in-chief of the American Society for Microbiology journal mBio and Fang’s long-time friend and colleague, decided to explore the issue more deeply in an editorial published this week (August 8) in Infection and Immunity.
Fang, a bacteriologist at the University of Washington, recently talked with The Scientist about the rising number of retractions, why high profile journals may have more retractions, and what pressures lead some scientists to fudge their data.
The Scientist: Tell me a little more about the retractions in the Infection and Immunity articles.
Ferric Fang: [An investigation by the investigator’s institution found that] gel pictures had been cut and pasted, and then misrepresented to be different things. We reviewed all the manuscripts and came to the conclusion that the institution was correct. At this point we notified the investigator of our findings and we invited him to reply and try to explain the findings. Through this discussion, we reached our conclusion that in fact there had been inappropriate manipulation of these figures.
This led us to do some soul searching about why misconduct occurs and whether retractions are really all there is to it—and they’re pretty rare—or whether there’s a lot more misconduct going on, and retractions are the tip of the iceberg. And I’m sorry to say I’ve come more or less to the latter conclusion.
TS: In your editorial, you note that retractions are on the rise. Why is that, and is there any way to reverse the trend?
FF: I think it behooves scientists to take a look at the way we have organized the scientific community and the kinds of pressure we put on scientists. We have a situation now where people’s careers are on the line, it’s very difficult to get funding, and getting funding is dependent on publication. They’re human beings and if we put them under extraordinary pressures, they may in some cases yield to bad behavior.
TS: You also developed the “retraction index,” a measure of a given journal’s retraction rate, which showed the rate of retraction was positively correlated with the impact factor of the journal. Why do you think that is?
FF: The idea to look at the correlation between the number of retractions and journal impact factor was first suggested by my co-author, Arturo Casadevall. One of the reasons we devised this retraction index is the idea that maybe the pressures to get papers into prestigious journals were a driving force in encouraging people to engage in misconduct. I’m not excusing the behavior by any means. But I know of cases, for example, where scientists have committed misconduct because, if they’re not successful in their research, they’ll lose their jobs and might be deported from the country. So these are extraordinary pressures that are being put on people. I don’t think it’s going to bring out the best science—it’s going to discourage a lot of things we want to have in science, like people feeling free to explore and take chances.
TS: Is it possible that there are more people looking at those top-tier journals, so the mistakes are just caught more?
FF: That’s certainly a possibility. Extraordinary claims require a higher bar before the scientific community accepts them, and I think some of the work that’s published in the glamour-mag journals—Science, Nature, Cell—is in those journals because it’s sensational: things like the arsenic-using bacterium, for example, or the novel murine virus that was associated with chronic fatigue syndrome. These claims, because they have such enormous implications and because they’re so sensational, are going to be subjected to a very high level of scrutiny. If such a claim were made in an obscure journal, it might take longer to attract attention.
TS: Reviewers are the main route to catch misconduct before publication, but retractions are on the rise. Is there a better system?
FF: I don’t know that there is a better system… We’ve had a number of times where questions have been raised about whether data are fishy or not, and we haven’t been able to conclusively establish that. And you don’t have access to the primary data, right? You don’t have the lab notebook, you’re not there at the bench when the person is doing that experiment.
Reviewers may call into question certain observations, but if you have a single lane in a gel that’s beautifully spliced in but is actually lifted from another paper in another field, from the same lab four years earlier in a completely different journal, it will just take dumb luck for the reviewer to realize that.
TS: What if people just submitted their raw data when they submitted a paper?
FF: I think it would make the job of reviewing incredibly more challenging. But I don’t think even that can completely solve the problem. You don’t have any way of knowing that what is sent to you is really complete or accurate. If somebody is bound and determined to commit misconduct, they’re going to be very difficult to detect.
F. Fang, A. Casadevall, “Retracted science and the retraction index,” Infection and Immunity, doi:10.1128/IAI.05661-11, 2011.
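The positive correlation between impact factor and retraction index that Fang and Casadevall report can be illustrated with a short sketch. The journal figures below are hypothetical, chosen only to show how such a correlation would be computed, not the paper's actual data:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sd_x = sum((x - mx) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical (impact factor, retraction index) pairs for five journals
impact_factors     = [31.4, 28.1, 9.6, 5.5, 3.2]
retraction_indices = [1.80, 1.30, 0.45, 0.25, 0.10]

r = pearson_r(impact_factors, retraction_indices)
print(round(r, 2))  # strongly positive for this toy data
```

A coefficient near 1 for data shaped like this is what a "strong correlation with the journal impact factor" means in practice; the actual editorial's figure was computed from real journal counts, of course, not toy values.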
