July 27, 2010

Scientists informally intervene in cases of sloppy research - Ars Technica

John Timmer
Most people involved in scientific research are well aware of the big three ethical lapses: fabrication, falsification, and plagiarism. These acts are considered to have such a large potential for distorting the scientific record that governments, research institutions, and funding bodies generally have formal procedures to investigate incidents, and formal sanctions for those found to have infringed. But the big three are hardly a complete list of all the problems that can produce misleading results; anything from poor record-keeping to sloppy techniques can cause errors to creep into the scientific literature, and there are rarely formal procedures to deal with them.
That doesn't mean they're not dealt with, however. A survey published by Nature has found that researchers regularly engage in informal interventions with colleagues if they suspect that there's any form of misconduct going on—even if they think the problems are inadvertent.
The survey asked about what its authors term "acts that could corrupt the scientific record," and defined them very broadly to include things like "poor supervision of assistants, carelessness, authorship disputes, failure to follow the rules of science, conflicts of interest, incompetence, and hostile work environments that impact on research quality." To get a sense for how these are dealt with, they looked up several thousand researchers who have received funding from the National Institutes of Health, and asked them to fill out an online survey.
The questions in the survey, as well as the responses of those queried, have been posted in a PDF at the authors' website.

Good news and bad news
The majority of the 2,600 researchers who responded had experienced a case where they suspected scientific errors were occurring—84 percent, in total. The authors ascribe this number, which is much higher than most other estimates, to the loose definition of misconduct that they provided. An alternate explanation might be that the self-selecting group that responded did so in part because they were aware of these issues. The authors omitted the 400 or so who had never noticed misconduct from most of their further analysis.
The good news for the scientific community is that, when researchers became aware of potential problems, they were fairly likely to do something about it. Almost two-thirds reported taking some type of action about the issues they noticed. Of the remainder, most felt either that action was already underway or that they were too removed from the lab in question to have a good sense of how to intervene.
Over 30 percent of those who acted went straight to the source, and had a discussion with the person they felt was having troubles. Another eight percent sent a message of concern to that individual (90 percent of these were signed), while 16 percent alerted someone in a position of authority about the trouble.
In about 21 percent of the cases where someone chose to intervene, the issue got bumped up to formal proceedings. Some of these may have been the result of denial on the part of the people involved (19 percent of the responses) or cases where the individuals failed to act at all (another 14 percent). Still, there were some good outcomes; in about 30 percent of the cases, the problem was either corrected or it was recognized that it was too late to do anything about it. One striking number here was that, out of all these instances, only a fraction of a percent turned out to be cases where the worries about problems were unwarranted.
About equal numbers of those polled expressed satisfaction and dissatisfaction with the results. Over half also felt that the incident had either had no effect on their career, or had even enhanced it. Still, that would seem to leave a lot of individuals who were dissatisfied and suffered some form of negative impact from the event.
There are a lot of interesting details in the numbers, as well. For example, many of those who chose to act did so in part because they considered their institutions unlikely to do anything. Those who were satisfied with the outcomes were also more likely to have been in a situation where the problems were inadvertent.
Overall, there are some promising aspects to these results. Scientists clearly feel that their ethics compel them to intervene in cases where the potential to distort the scientific record doesn't rise to the level of actual fraud. And many of these interventions appear to end in a satisfactory manner. But there are clearly still cases where institutions don't take the issues seriously, and the scientists who try to do the right thing feel that they suffer consequences as a result.
There's no obvious way to force institutions to take scientific errors and misconduct seriously. But the institutions that do so may want to consider the evidence that this informal policing of scientific ethics takes place. Providing support and advice on how to manage these situations, which can easily devolve into conflict, could significantly improve the scientific community's ability to police itself.

July 12, 2010

Prof Faces Plagiarism Charge

Shanghai Daily
A university professor is at the center of a plagiarism scandal after he was accused of copying from books written by Western researchers in his doctoral dissertation.
Zhu Xueqin, a history professor at Shanghai University, denied the online accusation after a newspaper reported on the controversy, and said he would soon publish a written refutation.
School officials said they were aware of the allegation and would study it, but no action has been taken so far.
A popular online post said Zhu's doctoral dissertation "The Collapse of Moral Utopia" had copied from several overseas books including "Rousseau and the Republic of Virtue" by American scholar Carol Blum.
Zhu said that he had listed the book in his references and made annotations.
He said the person accusing him of plagiarism should reveal his or her identity; if the accuser did so, Zhu said, he would be willing to talk with the person.
The online poster, who goes by the name "Isaiah" and is a PhD student, has thus far declined to reveal his or her true identity, according to the Oriental Morning Post.
Isaiah said Zhu copied lengthy paragraphs without attribution, and listed many detailed examples to support the claim. Moreover, Isaiah alleges that Zhu copied the book's main idea, context, examples and structure.
Zhu received his PhD in 1992, and his doctoral dissertation was published in 1994.
Isaiah said that Zhu may not be guilty of plagiarism by the academic norms of that time, but that he surely would be if the dissertation were published today.
Similar academic plagiarism scandals have occurred in recent years.
Wang Mingming, a professor at Peking University, was accused in 2002 of copying a book by an American researcher. The university suspended his recruitment of doctoral students.
Shanghai's Fudan University stripped Xu Yan of her associate professor title last February after plagiarism lawsuits were filed against her.

July 8, 2010

Journals step up plagiarism policing

Cut-and-paste culture tackled by CrossCheck software.
Declan Butler
Major science publishers are gearing up to fight plagiarism. The publishers, including Elsevier and Springer, are set to roll out software across their journals that will scan submitted papers for identical or paraphrased chunks of text that appear in previously published articles. The move follows pilot tests of the software that have confirmed high levels of plagiarism in articles submitted to some journals, according to an informal survey by Nature of nine science publishers. Strikingly, one journal reported rejecting 23% of its already-accepted submissions after checking them for plagiarism.
Over the past two years, many publishers (including Nature Publishing Group) have been trialling CrossCheck, a plagiarism checking service launched in June 2008 by CrossRef, a non-profit collaboration of 3,108 commercial and learned society publishers. The power of the service — which uses the iThenticate plagiarism software produced by iParadigms, a company in Oakland, California — is the size of its database of full-text articles, against which other articles can be compared. Publishers subscribing to CrossCheck must agree to share their own databases of manuscripts with it. So far, 83 publishers have joined the database, which has grown to include 25.5 million articles from 48,517 journals and books.
Catching copycats
As publishers have expanded their testing of CrossCheck in the past few months, some have discovered staggering levels of plagiarism, ranging from self-plagiarism to the copying of a few paragraphs to the wholesale copying of other articles. Taylor & Francis has been testing CrossCheck for six months on submissions to three of its science journals. In one, 21 of 216 submissions, or almost 10%, had to be rejected because they contained plagiarism; in the second journal, that rate was 6%; and in the third, 13 of 56 articles (23%) were rejected after testing, according to Rachael Lammey, a publishing manager at Taylor & Francis's offices in Abingdon, UK.
The three journals were deliberately selected because they had seen instances of plagiarism in the past, says Lammey. "My suspicion is that when we roll this out to other journals the numbers would be significantly lower." Mary Ann Liebert, a publishing company in New Rochelle, New York, has found that 7% of accepted articles in one of its journals had to be rejected following testing, says Adam Etkin, director of online and Internet services at the company.
CrossRef's product manager for CrossCheck, Kirsty Meddings, based in Oxford, UK, says that publishers are now checking about 8,000 articles a month, but many say that they have few hard statistics on the levels of plagiarism they are finding. Most are delegating CrossCheck testing to journal editors, and have not yet compiled detailed results. "We leave the use of the service to the discretion of the editor-in-chief of the journal, with some choosing to check every submission, but most use it only to check articles they consider suspicious," says Catriona Fennell, director of journal services at Elsevier in Amsterdam. "We are seeing a really wide variety of usage."
Publishers are unsure whether plagiarism is on the increase, whether it is simply being discovered more often, or both. "Not so many years ago, we got one or two alleged cases a year. Now we are getting one or two a month," says Bernard Rous, director of publications at the Association for Computing Machinery in New York, the world's biggest learned society for scientific computing, which is in the early stages of implementing CrossCheck. "There probably is more plagiarism than people have been aware of," adds Lammey.
Casting the net wider
The levels of plagiarism uncovered by CrossCheck have been more than enough to persuade publishers to embrace the software. "As you can see, CrossCheck is having an effect both on the papers we review and those we accept for publication, and with this in mind, we're keen to roll this trial out to our other journals," says Lammey. Most of the publishers interviewed by Nature said they had similar plans.
Using the CrossCheck software brings extra costs and overheads for journals. Publishers seem to find the fees, which start at $0.75 per article checked and decrease with volume, reasonable. The bigger overhead, they say, is the time needed for editors to check papers flagged by the software as suspiciously similar.
Establishing plagiarism requires "expert interpretation" of both articles, says Fennell. The software gives an estimate of the percentage similarity between a submitted article and ones that have already been published, and highlights text they have in common. But similar articles are sometimes false positives, and some incidents of plagiarism are more serious than others.
Self-plagiarism of materials and methods can sometimes be valid, for example, says Fennell. "There are only so many different ways you can describe how to run a gel," she says. "Plagiarism of results or the discussion is a greater concern." Sorting out acceptable practice from misconduct can often take a lot of time, says Lammey.
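To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of percentage-similarity scoring described above. It is not iThenticate's actual algorithm, which is proprietary; the word n-gram "shingles" and the Jaccard overlap measure are assumptions chosen purely for illustration.

```python
import re

def shingles(text, n=5):
    """Break text into overlapping n-word "shingles" (word n-grams)."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(submission, published, n=5):
    """Estimate percentage similarity as the Jaccard overlap of shingle sets."""
    a, b = shingles(submission, n), shingles(published, n)
    if not a or not b:
        return 0.0
    return 100.0 * len(a & b) / len(a | b)

def common_passages(submission, published, n=5):
    """Return the shared shingles -- the text an editor would be shown."""
    return shingles(submission, n) & shingles(published, n)

if __name__ == "__main__":
    # Two descriptions of the same routine lab protocol overlap heavily
    # even though neither was copied from the other.
    original = "We ran the samples on a polyacrylamide gel for two hours."
    rewritten = "The samples were run on a polyacrylamide gel for two hours."
    print(f"similarity: {similarity(original, rewritten, n=3):.1f}%")
```

Even this toy measure illustrates Fennell's point: two methods sections describing the same standard protocol will score as similar without any misconduct being involved, which is why flagged text still needs expert interpretation.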
Overall, publishers say that they are delighted to have a tool to police submissions. "We are using CrossCheck on about a dozen journals, and it has spotted things that we would otherwise have published," says Aldo de Pape, manager of science and business publishing operations at Springer in Rotterdam, the Netherlands. "Some were very blatant unethical cases of plagiarism. It has saved us a lot of embarrassment and trouble."

Plagiarism pinioned

NATURE/EDITORIAL  doi:10.1038/466159b Published online 07 July 2010
There are tools to detect non-originality in articles, but instilling ethical norms remains essential
It is both encouraging and disheartening to hear that major science publishers intend to roll out the CrossCheck plagiarism-screening service across their journals (see page 167).
What is encouraging is that many publishers are not only tackling plagiarism in a systematic way, but have agreed to do so by sharing the full text of their articles in a common database. This last was not a given, considering the conservatism of some companies, yet it was a necessary step for the service to function — the iThenticate software used by CrossCheck works by comparing submitted articles against a database of existing articles. CrossCheck's 83 members have already made available the full text of more than 25 million articles.
What is disheartening is that plagiarism seems pervasive enough to make such precautions necessary. In one notable pilot of the system on three journals, their publisher had to reject 6%, 10% and 23% of accepted papers, respectively.
Granted, there are reasons to believe that such levels of plagiarism are exceptional. Previous studies of samples on the physics arXiv preprint server (see Nature 444, 524–525; 2006) and of PubMed abstracts (see Nature doi:10.1038/news.2008.520; 2008) found much lower rates. But the reality is that data are sorely lacking on the true extent of plagiarism, whether its prevalence is growing substantially and what differences might exist between disciplines. The hope is that the roll-out of CrossCheck will eventually yield reliable data on such questions over wide swathes of the literature — while also acting as a powerful deterrent to would-be plagiarists.
In the process, editors and publishers must remember that plagiarism comes in many varieties and degrees of severity, and that responses should be proportionate. For example, past studies suggest that self-plagiarism, in which a researcher copies his or her own words from a published paper, is far more common than plagiarism of the work of others. Arguably, self-plagiarism can sometimes be justified, as when a researcher is bringing similar ideas before readers of journals in a different field. Plagiarism can also involve honest errors or mitigating circumstances, such as a scientist with a poor command of English paraphrasing some sentences of the introduction from similar work.
Such examples underscore that plagiarism-detection software is an aid to, not a substitute for, human judgement. One rule of thumb used by Nature journals and others in considering an article's degree of similarity to past articles — in particular, for small amounts of self-plagiarism in review articles — is whether the paper is otherwise of sufficient originality and interest.
Nature Publishing Group is a member of CrossCheck and has been testing the service on submissions to its own journals. It has noted only trace levels of plagiarism in research articles, which are spot-checked, and often in only the supplementary methods. Plagiarism has been more common in submitted reviews, all of which are tested. This is particularly true in clinical reviews, although the rates are still far below the 1% mark, and in most instances concerned some level of self-plagiarism.
Although the ability to detect plagiarism is a welcome advance, addressing the problem at its source remains the key issue. More and more learned societies, research institutions and journals have in recent years adopted comprehensive ethical guidelines on plagiarism, many of which carefully distinguish between different levels of severity. It is crucial that research organizations in all countries, and particularly the mentors of young researchers, instil in their scientists the accepted norms of the international scientific community when it comes to plagiarism and publication ethics.
