December 10, 2009

Plagiarism scandal grows in Iran

Investigation finds more cases of duplication in publications co-authored by ministers and senior officials.
EXCLUSIVE
Nature has uncovered further instances of apparent plagiarism in papers co-authored by government ministers and senior officials in Iran. The spate of new examples raises questions about whether such incidents are symptomatic of conditions also common in other developing countries — such as difficulties with English or pressure to acquire academic credentials as a prerequisite for promotion — or whether they are also linked specifically to the Iranian regime, where growth of a merit-based university culture has been undermined by political appointments and purges of reform-minded scientists (see page 699).>>>

October 12, 2009

Analysis of retractions puts spotlight on academia

Nicola Jones

Nature Medicine 15, 1101 (2009)
doi:10.1038/nm1009-1101


About half of the medical papers retracted over the past few decades were pulled because of misconduct rather than an innocent mistake, according to two new studies. And that fraction is on the increase.
Yet although drug companies are often portrayed by the popular press as the source of all evil in biomedical publishing, just 4% of retractions due to misconduct had declared pharmaceutical sponsorship.>>>

September 9, 2009

Peer reviewers satisfied with system: TIMES HIGHER EDUCATION

David Schley

But Sense About Science survey finds that two thirds of those polled think it is failing to detect plagiarism.
With the number of learned papers published each year rising to 1.3 million, the peer-review system might be expected to be fraying at the seams.
But an international survey of academics finds that two thirds are satisfied with the current system for monitoring the quality of scholarly output, and 90 per cent of those who participate as reviewers remain keen to take part.
The findings were published by the charity Sense About Science at the British Science Festival, held at the University of Surrey, on 8 September.
Tracy Brown, the charity’s managing director, said the issue of whether the system was sustainable was a matter of “public as well as scientific interest”.
But while many of the survey’s findings are reassuring, concerns have been raised.
The vast majority of researchers polled say that peer review should detect plagiarism and fraud, but only about one third think it is doing so.
Similarly, while most respondents say that the system should be able to ensure that papers acknowledge any previous work used, only half think it does so effectively.
Despite these issues, participants caution that expecting reviewers to approach manuscripts with suspicion runs counter to the assumption of honesty and the spirit of collaboration in science.
They add that such a tactic would make the task of peer review unmanageable.
Adrian Mulligan, associate director of research and academic relations at Elsevier, said that the launch later this year of CrossCheck, a pan-publisher plagiarism-detection tool, could resolve some of the problems raised.
Given the principle of openness in science, there is a surprisingly strong desire for anonymity from reviewers, with a double-blind process considered to be most effective.
This consensus has been attributed to a desire to protect junior academics asked to review work by more senior colleagues. According to the survey, editors have warned that completely open reviewing reduced the number of people willing to participate and led to “lame” reviews of little value.
Although more than two thirds of the survey’s respondents state that training would be beneficial, Ms Brown said she was hesitant about the peer-review process being professionalised, as it was difficult to see how any qualification could meet the needs of different disciplines.
Instead, she advocated the nurturing of postdoctoral researchers and postgraduate students by more experienced peers, but noted with disappointment that very few reviews were currently undertaken collaboratively with junior colleagues.
A full report is due to be published in November – following peer review.
For more details, see:
www.senseaboutscience.org.uk/index.php/site/project

Further survey findings
A third of respondents say they are happy to review up to five papers a year, with a further third happy to review up to ten.

On average, academics decline two papers each year, principally because they are outside their area of expertise, although workload is another frequently cited reason.

The average time taken to review a paper is six hours. However, there is a great deal of variability: one in every 100 participants in the survey claims to have taken more than 100 hours to review their last paper.

August 30, 2009

Self-plagiarism: unintentional, harmless, or fraud?

THE LANCET
Volume 374, Issue 9691, 29 August 2009-4 September 2009, Page 664
Editorial

The intense pressure to publish to advance careers and attract grant money, together with decreasing time available for busy researchers and clinicians, can create a temptation to cut corners and maximise scientific output. Journals are increasingly seeing submissions in which large parts of text have been copied from previously published papers by the same author.
Whereas plagiarism—copying from others—is widely condemned and regarded as intellectual theft, the concept of self-plagiarism is less well defined. Some have argued that it is impossible to steal one's own words. The excuse editors hear when confronting authors about self-plagiarism is that the same thing can only be said in so many words. This might sometimes be legitimate, perhaps for specific parts of a research paper, such as a methods section. However, when large parts of a paper are a word-for-word copy of previously published text, authors' claims that they have inadvertently used exactly the same wording stretch credibility.
There is a clear distinction between self-plagiarism of original research and review material. Republishing large parts of an original research paper is redundant or duplicate publication. Publishing separate parts of the same study with near identical introduction and methods sections in different journals is so-called salami publication. Both practices are unacceptable and will distort the research record. Self-plagiarism in review or opinion papers, one could argue, is less of a crime with no real harm done. It is still an attempt to deceive editors and readers, however, and constitutes intellectual laziness at best.
Deception is the key issue in all forms of self-plagiarism, including in reviews. Few editors will knowingly republish a paper that contains large parts of previously published material. Few readers will happily read the same material several times in different journals. An attempt to deceive amounts to fraud and should not be tolerated by the academic community.

August 29, 2009

Retractions up tenfold - TIMES HIGHER EDUCATION

20 August 2009
'Publish or perish' factor in withdrawal of science papers. Zoe Corbyn reports
The rate at which scientific journal articles are being retracted has increased roughly tenfold over the past two decades, an exclusive analysis for Times Higher Education reveals.
Growth in research fraud driven by greater pressure on researchers to publish, improved detection, and demands on editors to take action have all been raised as possible factors behind the change.
The study, by the academic-data provider Thomson Reuters, follows the retraction last month of a paper on the creation of sperm from human embryonic stem cells.
The paper, written by researchers at Newcastle University, was withdrawn by the Stem Cells and Development journal following its discovery that the paper's introduction was largely plagiarised.
The Thomson Reuters analysis charts the number of peer-reviewed scientific-journal articles produced each year from 1990 and the number of retractions.
It shows that over nearly 20 years the number of articles produced has doubled, while the number of retractions - still a small fraction of the literature - has increased roughly twentyfold. Factoring in the growth in article numbers, that is equal to a tenfold increase in the retraction rate.
The data are extracted from the Thomson Reuters Web of Science citation database, and apply to the journals covered by its Science Citation Index Expanded.
Whereas in 1990, just five of the nearly 690,000 journal articles that were produced worldwide were retracted, last year the figure was 95 of the 1.4 million papers published.
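Those figures can be sanity-checked with simple arithmetic. The short Python sketch below is not part of the Thomson Reuters analysis; it is only a back-of-the-envelope check using the numbers quoted above, showing how a roughly twentyfold rise in retractions against a doubling of output works out to roughly a tenfold rise in the retraction rate.

# Back-of-the-envelope check of the figures quoted above (illustrative only).
articles_1990, retractions_1990 = 690_000, 5
articles_2008, retractions_2008 = 1_400_000, 95

rate_1990 = retractions_1990 / articles_1990  # about 7 retractions per million articles
rate_2008 = retractions_2008 / articles_2008  # about 68 retractions per million articles

print(f"Retractions grew {retractions_2008 / retractions_1990:.0f}-fold")  # ~19-fold
print(f"Article output grew {articles_2008 / articles_1990:.1f}-fold")     # ~2.0-fold
print(f"Retraction rate grew {rate_2008 / rate_1990:.1f}-fold")            # ~9.4-fold, i.e. roughly tenfold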
The growth has been particularly pronounced in the past few years, even factoring out 22 retracted papers authored by Jan Hendrik Schön, the disgraced German physicist, earlier this decade.
James Parry, acting head of the UK Research Integrity Office (UKRIO), said it was impossible to know for certain the reasons for the increase.
"It might reflect a real increase in misconduct or, more likely, an increase in detection compared with 20 years ago," he said.
He noted that while "most" retractions were for misconduct or questionable practice, "many" were the result of honest errors, such as an author misinterpreting results and realising the mistake later.
"Some editors have been very slow to spot misconduct and to take action when they do," he added.
Harvey Marcovitch, former chair of the Committee on Publication Ethics, welcomed the analysis. He said he had always thought that the number of retractions was small, but had never seen the figures before.
He hoped that the increased publicity scientific fraud had received in recent years had raised awareness - making scientists more likely to alert journal editors, and editors more prepared to investigate claims.
Editors, he agreed, had been notoriously reluctant to retract, for reasons ranging from "not having permission of authors, to being unsure about what retraction meant, to not knowing precisely what to do".
He said plagiarism software could also play a part in the rise - the British Medical Journal uses it to evaluate suspect papers, while Nature is trialling it for some papers and all review articles.
Both Mr Parry and Dr Marcovitch stressed that misconduct was likely to be more common than the retraction figures suggest.
"Even on a conservative estimate of 1 per cent misconduct, we might expect 15,000 retractions a year, but we have a lot less," Mr Parry said.
"This suggests significant under-detection, which fits with what editors have told UKRIO."
He added that there was evidence that people still frequently quoted papers after they had been retracted. "The system is not working as well as it could," he said.
Aubrey Blumsohn, a former University of Sheffield academic and now a campaigner for greater openness in research conduct, said that only a "tiny proportion" of the papers known to have serious problems were retracted.
"Journal editors and institutions generally engage in a fire-fighting exercise to avoid retractions," he said.
"Anyone looking at this problem in detail knows of dozens of papers that are frankly fraudulent, but they are never retracted."
He said that the ways in which the scientific community "covers its tracks and prevents fraud being prosecuted" must be investigated.
Peter Lawrence, a scientist from the Medical Research Council's Laboratory of Molecular Biology in Cambridge, speculated that more plagiarism and better detection had pushed up the retraction rate.
Blaming a culture of "publish or perish", he said: "It's now a desperate struggle for survival."
He added that there was overwhelming pressure to be published in big journals: "You need to sensationalise results, be economical with rigour, and hype, hype, hype."
zoe.corbyn@tsleducation.com
WIDESPREAD MISCONDUCT
A new study assesses the reasons for more than 300 journal retractions over the past 20 years.
The analysis looks at 312 cases of withdrawals listed in the PubMed database between 1988 and 2008. The authors, Liz Wager, chair of the Committee on Publication Ethics, and Peter Williams, research fellow in the department of information studies at University College London, found that 25 per cent were due to plagiarism or falsified data and 26 per cent were due to honest errors. The reasons for the other retractions were not given.
The study, Why and How Do Journals Retract Articles?, is due to be presented in September to the Sixth International Congress on Peer Review and Biomedical Publication in Vancouver.
It follows a paper published this year in the journal PLoS ONE that aggregates studies on how frequently scientists falsify research. It says that about 2 per cent admitted to having fabricated, falsified or otherwise modified data or results "at least once". Almost 34 per cent admitted to "questionable research practices".
The paper, How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data, is written by Daniele Fanelli, Marie Curie research fellow at the University of Edinburgh.
CODE OF PRACTICE: TAKE THE PLAUDITS AND THE BRICKBATS
Anyone listed as an author on a paper should be prepared to take "public responsibility" for the work, a body that battles research misconduct advises.
The advice is featured in a code of practice for research, due to be launched next month by the UK Research Integrity Office (UKRIO).
The code is designed to help universities formulate institutional guidelines.
"Researchers should be aware that anyone listed as an author should be prepared to take public responsibility for the work, ensure its accuracy and be able to identify their contribution to it," it says.
James Parry, acting head of the UKRIO, said the document would provide "broad standards and principles" for best practice in research.
It follows a case at Newcastle University, which is investigating the plagiarised introduction of a stem-cell paper listing eight authors. The paper was retracted from the Stem Cells and Development journal last month after the problem came to light.
A research associate who has since left the university was blamed for the error, but leading scientists have criticised the senior authors involved for not taking responsibility.
For a copy of the UKRIO code: www.ukrio.org.

August 12, 2009

Perishing Without Publishing - INSIDE HIGHER ED

Welcome to the 21st century. Journals and publishing houses are folding faster than a roomful of origami artists, while new online journals are appearing all the time. Nietzsche once proclaimed the demise of God, but the new mantra is “Print is dead!” Maybe, maybe not; but however these transformations shake out, getting published somewhere remains crucial for newcomers to academia. It's still publish-or-perish in many places, even if some of those who publish will never have a hard copy, while others will treasure being able to hold their work in their hands. As one who has served (and is serving) as an associate editor for actual paper journals, let me share some observations on bad practices that could sandbag your career -- and almost all of this advice applies to any online peer-reviewed journal too. >>>

July 12, 2009

The insider’s guide to plagiarism

Editorial
Nature Medicine 15, 707 (2009)

Scientific plagiarism—a problem as serious as fraud—has not received all the attention it deserves.

Reduced budgets are affecting research just as they are every sector of the economy. So, how can struggling scientists increase their chances of securing their share of financial resources in these tough times? Publish, of course!

What? You don’t have the resources to do the experiments? Don’t worry! A little creative writing might be all you need to sail through the financial crisis. Here’s how: use a solid paper as your base; carry out a parallel set of experiments in your favorite model; tweak the data so that the numbers are not identical but remain realistic; and, when you’re ready to write it all up, paraphrase the original paper ad libitum. Last, submit your new manuscript to a modest journal in the hopes that the authors of the paper you used as ‘inspiration’ won’t notice your ‘tribute’ to their work—even though imitation is supposed to be the sincerest form of flattery, their approval of your ‘reworking’ of their paper cannot be guaranteed. If all goes well, getting a couple of these manuscripts under your belt might make all the difference when you apply for that elusive grant.

Does this strategy work? Unfortunately, all too often it does, even though many eyes examine every paper before it ends up on a printed page. And when scrutiny identifies cases of potential plagiarism, serious corrective action doesn’t always take place. Consider a recent report (Science 323, 1293–1294, 2009) in which software tools and manual comparison helped identify cases of suspected plagiarism. When the authors of 163 suspicious studies were contacted, about 30% disavowed misconduct, and over 20% of coauthors claimed no involvement in writing the papers.
>>>

July 10, 2009

The truth will out

Editorial

Nature Physics 5, 449 (2009)

Fraud in science is difficult to spot immediately, but, as high-profile cases show, it does get found out. Tackling plagiarism is at least becoming an easier fight.

Introduction
Scientific misconduct comes in many forms. Fabrication lies at one extreme, but plagiarism and 'citation amnesia' are more common. Some have come to question the peer review system, especially following the spectacular cases of Hendrik Schön and Scott Reuben. Schön was a Bell Labs researcher whose organic field-effect transistors exhibited the fractional quantum Hall effect, superconductivity, lasing, you name it. That he didn't keep a lab book or any raw data during his PhD would already constitute bad practice, but then he went on to actually fabricate data. In 2002, a committee found him guilty of scientific misconduct on 16 out of 24 allegations, and at least 21 of his published papers have since been retracted (a new book chronicling Schön's rise and fall is reviewed on p451 of this issue). Reuben's case came to light in March 2009, when 21 of his papers containing faked data were retracted from anaesthesiology journals. Millions of patients have been treated according to his studies of combinations of drugs for pain relief. In many cases, the patients in his clinical trials were made up. >>>



July 8, 2009

Plagiarism, salami slicing, and Lobachevsky

Leonard Berlin
Department of Radiology, Rush North Shore Medical Center,

Skeletal Radiol (2009) 38:1–4, DOI 10.1007/s00256-008-0599-0

Who made me the genius I am today,
Who’s the Professor that made me that way?
One man deserves the credit,
One man deserves the blame,
And Nicolai Ivanovich Lobachevsky is his name.
In one word he told me the secret of success:
Plagiarize! Plagiarize! Plagiarize!
Let no one else’s work evade your eyes.
Only be sure always to call it please “Research.”
Tom Lehrer, “Lobachevsky,” 1953 [1]

A half century ago, then well-known humorist-songwriter Tom Lehrer composed and popularized a song parodying the subject of plagiarism. He named the song after Russian mathematician Lobachevsky (1793–1856), famous for his development of non-Euclidean geometry, not because Lobachevsky was a plagiarist but rather for “prosodic” reasons [1]. Why recall a 55-year-old song today? The answer is obvious: plagiarism has found its way into both the contemporary public news media and the scientific literature. >>>

July 2, 2009

Dear Plagiarist - INSIDE HIGHER ED

Dear Student,
When you got your paper back with a grade of F for plagiarism, you reacted in predictable fashion -- with indignant denial of any wrongdoing. You claimed “you cited everything” and denied that you had committed intentional plagiarism, or ever would.
This response is all too familiar to an experienced professor. Only once in my three decades of teaching has a student I caught plagiarizing owned up to it right away. And in that case, I believe (perhaps cynically) that she (a graduate student) thought a forthright confession might lead me to lighten the penalty. It didn’t; I failed her for the course and wrote her up. Indeed, I found out later that she had been caught plagiarizing by a colleague the previous term and let off lightly. I suspect that, because too many professors (many of them adjuncts fearful of student backlash) overlook or are unwilling to pursue plagiarism -- the process can be labor intensive, and it is always unpleasant -- cheating has become a way of life for many students, and they are genuinely surprised at being held responsible for it. So I don’t doubt that your shock is real.>>>

May 29, 2009

How Many Scientists Fabricate And Falsify Research? (ScienceDaily)

It's a long-standing and crucial question that, as yet, remains unanswered: just how common is scientific misconduct? In the online, open-access journal PLoS ONE, Daniele Fanelli of the University of Edinburgh reports the first meta-analysis of surveys questioning scientists about their misbehaviours. The results suggest that altering or making up data is more frequent than previously estimated and might be particularly high in medical research. >>>

May 22, 2009

Plagiarism Sleuths

Jennifer Couzin-Frankel & Jackie Grom
Science 22 May 2009: Vol. 324. no. 5930, pp. 1004 - 1007

A Texas group is trolling through publications worldwide hunting for signs of duplicated material. The thousands of articles they've flagged online raise questions about standards in publishing—and about the group's own tactics.>>>

March 13, 2009

Plagiarism in the news (CrossRef)

A number of articles and news items have brought the issue of plagiarism into focus recently. Last week, a short paper in Science provided an update on the research by Harold Garner and his colleagues that was previously reported in Nature News, and has since been commented on in a number of places including SSP’s Scholarly Kitchen blog.
Garner’s team has taken abstracts from MEDLINE and used a piece of software called eTBLAST to compare them against each other for similar and overlapping text. To date, with a combination of machine and human analysis, they have identified 9120 articles with "high levels of citation similarity and no overlapping authors", and 212 pairs of articles "with signs of potential plagiarism". They have gone on to contact authors and editors and (under assurances of anonymity) have received a range of responses from outrage to apology to denial. As of February 2009 they are aware of their study having triggered 83 internal investigations leading to 46 retractions.
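eTBLAST itself is not reproduced here, but the basic idea of flagging pairs of abstracts with unusually high textual overlap can be illustrated with a minimal sketch. The approach below is an assumption for illustration only (Python's standard difflib rather than Garner's actual algorithm), and the abstracts and identifiers are hypothetical.

# Minimal illustration of pairwise abstract comparison (not eTBLAST itself):
# flag pairs of abstracts whose overall text similarity exceeds a threshold.
from difflib import SequenceMatcher
from itertools import combinations

abstracts = {
    "citation_1": "Recent advances make it feasible to detect the virus in breast cancers ...",
    "citation_2": "Recent advances now make it feasible to detect this virus in breast cancer ...",
    "citation_3": "We report a randomized trial of statin therapy in chronic kidney disease ...",
}

SIMILARITY_THRESHOLD = 0.80  # arbitrary cut-off for "highly similar"

for (id_a, text_a), (id_b, text_b) in combinations(abstracts.items(), 2):
    similarity = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
    if similarity >= SIMILARITY_THRESHOLD:
        print(f"{id_a} vs {id_b}: similarity {similarity:.2f} -- flag for human review")

As the post notes, automated flags of this kind are only a first pass; the pairs described above were confirmed by a combination of machine and human analysis before authors and editors were contacted.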
In The Scientist, Garner explains that technology has a role to play in plagiarism detection because "You can't expect all the editors and reviewers to have all 18,000,000 papers in their head from biomedicine". Technology will never be an adequate substitute for a human domain expert’s knowledge and judgment, but a system such as CrossCheck can scan vast amounts of content and flag up potential issues, saving time and adding a level of reassurance previously unavailable.
The CrossCheck database currently contains almost 11 million content items and is on course to become the most comprehensive resource against which to check scholarly content for plagiarism. Look out for sessions on CrossCheck and plagiarism at the UKSG conference at the end of the month, and also at the Council of Science Editors meeting in May.

March 10, 2009

Plagiarism and other scientific misconducts

EDITORIAL

Journal of Cancer Research and Clinical Oncology

K. Höffken and H. Gabbert

When we were young scientists, we heard about the "games authors play" and learned that the results of scientific work were published by the same authors, in a different order, in different journals. However, the content of the publications differed only slightly (e.g., by omitting one table or figure and adding another) and the conclusions were almost identical.

When we grew older, we encountered affairs of scientific misconduct, ranging from copying text from other scientific papers to faking results.

When we became editors of our journal, we hoped that we would be immune from such assaults. However, we had to learn that each of the above examples of plagiarism and other scientific misconduct could happen to us. We encountered duplicate publications, learned that authors had sent manuscripts simultaneously to more than one journal, and were informed that authors had copied and pasted text (as can be seen from the example below).

Original version
Recent technologic advances now make it feasible to better tackle the methodological challenges of detecting EBV in breast cancers. Consequently, a critical next step in understanding this relationship is to apply detection strategies that are sensitive and specific for EBV and able to localize the EBV to particular benign or malignant cells within the tissue. A recent National Cancer Institute recommendation specifies an approach combining real-time quantitative PCR, which allows measurement of the amount of viral DNA in archival tissue samples, with laser capture microdissection to improve localization of viral DNA to benign or malignant components of a tissue sample (90).

Plagiarized version
Recent technological advances now make it feasible to better tackle the methodological challenges of detecting virus in breast cancers. A critical next step in understanding this relationship is to apply detection strategies that are sensitive and specific for virus and able to localize this agent to particular malignant cells within the tissue. A recent National Cancer Institute recommendation specifies an approach combining real-time quantitative PCR, which allows measurement of the amount of viral load in archival tissue samples, with laser capture microdissection to improve localization of viral nucleic acid to benign or malignant components of a tissue sample.


What did we learn from these facts?

1. Science is not immune to fraud and misconduct, nor is it free of bad scientists. Fortunately, these are the exceptions!

2. Journals are not protected against these assaults; and

3. Even the best prevention system could not prevent it from happening to us, and it will not prevent it from happening again.

What can we do to improve our prevention mechanisms?

1. We count on the readiness and awareness of our readers.

2. We will relentlessly denounce the criminal methods and their originators.

3. We will put the persons responsible on a blacklist and urge other journals to deny them the right to publish.

Please support us with our efforts. Do not hesitate to inform us about any irregularity, violation or infringement.

J Cancer Res Clin Oncol (2009) 135:327–328

March 9, 2009

Plagiarism in Scientific Publications

Editorial Article

Peter R. Mason

Biomedical Research & Training Institute, Harare, Zimbabwe

J Infect Developing Countries 2009; 3(1):1-4. >>>

March 7, 2009

Combating plagiarism

Editorial

Nature Photonics 3, 237 (2009)
doi:10.1038/nphoton.2009.48

Accountability of coauthors for scientific misconduct, guest authorship, and deliberate or negligent citation plagiarism highlights the need for accurate author contribution statements.>>>

March 6, 2009

Responding to Possible Plagiarism

SCIENCE, 6 March 2009: Vol. 323. no. 5919, pp. 1293 - 1294
DOI: 10.1126/science.1167408
Tara C. Long,1 Mounir Errami,2 Angela C. George,1 Zhaohui Sun,2 Harold R. Garner1,2*
The peer-review process is the best mechanism to ensure the high quality of scientific publications. However, recent studies have demonstrated that the lack of well-defined publication standards, compounded by publication process failures (1), has resulted in the inadvertent publication of several duplicated and plagiarized articles.
The increasing availability of scientific literature on the World Wide Web has proven to be a double-edged sword, allowing plagiarism to be more easily committed, while simultaneously enabling its simple detection through the use of automated software. Unsurprisingly, various publishing groups are now taking steps to reinforce their publication policies to counter the fraudulent acts of a few (2). There are now dozens of commercial and free tools available for the detection of plagiarism. Perhaps the most popular programs are iParadigms' "iThenticate" (http://ithenticate.com/) and Turnitin's originality checking (http://turnitin.com/), which recently partnered with CrossRef (http://www.crossref.org/) to create CrossCheck, a new service for verifying the originality of scholarly content. However, the content searched by this program spans only a small sampling of journals indexed by MEDLINE. Others include EVE2, OrCheck, CopyCheck, and WordCHECK, to name a few.
We recently introduced an automated process to identify highly similar citations in MEDLINE (3, 4). Our detection of duplicates relies heavily on human inspection in conjunction with computational tools including eTBLAST (5, 6) and Déjà vu, a publicly available database (7, 8). As of 20 February 2009, there were 9120 entries in Déjà vu with high levels of citation similarity and no overlapping authors. Thus far, full-text analysis has led to the identification of 212 pairs of articles with signs of potential plagiarism. The average text similarity between an original article and its duplicate was 86.2%, and the average proportion of shared references was 73.1%. However, only 47 (22.2%) of the duplicates cited the original article as a reference. Further, 71.4% of the manuscript pairs shared at least one highly similar or identical table or figure. Of the 212 duplicates, 42% also contained incorrect calculations, data inconsistencies, and reproduced or manipulated photographs.
There has been a paucity of literature examining the reactions of stakeholders (both victims and perpetrators) when confronted with evidence of possible misconduct. Studying these reactions may help to illuminate the reasons for such misconduct and might provide a way for the scientific community to prevent such activity in the future. Therefore, we merged data from previous studies (3) with additional information based on our personal communications with authors and journal editors directly associated with 163 of these cases of potential plagiarism.
A questionnaire (see table S1) was composed, supplemented with annotated electronic copies of both manuscripts, and sent via e-mail to the authors and editors of the earlier and later manuscripts.
From the 163 sets of questionnaires sent, we received a reply in 144 cases (88.3%). Anonymity was guaranteed to all respondents. The reactions by the respondents were intense and diverse, and although it is difficult to quantify the various responses, a general picture can be painted. Before receiving the questionnaire, 93% of the original authors were not aware of the duplicate's existence. The majority of these responses were appreciative in nature. The responses from duplicate authors were more varied; of the 60 replies, 28% denied any wrongdoing, 35% admitted to having borrowed previously published material (and were generally apologetic for having done so), and 22% were from coauthors claiming no involvement in the writing of the manuscript. An additional 17% claimed they were unaware that their names appeared on the article in question. The journal editors primarily confirmed receipt and addressed issues involving policies and potential actions. Excerpts from statements made by authors and editors illustrate the many possible perspectives in response to evidence of possible plagiarism. Table 1 provides a sampling of these responses, with an expanded list available in tables S2 to S5.
Although the goal of the questionnaire was merely to solicit information, the very act of sending it appeared to trigger further action by journals in many cases. Editors have launched 83 internal investigations thus far, 46 of which have, according to the editors of the journals, led to eventual retraction of the duplicate article. It is unclear what defines a "retraction," however, because many editors only stated that a comment would be published in their journal, or that the article would simply be removed from the journal's Web site. Unfortunately, these actions do not propagate back to MEDLINE unless an explicit request is made by the journal; therefore, researchers and clinicians may never become aware of an article's retracted status.
To assess how articles of this nature affect the scientific community, we recorded the impact factors for each journal in which the 212 articles and their duplicates were published using the Thomson Scientific Journal Citation Reports feature (9). A large portion of the duplicates were published in low-profile journals; thus, impact factors were available for only 199 of the 285 different journals. The impact factors of journals publishing original articles were significantly higher (P < 0.001), averaging 3.87 and spanning 0.147 to 52.589, than those of the journals publishing duplicate articles, averaging 1.6 and spanning 0.272 to 6.25.
Utilizing the ISI Web of Knowledge to determine how many times each article had been cited (10), we found that original publications were cited 28 times on average, whereas their corresponding duplicates were cited only twice. Although the original articles are older and have thus had more exposure, in 10 of the pairs, the duplicate article was cited at least as often as the original publication. This may be because scientists rely heavily on finding information through PubMed searches which, by default, return more recent articles first, ensuring that a plagiarized article will always appear higher on a list of search results than its original counterpart. As a result, citations that would have otherwise gone to an original publication are instead diverted to a plagiarized one.


Table 1. Sampling of responses from authors and editors
Authors of earlier article
"I have been a research scientist for more than 50 years, and this is the first time I've ever experienced such a blatant case of plagiarism. It sure was an eye-opener!"
"I have no statement. I cannot prove that this is plagiarism. Even if it is, what can be done?"
"[My] major concern is that false data will lead to changes in surgical practice regarding procedures."
"We were very sorry and somewhat surprised when we found their article. I don't want to accept them as scientists."
Editors of journal publishing earlier article
"It's my understanding that copying someone else's description virtually word-for-word, as these authors have done, is considered a compliment to the person whose words were copied."
The two articles" are the same patients, the figures are the same, and the writing is blatant plagiarism.
One of these papers is a false publication. We cannot let this one go unaddressed."
"We were not aware of this duplicate publication, and would not have given permission for this, as it clearly violates copyright."
"I have been Editor for 14+ years and this is the first time this issue has been raised."
"It is clear that the subsequent author frankly, fraudulently used identical data … in writing the second article. There is no way under the stars that we could have picked that up ourselves."
Authors of later article
"I would like to offer my apology to the authors of the original paper for not seeking the permission for using some part of their paper. I was not aware of the fact I am required to take such permission."
"There are probably only 'x' amount of word combinations that could lead to 'y' amount of statements.
… I have no idea why the pieces are similar, except that I am sure I do not have a good enough memory--and it is certainly not photographic--to have allowed me to have 'copied' his piece…. I did in fact review [the earlier article] for whatever journal it was published in."
"I know my careless mistake resulted in a severe ethical issue. I am really disappointed with myself as a researcher."
"It was a joke, a bad game, an unconscious bet between friends, 10 years ago that such things … happened. I deeply regret."
"I was not involved in this article. I have no idea why my name is included."
Editors of journal publishing later article
"Looks like [the author of the later article] did it again in 2001. This example is a bit more embarrassing because the author of the original paper is [the] editor of the journal where [the author of the later article] published the copied work. Looks like we will have to publish two retractions."
"Believe me, the data in any paper is the responsibility of the authors and not the journal."
"I really appreciate your work and your e-mail has promoted us to exercise more strict control over duplicate publication."
"There can be no doubt that this is willful and deliberate plagiarism. Like the chance of monkeys typing out the works of Shakespeare, it would be incredible that the similarities could arise by chance."
"The news has taken us by surprise and a sense of deep concern. We are calling an emergency meeting of the editorial board to discuss the matter. [Our journal] deeply condemns the act and we stand firm to take necessary actions against the authors."
Of the 175 journal editors with whom we communicated, 11 admitted they had never personally dealt with a potentially plagiarized manuscript and were unsure how to proceed. The majority of these editors showed deep concern and were open to any helpful suggestions or recommendations we could offer, at which point we directed them to the Office of Research Integrity's guidance document for editors on Managing Allegations of Scientific Misconduct (11). In spite of this concern, nearly half of all the duplications brought to light by our questionnaires have received no action. In fact, on 12 separate occasions, editors specifically indicated that cases involving their journal would not be reviewed. This variation in feedback reveals a great deal about the attitudes and motivations of scientists around the globe, including why some journal editors do not pursue obvious cases of duplication. Some apparently do not want to deal with the added stress of conducting a thorough investigation. Others feel it may bring bad publicity or reflect poorly on their journal's review process. While there will always be a need for authoritative oversight, the responsibility for research integrity ultimately lies in the hands of the scientific community. Educators and advisors must ensure that the students they mentor understand the importance of scientific integrity. Authors must all commit to both the novelty and accuracy of the work they report. Volunteers who agree to provide peer review must accept the responsibility of an informed, thorough, and conscientious review. Finally, journal editors, many of whom are distinguished scientists themselves, must not merely trust in, but also verify the originality of the manuscripts they publish.
References and Notes
  1. L. Gollogly, H. Momen, Rev. Saude Publica 40, 24 (2006).
  2. C. White, BMJ 336, 797 (2008).
  3. M. Errami et al., Bioinformatics 24, 243 (2008).
  4. Materials and methods are available as supporting material on Science Online.
  5. J. Lewis, S. Ossowski, J. Hicks, M. Errami, H. R. Garner, Bioinformatics 22, 2298 (2006).
  6. M. Errami, J. D. Wren, J. M. Hicks, H. R. Garner, Nucleic Acids Res. 35, W12 (2007).
  7. M. Errami, H. Garner, Nature 451, 397 (2008).
  8. M. Errami, Z. Sun, T. C. Long, A. C. George, H. R. Garner, Nucleic Acids Res. 37, D921 (2009).
  9. Journal Citation Reports, ISI Web of Knowledge (Thomson Reuters, Philadelphia, 2008); http://isiwebofknowledge.com/products_tools/analytical/jcr/.
  10. ISI Web of Knowledge (Thomson Reuters, Philadelphia, 2008); http://isiknowledge.com/.
  11. Office of Research Integrity (ORI), Managing Allegations of Scientific Misconduct: A Guidance Document for Editors (ORI, U.S. Department of Health and Human Services, Rockville, MD, 2000); http://ori.dhhs.gov/documents/masm_2000.pdf.
  12. We thank D. Trusty for computer administrative support; J. Loadsman as a substantial contributing curator; W. Fisher for useful comments, discussions, and manuscript editing; D. Wu and W. Fisher for assistance in obtaining full text articles; L. Gunn for administrative assistance; and the numerous Déjà vu users who have reported inaccuracies or have alerted us to questionable publications. This work was funded by NIH grant 5R01LM009758-02, the Hudson Foundation, and the P. O'B. Montgomery Distinguished Chair.

1McDermott Center for Human Growth and Development, The University of Texas Southwestern Medical Center, 5323 Harry Hines Boulevard, Dallas, TX 75390-9185, USA.
2Division of Translational Research, The University of Texas Southwestern Medical Center, 5323 Harry Hines Boulevard, Dallas, TX 75390-9185, USA.
*Author for correspondence. E-mail: harold.garner@utsouthwestern.edu

March 5, 2009

Study finds plenty of apparent plagiarism (Science News)

Data mining reveals too many similarities between papers

Web edition: Thursday, March 5th, 2009

[Image: "Is this plagiarism?" Yellow highlighting marks passages of a paper that copy material published in a previous paper by other authors. Credit: UT Southwestern Medical Center]
If copying is the sincerest form of flattery, then journals are publishing a lot of amazingly flattering science. Of course to most of us, the authors of such reports would best be labeled plagiarists — and warrant censure, not praise.
But Harold R. Garner and his colleagues at the University of Texas Southwestern Medical Center at Dallas aren’t calling anybody names. They’re just posting a large and growing bunch of research papers — pairs of them — onto the Internet and highlighting patches in each that are identical.
Says Garner: “We’re pointing out possible plagiarism. You be the judge.” But this physicist notes that in terms of wrong-doing, authors of the newest paper in most pairs certainly appear to have been “caught with their hands in the cookie jar.”
Garner's team developed data-mining software about eight years ago that allows a researcher to input a large block of text — the entire abstract of a paper, for instance — and ask the program to compare it against everything posted in a database such as the National Library of Medicine's MEDLINE, which abstracts all major biomedical journal articles. The software then looks for matches to words, phrases, numbers — anything — and pulls up documents that are similar. The idea: to help scientists find papers that offer similar findings, contradictions, even speculations that might suggest promising new directions in a given research field.
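The following is an assumed, simplified sketch of that kind of matching, not Garner's actual eTBLAST code; the corpus and query texts are hypothetical stand-ins for MEDLINE abstracts. It breaks the query into five-word phrases and ranks documents by how many phrases they share.

# Simplified illustration of matching a query text against a small corpus
# (not the actual eTBLAST implementation).
def ngrams(text, n=5):
    """Return the set of n-word phrases ("shingles") in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def rank_matches(query, corpus, n=5):
    """Return (document id, number of shared n-grams) pairs, best matches first."""
    query_grams = ngrams(query, n)
    scores = {doc_id: len(query_grams & ngrams(text, n)) for doc_id, text in corpus.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical stand-ins for MEDLINE abstracts.
corpus = {
    "abstract_A": "laser capture microdissection to improve localization of viral DNA in tissue samples",
    "abstract_B": "a randomized trial of statin therapy in patients with chronic kidney disease",
}
query = "laser capture microdissection to improve localization of viral nucleic acid in tissue samples"

for doc_id, shared in rank_matches(query, corpus):
    print(doc_id, "shares", shared, "five-word phrases with the query")

Real systems obviously operate at a far larger scale (Garner elsewhere cites the 18,000,000 papers in biomedicine) and pair automated matching with human review, but the principle of surfacing candidate matches for a person to judge is the same.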
Early on, Garner says, his team realized this software also had the potential for highlighting potential plagiarism. But that was not their first priority. In fact, his group didn't really begin looking in earnest for signs of copycatting until about two years ago.
Today, Garner’s group has published a short paper in Science on results of a survey it conducted among authors of pairs of remarkably similar papers (identified from MEDLINE), and the editors who published those papers. The Texas team wanted to find out whether the apparent copycats — not only the authors but also the editors who published their work — would own up to plagiarism. And once confronted with this public finger pointing, what would they do about it?
The real surprise, says Garner — indeed, “the shock” — was that so few authors of the initial papers were aware of the copycats’ antics. Before his team emailed PDFs highlighting the identical passages in each pair of papers, 93 percent of the original authors said they had been unaware of the newer paper.
Since those newer papers were all available via MEDLINE searches, they should have come up every time authors of the first paper searched for work on topics related to their own. In fact, Garner points out, because MEDLINE posts search results in reverse chronological order, copycatted papers should turn up before the papers on which they had been based.
To date, 83 of the 212 pairs of largely identical papers identified by the data-mining software that Garner’s team developed have triggered formal investigations by the journals involved. In 46 instances, editors of the second papers have issued retractions. However, what constitutes a retraction varied considerably. It might have been broad publication of problems with the offending second paper — both in the journal and in a notice sent to MEDLINE.
Other times, some website might have acknowledged the retraction of some or all of a paper, with no notification of the problem forwarded to MEDLINE. In such cases, Garner notes, anyone using MEDLINE's search function would get no warning that the abstract it pulled up relates to findings that have been discredited.
Have you ever shared this material on apparent plagiarism with the administrators of the second paper's authors, I asked Garner. "No, that would have put us into this situation where we would be acting more as police or an investigatory body," he said. And they're not anxious to serve as honesty cops.
Too bad.
So far, his team's software has turned up more than 9,000 'highly similar' papers in biomedical journals indexed by MEDLINE. And only 212 are copycats? Actually, Garner says, that estimate is probably way low. Of that big number, "We have only gotten through looking at 212 so far." Their investigations continue.
For more on the implications of such copycatting, check out my next post.

March 1, 2009

Borrowing words, or claiming them?

Editorial

Nature Immunology 10, 225 (2009)
doi:10.1038/ni0309-225

Journals are taking steps to stem the practice of plagiarism.

Have you ever experienced a sense of déjà vu after reading a colleague's manuscript or researching a topic of interest? A paragraph or entire section sounds eerily familiar—too familiar, perhaps, because it is a word-for-word, verbatim (or nearly so, with a few synonyms tossed in) replication of another piece written by different authors. Or maybe a result or hypothesis is claimed to be 'novel' but in fact others have reported such findings and the previous work is not cited. Or the same data are presented in both earlier and subsequent publications from an author, but the later publication fails to acknowledge the fact that the data were included in the earlier work. Are any of these situations acceptable? In fact they are not. All three scenarios represent examples of what can be considered plagiarism.>>>


February 3, 2009

It's Culture, Not Morality - INSIDE HIGHER ED

Scott Jaschik
What if everything you learned about fighting plagiarism was doomed to failure? Computer software, threats on the syllabus, pledges of zero tolerance, honor codes -- what if all the popular strategies don't much matter? And what if all of that anger you feel -- as you catch students clearly submitting work they didn't write -- is clouding your judgment and making it more difficult to promote academic integrity? >>>

January 1, 2009

Problems with anti-plagiarism database

Mauno Vihinen

SIR — Sophisticated tools have been developed to detect duplicate publication and plagiarism, as noted in M. Errami and H. Garner’s Commentary ‘A tale of two citations’ (Nature 451, 397–399; 2008) and in your News story ‘Entire-paper plagiarism caught by software’ (Nature 455, 715; 2008). >>>
