August 30, 2009

Self-plagiarism: unintentional, harmless, or fraud?

THE LANCET
Volume 374, Issue 9691, 29 August 2009-4 September 2009, Page 664
Editorial

The intense pressure to publish to advance careers and attract grant money, together with decreasing time available for busy researchers and clinicians, can create a temptation to cut corners and maximise scientific output. Journals are increasingly seeing submissions in which large parts of text have been copied from previously published papers by the same author.
Whereas plagiarism—copying from others—is widely condemned and regarded as intellectual theft, the concept of self-plagiarism is less well defined. Some have argued that it is impossible to steal one's own words. The excuse editors hear when confronting authors about self-plagiarism is that the same thing can only be said in so many words. This might sometimes be legitimate, perhaps for specific parts of a research paper, such as a methods section. However, when large parts of a paper are a word-for-word copy of previously published text, authors' claims that they have inadvertently used exactly the same wording stretch credibility.
There is a clear distinction between self-plagiarism of original research and review material. Republishing large parts of an original research paper is redundant or duplicate publication. Publishing separate parts of the same study with near identical introduction and methods sections in different journals is so-called salami publication. Both practices are unacceptable and will distort the research record. Self-plagiarism in review or opinion papers, one could argue, is less of a crime with no real harm done. It is still an attempt to deceive editors and readers, however, and constitutes intellectual laziness at best.
Deception is the key issue in all forms of self-plagiarism, including in reviews. Few editors will knowingly republish a paper that contains large parts of previously published material. Few readers will happily read the same material several times in different journals. An attempt to deceive amounts to fraud and should not be tolerated by the academic community.

August 29, 2009

Retractions up tenfold - TIMES HIGHER EDUCATION

20 August 2009
'Publish or perish' factor in withdrawal of science papers. Zoe Corbyn reports
The rate at which scientific journal articles are being retracted has increased roughly tenfold over the past two decades, an exclusive analysis for Times Higher Education reveals.
Growth in research fraud driven by greater pressure on researchers to publish, improved detection, and demands on editors to take action have all been raised as possible factors in the change.
The study, by the academic-data provider Thomson Reuters, follows the retraction last month of a paper on the creation of sperm from human embryonic stem cells.
The paper, written by researchers at Newcastle University, was withdrawn by the Stem Cells and Development journal following the discovery that its introduction was largely plagiarised.
The Thomson Reuters analysis charts the number of peer-reviewed scientific-journal articles produced each year from 1990 and the number of retractions.
It shows that over nearly 20 years the number of articles produced has doubled, while the number of retractions - still a small fraction of the literature - has increased 20-fold. Once the doubling of output is factored in, this equates to a tenfold rise in the rate of retraction.
The data are extracted from the Thomson Reuters Web of Science citation database, and apply to the journals covered by its Science Citation Index Expanded.
Whereas in 1990, just five of the nearly 690,000 journal articles that were produced worldwide were retracted, last year the figure was 95 of the 1.4 million papers published.
The growth has been particularly pronounced in the past few years, even factoring out 22 retracted papers authored by Jan Hendrik Schon, the disgraced German physicist, earlier this decade.
James Parry, acting head of the UK Research Integrity Office (UKRIO), said it was impossible to know for certain the reasons for the increase.
"It might reflect a real increase in misconduct or, more likely, an increase in detection compared with 20 years ago," he said.
He noted that while "most" retractions were for misconduct or questionable practice, "many" were the result of honest errors, such as an author misinterpreting results and realising the mistake later.
"Some editors have been very slow to spot misconduct and to take action when they do," he added.
Harvey Marcovitch, former chair of the Committee on Publication Ethics, welcomed the analysis. He said he had always thought that the number of retractions was small, but had never seen the figures before.
He hoped that the increased publicity scientific fraud had received in recent years had raised awareness - making scientists more likely to alert journal editors, and editors more prepared to investigate claims.
Editors, he agreed, had been notoriously reluctant to retract, for reasons ranging from "not having permission of authors, to being unsure about what retraction meant, to not knowing precisely what to do".
He said plagiarism software could also play a part in the rise - the British Medical Journal uses it to evaluate suspect papers, while Nature is trialling it for some papers and all review articles.
Both Mr Parry and Dr Marcovitch stressed that misconduct was likely to be more common than the retraction figures suggest.
"Even on a conservative estimate of 1 per cent misconduct, we might expect 15,000 retractions a year, but we have a lot less," Mr Parry said.
"This suggests significant under-detection, which fits with what editors have told UKRIO."
He added that there was evidence that people still frequently quoted papers after they had been retracted. "The system is not working as well as it could," he said.
Aubrey Blumsohn, a former University of Sheffield academic and now a campaigner for greater openness in research conduct, said that only a "tiny proportion" of the papers known to have serious problems were retracted.
"Journal editors and institutions generally engage in a fire-fighting exercise to avoid retractions," he said.
"Anyone looking at this problem in detail knows of dozens of papers that are frankly fraudulent, but they are never retracted."
He said that the ways in which the scientific community "covers its tracks and prevents fraud being prosecuted" must be investigated.
Peter Lawrence, a scientist from the Medical Research Council's Laboratory of Molecular Biology in Cambridge, speculated that more plagiarism and better detection had pushed up the retraction rate.
Blaming a culture of "publish or perish", he said: "It's now a desperate struggle for survival."
He added that there was overwhelming pressure to be published in big journals: "You need to sensationalise results, be economical with rigour, and hype, hype, hype."
WIDESPREAD MISCONDUCT
A new study assesses the reasons for more than 300 journal retractions over the past 20 years.
The analysis looks at 312 cases of withdrawals listed in the PubMed database between 1988 and 2008. The authors, Liz Wager, chair of the Committee on Publication Ethics, and Peter Williams, research fellow in the department of information studies at University College London, found that 25 per cent were due to plagiarism or falsified data and 26 per cent were due to honest errors. The reasons for the other retractions were not given.
The study, Why and How Do Journals Retract Articles?, is due to be presented in September to the Sixth International Congress on Peer Review and Biomedical Publication in Vancouver.
It follows a paper published this year in the PLOS One journal that aggregates studies on how frequently scientists falsify research. It says that about 2 per cent admitted to having fabricated, falsified or otherwise modified data or results "at least once". Almost 34 per cent admitted to "questionable research practices".
The paper, How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data, is written by Daniele Fanelli, Marie Curie research fellow at the University of Edinburgh.
CODE OF PRACTICE: TAKE THE PLAUDITS AND THE BRICKBATS
Anyone listed as an author on a paper should be prepared to take "public responsibility" for the work, a body that battles research misconduct advises.
The advice is featured in a code of practice for research, due to be launched next month by the UK Research Integrity Office (UKRIO).
The code is designed to help universities formulate institutional guidelines.
"Researchers should be aware that anyone listed as an author should be prepared to take public responsibility for the work, ensure its accuracy and be able to identify their contribution to it," it says.
James Parry, acting head of the UKRIO, said the document would provide "broad standards and principles" for best practice in research.
It follows a case at Newcastle University, which is investigating the plagiarised introduction of a stem-cell paper listing eight authors. The paper was retracted from the Stem Cells and Development journal last month after the problem came to light.
A research associate who has since left the university was blamed for the error, but leading scientists have criticised the senior authors involved for not taking responsibility.
For a copy of the UKRIO code: www.ukrio.org.

August 12, 2009

Perishing Without Publishing - INSIDE HIGHER ED

Welcome to the 21st century. Journals and publishing houses are folding faster than a roomful of origami artists, while new online journals are appearing all the time. Nietzsche once proclaimed the demise of God, but the new mantra is "Print is dead!" Maybe, maybe not; but however these transformations shake out, getting published somewhere remains crucial for newcomers to academia. It's still publish-or-perish in many places, even if some of those who publish will never have a hard copy, while others will treasure being able to hold their work in their hands. As one who has served (and is serving) as an associate editor for actual paper journals, let me share some observations about bad practices that could sandbag your career -- and almost all of this advice applies to any online peer-reviewed journal too.
