
June 1, 2011

From and to a very grey area

EDITORIAL
Howy Jacobs 
The scandal surrounding the former German Defence Minister Karl-Theodor von und zu Guttenberg, who resigned after facing accusations of plagiarism in parts of his doctoral thesis, raises troubling issues for all of us in academia. Guttenberg was an easy target: first, because he is a well-known politician; second, because his plagiarism was blatant, even though questions remain as to whether there was actual intention to mislead. The successful campaign to remove him from office was heralded as a remarkable victory for academics, who demanded that he acknowledge his mistakes and stand down. Had he not done so, his critics argued, the reputation of the German academic system could have been gravely damaged.
His error was not simply an academic one, like writing the wrong answer in an examination. Serious academic misconduct has wider implications, calling into question whether a person can be trusted with any task, let alone the conduct of public affairs. Politicians who obtained an academic degree by deception should be treated no differently from those found to have fiddled their parliamentary expenses or employed an illegal alien as a housemaid.
Despite Guttenberg's resignation, the case reveals serious flaws in the academic practices not just of Germany, but of all countries. His immediate supervisor, as well as the faculty which conferred his now-withdrawn degree, seem guilty of a dereliction of duty. At the very least, one might have expected that some kind of formal checking procedure would exist, even if it was not followed. But, to my knowledge, such screening is not performed systematically, even though it is common practice for vetting undergraduate work. Disparities between disciplines are also a thorny issue. According to some commentators, Guttenberg's fault was simply to have omitted quotation marks and citations, since quoting at length from other works is accepted practice in much of the humanities and social sciences.
In molecular biology, where most of the research literature is freely available online, at least for those active in research, the opportunities for such misconduct might seem legion. Yet direct plagiarism is easy to detect using simple text-matching tools, now commonly employed by journal editors and publishers, including ourselves. Where doctoral theses are structured around peer-reviewed articles (co-)authored by the candidate, such safeguards exclude direct plagiarism, although the surrounding thesis, which can vary in size from a few pages to a voluminous work in its own right, still needs to be carefully scrutinized. There is an obvious danger that a lazy student will simply copy and paste chunks of text that have appeared in previous doctoral theses rather than peruse, analyse and crystallize the relevant background literature for himself. In countries where doctoral theses can contain mainly unpublished data and interpretation thereof, comprehensive vetting is required.
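The text-matching tools mentioned above generally work by comparing a submission's word n-grams against those of previously published documents. A minimal sketch of the idea in Python follows; the function names, the n-gram length and the thresholds are illustrative assumptions, not any vendor's actual algorithm.

```python
# Compare two texts by the Jaccard overlap of their word 5-grams.
# Near-verbatim copying produces many shared 5-grams; independently
# written text on the same topic produces almost none.

def ngrams(text, n=5):
    """Return the set of word n-grams in a text (case-insensitive)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=5):
    """Jaccard similarity of the two texts' n-gram sets, in [0, 1]."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

source = ("DNA was recovered by precipitation with two volumes "
          "of ethanol followed by centrifugation")
copied = ("DNA was recovered by precipitation with two volumes "
          "of ethanol followed by centrifugation at high speed")
rewritten = ("the nucleic acid pellet was collected after adding "
             "ethanol and spinning the sample")

print(similarity(source, copied))     # high: near-verbatim copy
print(similarity(source, rewritten))  # zero: no shared 5-grams
```

Real services refine this with stemming, fuzzy matching and an indexed corpus of millions of articles, but the underlying signal is the same: long runs of identical words are vanishingly unlikely to arise by chance.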
Although most faculties have nominal rules regarding plagiarism, implementation is patchy, and regulations have not kept pace with the burgeoning of electronic literature, nor changes in practice. Even journal publishers do not apply uniform standards. I recently heard a case of an editor of a prestigious journal whose attention was drawn to the fact that the introduction section of a paper published in the journal matched almost word for word a passage from a recently published review article. Her response was, ‘so what?’. She felt that the substance of the two papers being different negated this supposedly minor fault.
Self-plagiarism is a particularly grey area, even though most would consider it unacceptable in a doctoral thesis. A number of years ago, I was reading the draft of a PhD thesis from one of my own students, whose written English was poor. Virtually the entire text appeared to have been processed from the original Hungarian, using an early version of the Google translator. But suddenly, when I reached the second paragraph of the Discussion, I encountered a long passage written in grammatically impeccable, if slightly clunky English prose. Further analysis quickly confirmed my initial suspicion: I myself was the author of this haploblock of text. At my insistence, she re-wrote the entire Discussion, and finally understood that the object of the exercise was not to produce a ‘perfect’ thesis, but to produce her own thesis. In the end, it was well appreciated by her examiners.
Not all such cases are so clear-cut. Paraphrasing of someone else's – or one's own – ‘perfect’ text is generally considered to be plagiarism just as serious as copy-pasting: arguably more so, since it reveals a blatant attempt to deceive. But it can hardly be avoided in some portions of a scientific dissertation: for example, there are only so many ways of stating that DNA was recovered by precipitation with two volumes of ethanol, followed by centrifugation.
Where does all this leave us? I believe we need to devise a universally agreed code of practice, accompanied by clear vetting procedures that specify the responsibility of the supervisor, the department and faculty in the process. Of course, we have not reached a globally accepted definition of what actually constitutes plagiarism, nor even what is a PhD. But, as in many areas of academic life, there is a serious danger that if we don't do it ourselves, some ghastly state bureaucracy will end up forcing us to do it ‘their’ way.
To protect the identity and reputation of third parties, some ‘facts’ in the above account have been deliberately falsified.

May 25, 2011

Copy and paste - NATURE

Editorial
Nature 473, 419–420 (26 May 2011)
A slow university investigation into serious accusations of misconduct benefits no one.
As retractions go, it may not look like a big deal. Earlier this month, a statistics journal decided to pull a little-cited 2008 paper on the social networks of author–co-author relationships after it emerged that sections were plagiarized from textbooks and Wikipedia. The fact that this caused a wave of glee to ripple through the climate-change blogosphere takes some explaining.
Two of the paper's authors, Yasmin Said and Edward Wegman, both of George Mason University in Fairfax, Virginia, are also authors of an infamous 2006 report to Congress, co-written with statistician David Scott of Rice University in Houston, Texas. That report took aim at climatologist Michael Mann of Pennsylvania State University in University Park, suggesting that he was working in an isolated social network separated from “mainstream statisticians”, and that he had such close ties with the rest of the field that truly independent peer review of his work was not possible. This report came to be known as the Wegman report, and has been frequently cited by climate-change sceptics.
This social-network analysis of Mann and his co-authors — with Mann's name removed — was cut down to an academic paper and published two years later in the journal Computational Statistics & Data Analysis. It is this paper that the journal has decided to retract. So it seems likely that the plagiarism in the 2008 paper is also present in the 2006 Congress report. Still not a big deal?
That doubts about the 2006 report have resulted in concrete action is mainly down to the sterling work of an anonymous climate blogger called Deep Climate. His website first reported plagiarism in a different section of the congressional report in December 2009. One of those whose work was plagiarized is Raymond Bradley, director of the Climate System Research Center at the University of Massachusetts, Amherst. Ironically, Bradley was one of the co-authors of the climate reconstructions criticized by the Wegman report. Bradley, alerted by Deep Climate, complained to George Mason University on 5 March last year.
Wegman has blamed a graduate student for the plagiarism. Daniel Walsch, spokesperson for George Mason University, says that an internal review of the matter began in the autumn. He cannot estimate when that review will be complete, and, until it is, he says, the university regards it as a “personnel matter” and will not comment further. He adds that the review is still in the “inquiry” phase to ascertain whether a full investigation should be held. “Whether it is fast or slow is not as important as it being thorough and fair,” says Walsch.
The fact that 14 months have passed since Bradley's complaint without it being resolved is disheartening but not unusual. An examination of George Mason University's misconduct policies suggests that investigations should be resolved within a year of the initial complaint, including time for an appeal by the faculty member in question. According to the university's own timeline, the initial inquiry should have been complete within 12 weeks of the initial complaint — in May 2010. But there are loopholes galore for extensions, and, like many universities, George Mason seems content to drag its feet.
Long misconduct investigations do not serve anyone, except perhaps university public-relations departments that might hope everyone will have forgotten about a case by the time it wraps up. But in cases such as Wegman's, in which the work in question has been cited in policy debates, there is good reason for haste. Policy informed by rotten research is likely to have its own soft spots. Those who have been wronged deserve resolution of the matter. And one can hardly suppose that those who have been wrongfully accused enjoy living under a cloud for months.
So, what incentives do universities have to pick up the pace? Agencies such as the US Office of Research Integrity and ethics offices at funding bodies should take universities to task for slow investigations and demand adherence to the schedules listed in university policies. However, the agencies themselves haven't exactly been models of swift justice. The most recent annual report from the Office of Research Integrity — for 2008 — reported that the cases closed in that year spent a mean of 14.1 months at the agency. Perhaps it should fall to accreditation agencies to push for speedy investigations. Tom Benberg, vice-president of the Commission on Colleges of the Southern Association of Colleges and Schools — the agency that accredits George Mason University — says that his agency might investigate if the university repeatedly ignored its own policies on the timing of misconduct inquiries. To get the ball rolling, he says, someone would have to file a well-documented complaint.
Even if funding and accreditation agencies fail to apply pressure, universities should take the initiative to move investigations along as speedily as possible while allowing time for due process. Once an investigation is complete, the institution should be as transparent as it can about what happened. Especially when public funds are involved, or at public universities, the taxpayer has a right to know what happened when papers are retracted — even if the faculty member in question is eventually exonerated. This tidies the scientific record, clears the air and kicks the legs out from under any conspiracy theories. Over to you, George Mason University.

May 3, 2011

How can journal editors detect and deter scientific misconduct?

Misconduct happens. So what can journal editors do to find and prevent it?

While we don’t claim to be experts in working on the other side of the fence — e.g. as editors — Ivan was flattered to be asked by session organizers at the Council of Science Editors to appear on a panel on the subject. He was joined on the panel by:

Science executive editor Monica Bradford

Annals of Internal Medicine editor in chief Christine Laine

American Association for Cancer Research publisher Diane Scott-Lichter

Committee on Publication Ethics’s Liz Wager

Their presentations were chock-full of good tips and data. Bradford, for example, said that Science had published 45 retractions since 1997. And Laine recommended copying all of a manuscript’s authors on every communication, which could help prevent the author forgery that seems to be creeping into the literature.

March 14, 2011

Notes on a scandal

EDITORIAL
Nature 471, 135–136, doi:10.1038/471135b

How an organism is affected by a particular gene mutation, as every geneticist knows, depends on that organism's genetic background. Although an obesity mutation introduced into one strain of mouse might produce a fat animal with diabetes, the same mutation in a mouse strain of slightly different genetic background could create a fat but otherwise healthy animal.
Similarly, the effects of a cry of academic distress seem to depend on a community's societal background. How else to explain the contrasting results of two academic revelations: the plagiarism affair that consumed Germany for two weeks until academic disapproval forced the resignation of the defence minister, Karl-Theodor zu Guttenberg, on 1 March — and an exposé of comparable wrongdoing by the Italian minister of education, Mariastella Gelmini, in 2008, which had zero impact.
The German scandal broke on 16 February, when the daily newspaper Süddeutsche Zeitung revealed that the hugely popular Guttenberg had apparently taken a short cut to his doctorate in law by copying other published works without attribution in his thesis. The report sparked an intense reaction hard to imagine in countries such as the United States and Britain, where the academic achievements (if any) or failures of politicians are not considered serious issues.
German citizens looked to the Internet to discover the extent of Guttenberg's plagiarism, which turned out to be quite shameless. The University of Bayreuth withdrew his PhD and is now investigating whether he had just been careless or had intended to deceive. At first, Guttenberg attempted to underplay the importance of “inadequate footnotes” in a thesis; the issue faded to insignificance, he implied, next to his momentous political mission of reorganizing the German armed forces and controlling their presence in Afghanistan. His popularity among the general public remained undiminished, and Chancellor Angela Merkel, herself a PhD physicist, tried to limit damage to her government by saying that she had “hired a politician, not a scientific assistant”. That was a fatal mistake. Within days, tens of thousands of PhD holders had signed a letter deploring her “mockery” of an academic system that represented decency, honour and responsibility — attributes that they insisted should be reflected in a democratic government. Crushed by this attack of righteousness, Guttenberg finally resigned.
Like Guttenberg, Gelmini was a graduate in law. And like him, she felt that her driving ambition justified taking short cuts in academic procedures to get the degree that would help her political career. In 2001 she travelled from her home town of Brescia in the north of Italy to Reggio Calabria, in the far south, to sit her bar exams. At the time, pass rates in the north were below 10%, compared with a rate of suspiciously more than 90% in Reggio Calabria, a city otherwise known for low academic standards. After the press revealed the Reggio Calabria bar exam to be a scam, the Italian academic community called for Gelmini's resignation — to no avail. The irony of having a minister with responsibility for universities who herself cheerfully admits to having dodged academic rules is not lost on the community.
In Germany, Italy and neighbouring countries in Europe, politicians are frequently drawn from academia. Credentials help political careers, and nearly 20% of the German parliament hold PhDs. But then, almost 9% of Italian parliamentarians are university professors, so the differing reactions to calls for resignation prompted by scholastic misdemeanours cannot be down to ignorance about how universities work. Instead, the difference seems to be based on how large a threat each government considers the weapon of moral correctness to be — and how dangerous is the academic community wielding that weapon.
Should anyone really have expected the government of Silvio Berlusconi to fear such a weapon?
It is more surprising, and gratifying, to find that in Germany, one of the world's richest and most powerful countries, rage against an academic cheat can provoke serious consequences. Not only was Guttenberg popular, but he hadn't previously made any serious political errors that would have seen charges of plagiarism considered the last straw.
Still, there may not be a lesson for many other countries here. Germany is known as the 'country of poets and philosophers' — a rare societal background, and one apparently conducive to propagation of honourable academic values. Like our more fortunate mutant mouse, all there seems plump and healthy, even as it remains unfathomably mysterious to those on the outside.

November 4, 2010

A painful remedy - NATURE

EDITORIAL
Nature 468, 6, doi:10.1038/468006b
Published online 03 November 2010

The number of papers being retracted is on the rise, for reasons that are not all bad.
Few experiences can be more painful to a researcher than having to retract a research paper. Some papers die quietly, such as when other scientists find that the work cannot be replicated and simply ignore it. Yet, as highlighted by several episodes in recent years, the most excruciating revelation must be to find not only that a paper is wrong, but that it is the result of fraud or fabrication, which itself requires months or years of investigation. Where once the research seemed something to be exceptionally proud of, the damage caused by fraudulent work can spread much wider, as discovered by associates of the German physicist Jan Hendrik Schön and the South Korean stem-cell biologist Woo Suk Hwang. But whatever the reason for a retraction, all of the parties involved — journals included — need to face up to it promptly.
This year, Nature has published four retractions, an unusually large number. In 2009 we published one. Throughout the past decade, we have averaged about two per year, compared with about one per year in the 1990s, excluding the pulse of retractions of papers co-authored by Schön.
Given that Nature publishes about 800 papers a year, the total is not particularly alarming, especially because only some of the retractions are due to proven misconduct. A few of the Nature research journals have also had to retract papers in recent years, but the combined data do no more than hint at a trend. A broader survey revealed even smaller proportions: in 2009, Times Higher Education commissioned a survey by Thomson Reuters that counted 95 retractions among 1.4 million papers published in 2008. But the same survey showed that, since 1990 — during which time the number of published papers doubled — the proportion of retractions increased tenfold (see http://go.nature.com/vphd17).
So why the increase? More awareness of misconduct by journals and the community, an increased ability to create and to detect unduly manipulated images, and greater willingness by journals to publish retractions must account for some of this rise. One can also speculate about the increasing difficulty for senior researchers of keeping track of the detail of what is happening in their labs. This is of concern not just because of the rare instances of misconduct, but also because of the risk of sloppiness and of errors not being caught. Any lab with more than ten researchers may need to take special measures if a principal investigator is to be able to assure the quality of junior members' work.
The need for quality assurance and the difficulties of doing it are exacerbated when new techniques are rapidly taken up within what is often a highly competitive community. And past episodes have shown the risk that collaborating scientists — especially those who are geographically distant — may fail to check data from other labs for which, as co-authors, they are ultimately responsible.
If we at Nature are alerted to possibly false results by somebody who was not an author of the original paper, we will investigate. This is true even if the allegations are anonymous — some important retractions in the literature have arisen from anonymous whistle-blowing. However, we are well aware of the great damage that can be done to co-authors as a result of such allegations, especially when the claims turn out to be false. Such was the case with a recent e-mail alert widely distributed by a group calling itself Stem Cell Watch (see Nature 467, 1020; 2010) — an action that we deplore.
For our part, we are sensitive to such concerns and will bear in mind the need to protect the interests of authors until our obligation to the community at large becomes clear. But then we will publish a retraction promptly, and link to it prominently from the original papers. We will also list the retraction on our press release if the original paper was itself highlighted to the media.
Ultimately, it comes down to the researchers — those most affected by the acts — to remain observant and diligent in pursuing their concerns wherever they lead, and where necessary, to correct the literature promptly. Too often, such conscientious behaviour is not rewarded as it should be.

October 2, 2010

Understanding Publication Ethics

Geraldine S. Pearson* 
A recent survey of 524 editors of Wiley-Blackwell science journals (including nursing journals) asked about the severity and frequency of ethical issues, editor confidence in handling these, and awareness of COPE guidelines (Wager, Fiack, Graf, Robinson, & Rowlands, 2009). Nearly half of the queried editors responded to the survey and, interestingly, most believed that misconduct occurred only rarely in their journals. Are editors too trusting in their belief that submissions will be ethical and free of publication misconduct? Is there denial that this issue actually occurs or “never in my journal”? Is it easier to assume that journal submissions will be free of ethical issues? The answers to these questions are unclear. I can share that at the yearly International Academy of Nursing Editors meetings, the issue of ethics and quality in publications is frequently and passionately discussed by attendees. I also know how painful it is to deal with an ethical issue around a journal submission.

*Pearson, G. S. (2010), Understanding Publication Ethics. Perspectives in Psychiatric Care, 46: 253–254. doi: 10.1111/j.1744-6163.2010.00276.x

July 8, 2010

Plagiarism pinioned

NATURE/EDITORIAL  doi:10.1038/466159b Published online 07 July 2010
There are tools to detect non-originality in articles, but instilling ethical norms remains essential
It is both encouraging and disheartening to hear that major science publishers intend to roll out the CrossCheck plagiarism-screening service across their journals (see page 167).
What is encouraging is that many publishers are not only tackling plagiarism in a systematic way, but have agreed to do so by sharing the full text of their articles in a common database. This last was not a given, considering the conservatism of some companies, yet it was a necessary step for the service to function — the iThenticate software used by CrossCheck works by comparing submitted articles against a database of existing articles. CrossCheck's 83 members have already made available the full text of more than 25 million articles.
What is disheartening is that plagiarism seems pervasive enough to make such precautions necessary. In one notable pilot of the system on three journals, their publisher had to reject 6%, 10% and 23% of accepted papers, respectively.
Granted, there are reasons to believe that such levels of plagiarism are exceptional. Previous studies of samples on the physics arXiv preprint server (see Nature 444, 524–525; 2006) and of PubMed abstracts (see Nature doi:10.1038/news.2008.520; 2008) found much lower rates. But the reality is that data are sorely lacking on the true extent of plagiarism, whether its prevalence is growing substantially and what differences might exist between disciplines. The hope is that the roll-out of CrossCheck will eventually yield reliable data on such questions over wide swathes of the literature — while also acting as a powerful deterrent to would-be plagiarists.
In the process, editors and publishers must remember that plagiarism comes in many varieties and degrees of severity, and that responses should be proportionate. For example, past studies suggest that self-plagiarism, in which a researcher copies his or her own words from a published paper, is far more common than plagiarism of the work of others. Arguably, self-plagiarism can sometimes be justified, as when a researcher is bringing similar ideas before readers of journals in a different field. All plagiarism can also involve honest errors or mitigating circumstances, such as a scientist with a poor command of English paraphrasing some sentences of the introduction from similar work.
Such examples underscore that plagiarism-detection software is an aid to, not a substitute for, human judgement. One rule of thumb used by Nature journals and others in considering an article's degree of similarity to past articles — in particular, for small amounts of self-plagiarism in review articles — is whether the paper is otherwise of sufficient originality and interest.
Nature Publishing Group is a member of CrossCheck and has been testing the service on submissions to its own journals. It has noted only trace levels of plagiarism in research articles, which are spot-checked, and often in only the supplementary methods. Plagiarism has been more common in submitted reviews, all of which are tested. This is particularly true in clinical reviews, although the rates are still far below the 1% mark, and in most instances concerned some level of self-plagiarism.
Although the ability to detect plagiarism is a welcome advance, addressing the problem at its source remains the key issue. More and more learned societies, research institutions and journals have in recent years adopted comprehensive ethical guidelines on plagiarism, many of which carefully distinguish between different levels of severity. It is crucial that research organizations in all countries, and particularly the mentors of young researchers, instil in their scientists the accepted norms of the international scientific community when it comes to plagiarism and publication ethics.

March 3, 2010

January 9, 2010

Scientific fraud: action needed in China - THE LANCET

THE LANCET, Volume 375, Issue 9709, Page 94, 9 January 2010

Editorial
On Dec 19, 2009, editors at Acta Crystallographica Section E alerted the scientific community to a disgraceful pattern of fraud involving papers they had published in 2007. At least 70 false crystal structures were reported—mainly from two groups led by Hua Zhong and Tao Liu, both at Jinggangshan University, Ji'an, China. All authors have now agreed to retraction of 41 papers published by Zhong and 29 by Liu. It is rather surprising that wrongdoing on such a scale evaded detection during peer review and, considering that crystal structures are deposited in public databases upon publication, that the truth has been uncovered so slowly.
In China, the government controls almost all funding for research. As in other countries, to gain funding researchers need to publish as many papers in high impact journals as possible. According to Science Citation Index and other resources, Chinese authors published 271 000 papers in 2008, roughly 11·5% of the world's total. This incident is not the first time that scientific fraud has occurred in China. Regulations to monitor state-funded research projects were announced in 2006 by the Ministry of Science and Technology in response to six high-profile cases of scientific misconduct. A new circular was issued on March 19, 2009, aimed at preventing misconduct in higher education institutions—punishment for breaching the new rules could involve warnings, dismissal, or legal action. Research programmes could be suspended or terminated, funding could be withdrawn, or awards and honours revoked.
Such extensive fraud is disappointing—not only does it indicate a substantial waste of research time and money, but it is likely that, whatever punishments do result, damage to the reputations of the researchers, institutions, and journal concerned is likely to be disproportionately great. Clearly, China's Government needs to take this episode as a cue to reinvigorate standards for teaching research ethics and for the conduct of research itself, as well as establishing robust and transparent procedures for handling allegations of scientific misconduct to prevent further instances of fraud.
For Hu Jintao's goal of China becoming a research superpower by 2020 to be credible, China must assume stronger leadership in scientific integrity.

August 30, 2009

Self-plagiarism: unintentional, harmless, or fraud?

THE LANCET
Volume 374, Issue 9691, 29 August–4 September 2009, Page 664
Editorial

The intense pressure to publish to advance careers and attract grant money, together with decreasing time available for busy researchers and clinicians, can create a temptation to cut corners and maximise scientific output. Journals are increasingly seeing submissions in which large parts of text have been copied from previously published papers by the same author.
Whereas plagiarism—copying from others—is widely condemned and regarded as intellectual theft, the concept of self-plagiarism is less well defined. Some have argued that it is impossible to steal one's own words. The excuse editors hear when confronting authors about self-plagiarism is that the same thing can only be said in so many words. This might sometimes be legitimate, perhaps for specific parts of a research paper, such as a methods section. However, when large parts of a paper are a word-for-word copy of previously published text, authors' claims that they have inadvertently used exactly the same wording stretch credibility.
There is a clear distinction between self-plagiarism of original research and review material. Republishing large parts of an original research paper is redundant or duplicate publication. Publishing separate parts of the same study with near identical introduction and methods sections in different journals is so-called salami publication. Both practices are unacceptable and will distort the research record. Self-plagiarism in review or opinion papers, one could argue, is less of a crime with no real harm done. It is still an attempt to deceive editors and readers, however, and constitutes intellectual laziness at best.
Deception is the key issue in all forms of self-plagiarism, including in reviews. Few editors will knowingly republish a paper that contains large parts of previously published material. Few readers will happily read the same material several times in different journals. An attempt to deceive amounts to fraud and should not be tolerated by the academic community.

July 12, 2009

The insider’s guide to plagiarism

Editorial
Nature Medicine, 707 (2009)

Scientific plagiarism—a problem as serious as fraud—has not received all the attention it deserves.

Reduced budgets are affecting research just as they are every sector of the economy. So, how can struggling scientists increase their chances of securing their share of financial resources in these tough times? Publish, of course!

What? You don’t have the resources to do the experiments? Don’t worry! A little creative writing might be all you need to sail through the financial crisis. Here’s how: use a solid paper as your base; carry out a parallel set of experiments in your favorite model; tweak the data so that the numbers are not identical but remain realistic; and, when you’re ready to write it all up, paraphrase the original paper ad libitum. Last, submit your new manuscript to a modest journal in the hopes that the authors of the paper you used as ‘inspiration’ won’t notice your ‘tribute’ to their work—even though imitation is supposed to be the sincerest form of flattery, their approval of your ‘reworking’ of their paper cannot be guaranteed. If all goes well, getting a couple of these manuscripts under your belt might make all the difference when you apply for that elusive grant.

Does this strategy work? Unfortunately, all too often it does, even though many eyes examine every paper before it ends up on a printed page. And when scrutiny identifies cases of potential plagiarism, serious corrective action doesn’t always take place. Consider a recent report (Science 323, 1293–1294, 2009) in which software tools and manual comparison helped identify cases of suspected plagiarism. When the authors of 163 suspicious studies were contacted, about 30% disavowed misconduct, and over 20% of coauthors claimed no involvement in writing the papers.
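The software tools mentioned in the Science report typically flag suspect papers by measuring how many word sequences two documents share. A minimal sketch of one common approach, word n-gram overlap scored with Jaccard similarity (the tokenization and threshold here are illustrative assumptions, not the method used in the report):

```python
def ngrams(text, n=3):
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a, b, n=3):
    """Fraction of shared n-grams between two texts, from 0.0 to 1.0."""
    na, nb = ngrams(a, n), ngrams(b, n)
    if not na or not nb:
        return 0.0
    return len(na & nb) / len(na | nb)
```

Scores close to 1.0 indicate near-verbatim copying; production tools add stemming, stop-word handling, and large-scale database search on top of this basic idea.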

July 10, 2009

The truth will out

Editorial

Nature Physics 5, 449 (2009)

Fraud in science is difficult to spot immediately, but, as high-profile cases show, it does get found out. Tackling plagiarism is at least becoming an easier fight.

Introduction
Scientific misconduct comes in many forms. Fabrication lies at one extreme, but plagiarism and 'citation amnesia' are more common. Some have come to question the peer review system, especially following the spectacular cases of Hendrik Schön and Scott Reuben. Schön was a Bell Labs researcher whose organic field-effect transistors exhibited the fractional quantum Hall effect, superconductivity, lasing, you name it. That he didn't keep a lab book or any raw data during his PhD would already constitute bad practice, but then he went on to actually fabricate data. In 2002, a committee found him guilty of scientific misconduct on 16 out of 24 allegations, and at least 21 of his published papers have since been retracted (a new book chronicling Schön's rise and fall is reviewed on p451 of this issue). Reuben's case came to light in March 2009, when 21 of his papers containing faked data were retracted from anaesthesiology journals. Millions of patients have been treated according to his studies of combinations of drugs for pain relief. In many cases, the patients in his clinical trials were made up.



March 10, 2009

Plagiarism and other scientific misconducts

EDITORIAL

Journal of Cancer Research and Clinical Oncology

K. Höffken and H. Gabbert

When we were young scientists, we heard about the 'games authors play' and learned that the results of scientific work were published by the same authors, in different order, in different journals. The content of these publications differed only slightly from each other (e.g., by omitting one table or figure and adding another), and the conclusions were almost identical.

When we grew older, we encountered affairs of scientific misconduct ranging from copying text from other scientific papers up to faking results.

When we became editors of our journal, we hoped that we would be immune from such assaults. However, we had to learn that each of the above examples of plagiarism and of other scientific misconduct could happen to us. We encountered double publications, learned that authors sent manuscripts simultaneously to more than one journal, and were informed that authors copied and pasted text (as can be seen from the example below).

Original version
Recent technologic advances now make it feasible to better tackle the methodological challenges of detecting EBV in breast cancers. Consequently, a critical next step in understanding this relationship is to apply detection strategies that are sensitive and specific for EBV and able to localize the EBV to particular benign or malignant cells within the tissue. A recent National Cancer Institute recommendation specifies an approach combining real-time quantitative PCR, which allows measurement of the amount of viral DNA in archival tissue samples, with laser capture microdissection to improve localization of viral DNA to benign or malignant components of a tissue sample (90).

Plagiarized version
Recent technological advances now make it feasible to better tackle the methodological challenges of detecting virus in breast cancers. A critical next step in understanding this relationship is to apply detection strategies that are sensitive and specific for virus and able to localize this agent to particular malignant cells within the tissue. A recent National Cancer Institute recommendation specifies an approach combining real-time quantitative PCR, which allows measurement of the amount of viral load in archival tissue samples, with laser capture microdissection to improve localization of viral nucleic acid to benign or malignant components of a tissue sample.
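The overlap between passages like the two above can be quantified with a longest-matching-subsequence ratio. A minimal sketch using Python's standard difflib on shortened excerpts of the quoted text (the word-level comparison is an illustrative choice, not a standard forensic procedure):

```python
from difflib import SequenceMatcher

original = ("Recent technologic advances now make it feasible to better "
            "tackle the methodological challenges of detecting EBV in breast cancers.")
suspect = ("Recent technological advances now make it feasible to better "
           "tackle the methodological challenges of detecting virus in breast cancers.")

# Compare word sequences; ratio() returns a similarity score in [0, 1].
# Values near 1 suggest near-verbatim copying with only token substitutions.
score = SequenceMatcher(None, original.lower().split(), suspect.lower().split()).ratio()
print(f"similarity: {score:.2f}")
```

Here only two words differ ("technologic"/"technological" and "EBV"/"virus"), so the score lands close to 0.9, well above what independent writing would produce.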


What did we learn from these facts?

1. Science is neither immune from fraud and misconduct nor free of bad scientists. Fortunately, these are exceptions!

2. Journals are not protected against these assaults.

3. Even the best prevention system could not stop this from happening to us, and it will happen again.

What can we do to improve our prevention mechanisms?

1. We count on the readiness and awareness of our readers.

2. We will relentlessly denounce the criminal methods and their originators.

3. We will put such persons on a blacklist and urge other journals to deny them the right to publish.

Please support us with our efforts. Do not hesitate to inform us about any irregularity, violation or infringement.

J Cancer Res Clin Oncol (2009) 135:327–328

March 9, 2009

Plagiarism in Scientific Publications

Editorial Article

Peter R. Mason

Biomedical Research & Training Institute, Harare, Zimbabwe

J Infect Developing Countries 2009; 3(1):1-4.

March 7, 2009

Combating plagiarism

Editorial

Nature Photonics 3, 237 (2009)
doi:10.1038/nphoton.2009.48

Accountability of coauthors for scientific misconduct, guest authorship, and deliberate or negligent citation plagiarism highlights the need for accurate author contribution statements.

March 1, 2009

Borrowing words, or claiming them?

Editorial

Nature Immunology 10, 225 (2009)
doi:10.1038/ni0309-225

Journals are taking steps to stem the practice of plagiarism.

Have you ever experienced a sense of déjà vu after reading a colleague's manuscript or researching a topic of interest? A paragraph or entire section sounds eerily familiar—too familiar, perhaps, because it is a word-for-word (or nearly so, with a few synonyms tossed in) replication of another piece written by different authors. Or maybe a result or hypothesis is claimed to be 'novel' but in fact others have reported such findings and the previous work is not cited. Or the same data are presented in both earlier and subsequent publications from an author, but the later publication fails to acknowledge the fact that the data were included in the earlier work. Are any of these situations acceptable? In fact they are not. All three scenarios represent examples of what can be considered plagiarism.


September 6, 2008

Ethics in science: Are we losing the moral high ground?


Associate Editor,
Saudi J Gastroenterol 2008;14:107-8


In the competitive world of academia, a person's worth is often ostensibly gauged by one's scientific contribution, wherein the 'article count' has become the simplistic measure of this contribution. The number and frequency of publications reflect an academic's stature in the scientific community, and hence the race to publish and increase this 'article count' has become an end unto itself. Sadly though, the overriding desire to publish sometimes defeats the very purpose of scientific contribution as, unsurprisingly, even the learned may cheat.

August 5, 2008

Editorial Announcement: Withdrawal of Chin. Phys. Lett. 24 (2007) 355

CHIN.PHYS.LETT.
Vol. 25, No. 8 (2008) 3094

This paper was submitted on 13 October 2006 and appeared in the February 2007 issue of Chinese Physics Letters. It later also appeared as arXiv:gr-qc/0704.0525 in April 2007.

As noted recently by the arXiv administrator, this paper plagiarized an earlier arXiv paper (gr-qc/0410004) by M. Sharif and T. Fatima, which also appeared in Int. J. Mod. Phys. A 20 (2005) 4309, and another arXiv paper (gr-qc/0603075) by R. M. Gad.

This article by S. Aygun et al. should not have been submitted for publication owing to such substantial replication of earlier papers. Chinese Physics Letters hereby declares the withdrawal of the paper 'On the Energy–Momentum Problem in Static Einstein Universe' by S. Aygun, I. Tarhan, and H. Baysal, published in Chinese Physics Letters 24 (2007) 355.

It is unfortunate that this plagiarism was not detected before going to press. I apologize to the readers of Chinese Physics Letters and to Dr M. Sharif, Dr T. Fatima, and Dr R. M. Gad for such an oversight.

Editor: ZHU Bang-Fen

Editorial Announcement: Withdrawal of Chin. Phys. Lett. 24 (2007) 1821

CHIN.PHYS.LETT.
Vol. 25, No. 8 (2008) 3094

This paper was submitted on 1 February 2007 and appeared in the July 2007 issue of Chinese Physics Letters. It later also appeared as arXiv:gr-qc/0707.1776 in July 2007.

As noted recently by the arXiv administrator, this paper plagiarized an earlier arXiv paper (gr-qc/0508005) by I. Radinschi and Th. Grammenos, which also appeared in Int. J. Mod. Phys. A 21 (2006) 4309.

This article by M. Aygun et al. should not have been submitted for publication owing to such substantial replication of an earlier paper. Chinese Physics Letters hereby declares the withdrawal of this paper ‘Moller Energy–Momentum Complex in General Relativity for Higher Dimensional Universes’ by M. Aygun, S. Aygun, I. Yilmaz, H. Baysal, and I. Tarhan published in Chinese Physics Letters, 24 (2007) 1821.

It is unfortunate that this plagiarism was not detected before going to press. I apologize to the readers of Chinese Physics Letters and to Dr I. Radinschi and Dr Th. Grammenos for such an oversight.

Editor: ZHU Bang-Fen

July 1, 2008

Publish or perish, but at what cost?

J Clin Invest. 2008 July 1; 118(7): 2368. doi: 10.1172/JCI36371.

Ushma S. Neill, Executive Editor
The academic scientific enterprise rewards those with the longest CVs and the most publications. Under pressure to generate voluminous output, scientists often fall prey to double publishing, self-plagiarism, and submitting the "minimal publishable unit." Are these ethical gray areas, or true transgressions?
I’ve taken to the editorial page in the past to discuss what is and is not allowed in the JCI vis-à-vis manipulation of images. Here, I want to discuss a grayer area of potential violations — those that concern ethics in writing. Specifically, is publishing the same set of data twice acceptable (clearly not), is using the same text in several articles plagiarism (perhaps), and is publishing newly obtained data after the fact acceptable (maybe)?

