December 17, 2010

Top retractions of 2010 - The Scientist - Magazine of the Life Sciences

Jef Akst

Retractions are a scientist's worst nightmare. In the last 10 years, at least 788 scientific papers have been pulled from the literature, according to a study published this year in the Journal of Medical Ethics. Whether it is a result of research misconduct, duplicate publication, or simply sloppy data analysis, a retracted paper can devastate a scientist's research, or even impact a whole scientific field.
Here are 10 of the biggest retraction stories of the last year. >>>

December 8, 2010

Self-plagiarism case prompts calls for agencies to tighten rules - SCIENTIFIC AMERICAN

Eugenie Samuel Reich - Nature 468, 745 (2010)

Is plagiarism a sin if the duplicated material is one's own? Self-plagiarism may seem a smaller infraction than stealing another author's work, but the practice is under increasing scrutiny, as the eruption two weeks ago of a long-standing controversy at Queen's University in Kingston, Canada, makes clear.

Colleagues of Reginald Smith, an emeritus professor of mechanical and materials engineering at Queen's, say that up to 20 of Smith's papers contain material copied without acknowledgment from previous publications. University officials first learned of the duplications in 2005, and they eventually led to an investigation by the Natural Sciences and Engineering Research Council (NSERC), which funded some of Smith's work, including experiments on board the U.S. space shuttles. Although Smith avoided censure for research misconduct, three papers were subsequently retracted by the Annals of the New York Academy of Sciences and one by the Journal of Materials Processing Technology. The situation was recently made public in news reports and has led to calls for stronger powers by funding agencies in Canada to discipline researchers who engage in the practice.

"He was a very good scientist, but something happened and he got into this business of duplicating papers," says Chris Pickles, a metallurgist at Queen's who raised concerns about Smith's publication practices after spotting some duplications under Smith's name while searching an online database. Smith referred a request for comment to his lawyer, Ken Clark of law firm Aird and Berlis in Toronto, Canada, who notes that many of the republications duplicated material from conference proceedings, which in an earlier epoch would not usually have been published. He also notes that Smith is retired, and does not stand to gain financially from his republications.

Many researchers say that republication without citation violates the premise that each scientific paper should be an original contribution. It can also serve to falsely inflate a researcher's CV by suggesting a higher level of productivity. And although the repetition of the methods section of a paper is not necessarily considered inappropriate by the scientific community, "we would expect that results, discussion and the abstract present novel results," says Harold Garner, a bioinformatician at Virginia Polytechnic Institute and State University in Blacksburg. Garner's research group used an automated software tool to check the biomedical literature for duplicated text, and identified more than 79,000 pairs of article abstracts and titles containing duplicated wording. He says work on the database of partly duplicated articles--called Déjà vu--has led to close to 100 retractions by journal editors who found the reuse improper. An analysis by Garner in press at Urologic Oncology shows that while the total quantity of biomedical literature has risen steadily since 2000, cases of republication stopped rising after 2003 and fell sharply between 2006 and 2008 (see graph). "It actually does look like it's getting better," says Garner. "People who would ordinarily step across the line are not doing it."

He credits increased vigilance by journal editors who are using his free tool or commercially available software to check submissions for repeated text and halt dubious papers before they reach publication.
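The internal workings of eTBLAST and similar screening tools are not described here. As a rough, hypothetical sketch of how this kind of text-similarity screening can work (the trigram size and flagging threshold are arbitrary assumptions, not the actual eTBLAST method):

```python
# Illustrative sketch of duplicate-text screening via word 3-gram Jaccard similarity.
# This is NOT eTBLAST's algorithm -- just the general idea behind such tools.

def ngrams(text, n=3):
    """Return the set of word n-grams in a text (case-folded)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity between the word n-gram sets of two abstracts."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

# Invented example abstracts:
original = "Exposure to electromagnetic fields may increase cancer risk in occupational settings."
copied = "Exposure to electromagnetic fields may increase cancer risk in some occupational settings."
unrelated = "We describe a randomized trial of postoperative analgesia after thoracic surgery."

print(similarity(original, copied))     # high overlap -> flag the pair for human review
print(similarity(original, unrelated))  # no shared trigrams -> 0.0
```

A screening pipeline would compare each new submission against an indexed corpus and send only high-scoring pairs to an editor, since textual overlap alone (e.g., a boilerplate methods phrase) does not prove misconduct.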

NSERC's policy on integrity in research makes no specific reference to plagiarism or self-plagiarism, which has led to calls for tougher rules in the wake of the publicity over Smith's case. In the United States, the National Science Foundation (NSF) takes a strong stance on plagiarism in general, says Christine Boesz, who was inspector-general at the NSF from 1999 until 2008. "The NSF got into the plagiarism game early," she says. Numbers obtained by Nature under the US Freedom of Information Act show that, since 2007, the agency has found between 5 and 13 cases of plagiarism each year. In contrast, the U.S. Department of Health and Human Services' Office of Research Integrity (ORI), which is responsible for overseeing alleged plagiarism associated with National Institutes of Health research, has reported no cases of plagiarism of text over the past three years, but has found up to 14 scientists a year guilty of falsification or fabrication of data (see Table 1).

Ann Bradley, a spokeswoman for the ORI, says the office's working definition of plagiarism excludes minor cases. Nick Steneck, director of research ethics and integrity at the University of Michigan in Ann Arbor, says authorities worldwide should adopt a uniform misconduct policy that provides clear guidance not only on data falsification and fabrication but also on lesser ethical breaches--such as self-plagiarism.

November 30, 2010

Sultans of swap: Turkish researchers plagiarized electromagnetic fields-cancer paper, apparently others

The Bosnian Journal of Basic Medical Sciences has retracted a paper it published in August by Turkish researchers on the potential cancer risks associated with exposure to electromagnetic fields, or EMFs.
The reason: Other people wrote nearly all of it. >>>

November 24, 2010

Plagiarists plagiarized: A daisy chain of retractions at Anesthesia & Analgesia

Adam Marcus
If a plagiarist plagiarizes from an author who herself has plagiarized, do we call it a wash and go for a beer?
That scenario is precisely what Steven L. Shafer, MD, found himself facing recently. Dr. Shafer, editor-in-chief of Anesthesia & Analgesia (A&A), learned that authors of a 2008 case report in his publication had lifted two-and-a-half paragraphs of text from a 2004 paper published in the Canadian Journal of Anesthesia.
A contrite retraction letter, which appears in the December issue of A&A, from the lead author, Sushma Bhatnagar, MD, of New Delhi, India, called the plagiarism “unintended” and apologized for the incident. Straightforward enough.
But then things get sticky. Amazingly, the December issue of A&A also retracts a 2010 manuscript by Turkish researchers who, according to Dr. Shafer, plagiarized from at least five other published papers—one of which happens to have been a 2008 article by Dr. Bhatnagar in the Journal of Palliative Medicine.
“Dr. Bhatnagar’s paper in Anesthesia & Analgesia was retracted because it contained text taken from a paper by Dr. Munir,” Dr. Shafer told Anesthesiology News. “However, Dr. Bhatnagar’s paper in the Journal of Palliative Medicine is one of the source journals for the plagiarism by Dr. Memiş. To give you an idea how widespread this is, we recently rejected a paper that copied large blocks of text from a paper by Dr. Memiş.”>>>

November 4, 2010

A painful remedy - NATURE

EDITORIAL
Nature 468, 6 (2010); doi:10.1038/468006b
Published online 03 November 2010

The number of papers being retracted is on the rise, for reasons that are not all bad.
Few experiences can be more painful to a researcher than having to retract a research paper. Some papers die quietly, such as when other scientists find that the work cannot be replicated and simply ignore it. Yet, as highlighted by several episodes in recent years, the most excruciating revelation must be to find not only that a paper is wrong, but that it is the result of fraud or fabrication, which itself requires months or years of investigation. Where once the research seemed something to be exceptionally proud of, the damage caused by fraudulent work can spread much wider, as discovered by associates of the German physicist Jan Hendrik Schön and the South Korean stem-cell biologist Woo Suk Hwang. But whatever the reason for a retraction, all of the parties involved — journals included — need to face up to it promptly.
This year, Nature has published four retractions, an unusually large number. In 2009 we published one. Throughout the past decade, we have averaged about two per year, compared with about one per year in the 1990s, excluding the pulse of retractions of papers co-authored by Schön.
Given that Nature publishes about 800 papers a year, the total is not particularly alarming, especially because only some of the retractions are due to proven misconduct. A few of the Nature research journals have also had to retract papers in recent years, but the combined data do no more than hint at a trend. A broader survey revealed even smaller proportions: in 2009, Times Higher Education commissioned a survey by Thomson Reuters that counted 95 retractions among 1.4 million papers published in 2008. But the same survey showed that, since 1990 — during which time the number of published papers doubled — the proportion of retractions increased tenfold (see http://go.nature.com/vphd17).
So why the increase? More awareness of misconduct by journals and the community, an increased ability to create and to detect unduly manipulated images, and greater willingness by journals to publish retractions must account for some of this rise. One can also speculate about the increasing difficulty for senior researchers of keeping track of the detail of what is happening in their labs. This is of concern not just because of the rare instances of misconduct, but also because of the risk of sloppiness and of errors not being caught. Any lab with more than ten researchers may need to take special measures if a principal investigator is to be able to assure the quality of junior members' work.
The need for quality assurance and the difficulties of doing it are exacerbated when new techniques are rapidly taken up within what is often a highly competitive community. And past episodes have shown the risk that collaborating scientists — especially those who are geographically distant — may fail to check data from other labs for which, as co-authors, they are ultimately responsible.
If we at Nature are alerted to possibly false results by somebody who was not an author of the original paper, we will investigate. This is true even if the allegations are anonymous — some important retractions in the literature have arisen from anonymous whistle-blowing. However, we are well aware of the great damage that can be done to co-authors as a result of such allegations, especially when the claims turn out to be false. Such was the case with a recent e-mail alert widely distributed by a group calling itself Stem Cell Watch (see Nature 467, 1020; 2010) — an action that we deplore.
For our part, we are sensitive to such concerns and will bear in mind the need to protect the interests of authors until our obligation to the community at large becomes clear. But then we will publish a retraction promptly, and link to it prominently from the original papers. We will also list the retraction on our press release if the original paper was itself highlighted to the media.
Ultimately, it comes down to the researchers — those most affected by the acts — to remain observant and diligent in pursuing their concerns wherever they lead, and where necessary, to correct the literature promptly. Too often, such conscientious behaviour is not rewarded as it should be.

October 30, 2010

Plagiarism and self-plagiarism: What every author should know

Miguel Roig
The scientific community is greatly concerned about the problem of plagiarism and self-plagiarism. In this paper I explore these two transgressions and their various manifestations with a focus on the challenges faced by authors with limited English proficiency.

Introduction
Evidence indicates that plagiarism amongst biomedical students is fairly common (1-3). Because the offenses in question usually involve academic assignments, they are typically classified as instances of academic dishonesty. Such transgressions can result in negative consequences for the student, ranging from failure on the assignment to expulsion from the university. When plagiarism occurs in the context of conducting scientific research, whether perpetrated by students or by professionals, it rises to the level of scientific misconduct, a much more serious offense.
Regrettably, a consensus is now emerging that plagiarism in the biomedical sciences has become a matter of great concern. Consider the evidence: a search of the PubMed database for articles on plagiarism (4) yields over 700 entries (as of this writing), more than half of them representing articles published within the last decade. Also, journals are increasingly expanding their instructions to authors to include guidelines on plagiarism and related matters of authorship. Yet perhaps the most alarming development has been the availability of text-similarity software, such as eTBLAST, that allows users to search for plagiarism in journal articles (5). Given these developments, it is not surprising that a recently published survey shows plagiarism to be one of the areas of greatest concern for biomedical journal editors (6).
The causes underlying many cases of plagiarism are believed to be the same as those associated with the other two major forms of scientific misconduct, fabrication and falsification. For example, one major factor believed to operate is the pressure to publish. The reality is that for many working scientists, the number of published papers authored continues to be one of the primary means by which research productivity is measured. Moreover, the quality of a publication is another important factor that comes into play, for the most desirable outcome is for papers to appear in the so-called high-impact journals. Of course, carrying out scientific research can be very rewarding intrinsically, and the joy we experience when we are engaged in this noble process is probably the very reason why many of us chose science as a career. However, as we all know, good science requires a lot of patience, hard work, and a good dose of creative, methodological skill. In addition, scientific research has become very costly in terms of human and laboratory resources. Our tenacity and dedication will usually pay off, as when we are able to obtain data that verify our hypotheses. But as every scientist knows, such a happy ending does not always occur. For example, what at first might look like a promising avenue of investigation can sometimes end up being a dead end. In a worst-case scenario, months of toiling in the laboratory may yield only a limited payout, as when results turn out to be marginal or null and, therefore, unlikely to be publishable. Or a subtle mistake early in the experiment can render useless months of otherwise meticulous laboratory work. These are some of the many scenarios that are thought to lead otherwise well-meaning scientists to tamper with their data.
Because plagiarism and self-plagiarism are thought to be far more common than fabrication and falsification, it is important to explore these transgressions in some detail. The reader should note that these offenses can sometimes have legal implications, as when they violate copyright law. However, because such cases rarely, if ever, reach the legal stage when they involve scholarly journals, I will confine my treatment of these malpractices to the ethical domain rather than the legal one. My hope is that, by raising readers’ awareness of these offenses, their occurrence can be prevented.
Plagiarism 
Writing journal articles is seldom an easy task and many of us do not exactly enjoy this part of the scientific process. To make matters worse, we often operate with the expectation that our manuscript will be returned with a myriad of criticisms and suggestions for improvement that are sometimes viewed by us as arbitrary and capricious. Although this feedback almost always results in an improved product, I suspect that most authors dread this aspect of the process and few of them genuinely welcome such efforts. In the end, however, most of us recognize that the peer review system is an integral part of the cycle of science.
Good writing is seldom easy to produce, and effective scientific prose can take time and much mental effort to generate even for experienced authors. Thus, the temptation to look for short-cuts can arise, particularly if the author is experiencing some form of ‘writer’s block’, a temporary inability to become inspired and produce new work. In these situations, the urge to ‘borrow’ others’ well-crafted prose may be irresistible. But, one might ask, what is the harm in such borrowing? After all, taking a couple of lines of text does not, in any way, affect the integrity of the data, and it is the latter that is most important (7). Besides, as an ethical offense in the sciences, plagiarism of text is arguably far less serious than plagiarism of ideas or plagiarism of data (8). Moreover, since there is no universally agreed-upon operational definition of plagiarism in terms of how many consecutive words can be copied without attribution, who is to say that it is wrong to appropriate a well-written sentence or two that elegantly conveys a very complex process or phenomenon? Other considerations seem to even favor such minor ‘borrowing’. For example, when describing a highly technical methodology and/or procedure commonly used by our peers, there is some risk that even a small change in the wording could result in subtle misinterpretations of the methods or procedure, and that possibility is highly undesirable (9). Of course, the latter rationale is a poor excuse for the copy-pasting of large segments of methodology sections. Besides, in the quest for conciseness, these sections sometimes lack important details and, therefore, can often benefit from rewriting to enhance their clarity (10). Unfortunately, there are those whose writing style is such that they take a liberal approach to using others’ text as their own (11).
But, in the current climate of responsible research conduct, such writing practices now run a greater risk of being noticed and, at best, they will be judged with suspicion, for they certainly do not represent high standards of scholarship.
It is totally understandable when the main reason given for using others’ text is lack of language/writing proficiency (12). However, as much as we can empathize with such authors, the scientific community could not function properly with different scholarship criteria depending on one’s level of language proficiency (10). The reality of the situation is that English has become the lingua franca of science, and most, if not all, of the high-impact journals are published in English. Even some of the journals published in non-English-speaking nations appear in English (e.g., Biochemia Medica), and the expectation is for scientists from these nations to also publish in English. This situation presents a unique challenge for the Limited English Proficiency (LEP) author, but even some of these authors recognize that it is a challenge that must be met (13). English is not an easy language to learn, especially for those whose native language is based on a different alphabet system. Moreover, while good skills in English are necessary for writing journal articles, they are not sufficient to do the job. To write effective scientific prose, not only do we need to be proficient in the language, we also need to have a thorough grasp of the technical language and the unique expressions and phraseology associated with the particular knowledge domain in question. In other words, we need to be able to understand what we are reading and also to convey that information using our own words and domain-consistent expressions; our own ‘voice’. In fact, evidence that I have collected in the past suggests that text readability is a strong predictor of misappropriation not only by students (14) but also by professors (15). Novice researchers and especially LEP authors will often encounter these types of reading/writing difficulties when dealing with unfamiliar technical literature in their disciplines.
Therefore, I strongly believe that these are the very factors that are behind a significant amount of plagiarism.
Does ‘borrowing’ a few sentences here and there (i.e., patchwriting) rise to the level of plagiarism? I suppose that it depends on the circumstances, the number of sentences that have been misappropriated and on who is doing the judging. However, the fact remains that passing as one’s own the work of others, even if it is a small amount, is consistent with any definition of plagiarism. In addition, such practices are now more likely to be discovered given the availability of software programs designed to detect plagiarism. For example, consider the recent case in which a paper was retracted from a journal because merely two paragraphs from its introduction were found to be identical to paragraphs appearing in an earlier published paper by a different author (16). The message is clear: Using textual material without proper attribution is plagiarism, even when it is done in relatively small amounts.
Self-plagiarism 
Whereas plagiarism involves the presentation of others’ ideas, text, data, images, etc., as the products of our own creation, self-plagiarism occurs when we decide to reuse, in whole or in part, our own previously disseminated ideas, text, data, etc., without any indication of their prior dissemination. Perhaps the most commonly known form of self-plagiarism is duplicate publication, but other forms exist and include redundant publication, augmented publication (also known as meat extender), and segmented publication (also known as salami, piecemeal, or fragmented publication). The key feature in all forms of self-plagiarism is the presence of significant overlap between publications and, most importantly, the absence of a clear indication as to the relationship between the various duplicates or related papers. Because of the latter, the word ‘covert’ should always be added to these designations (e.g., covert duplicate publication, covert redundant publication, etc.). As with traditional forms of plagiarism, a very likely cause of much self-plagiarism appears to be authors’ desire to add publications to their vita (17).
In a typical duplicate publication, authors of a previously published paper submit roughly the same manuscript to a different journal. The second submission may have a slightly different title, a different order of authorship, perhaps minor changes to the text of the manuscript, but the data and statistical analyses are largely the same. These instances of duplication are typically easy to spot because the identical text, formatting, data tables, etc., are usually recognized by the astute reader who is familiar with that specific area of research. A more harmful version of duplicate publication occurs when the authors make an effort to conceal the fact that the same data are being republished more than once. In these cases the perpetrator makes a concerted effort to make significant textual changes to various components of the paper, such as the literature review, discussion, etc., and they may do so by, for example, adding and/or deleting certain references. Furthermore, the formatting of tables of data and of graphs may also be changed, thus giving the appearance of a different set of data and a distinct paper. Again, the key component of this malpractice is that the new paper makes no reference to the previous publication, or if it cites the previous paper, it does so in such an ambiguous manner that the reader fails to recognize the exact relationship between the two papers, thus the term covert duplicate.
There can be various other permutations of this basic approach and von Elm and his colleagues have described a number of them (18). In one version, for example, authors of a previously published paper may reuse its data and carry out a different set of statistical analyses. The results of these analyses are then included in a paper whose title, abstract and portions of the introduction and discussion may now be somewhat different in the context of these new analyses. In another version, data from two or more previously published papers are presented together as new with perhaps additional statistical analyses included. In instances of augmented publication, or meat extender as this type of redundancy is sometimes called, authors simply add additional observations or data points to a previously published data set. They then reanalyze the augmented data set, and publish a paper based on the new results. Again, it is important to emphasize that such practices may be acceptable if the author provides the editor with a defensible rationale for his actions and makes it clear to the reader that the data are derived, in whole or in part, from a previous publication. However, because most journals only accept original research, such a clarification often renders the paper unsuitable for publication. Again, because publication of the new paper is the primary aim for the unscrupulous author, this fact tends to remain hidden from the editor and the reader.
Segmented or salami publication is a distinct publication practice that may, in theory, contain little if any self-plagiarized text and/or data. However, even in the absence of any text or data reuse, the practice is nevertheless problematic and actively discouraged in the sciences. A typical case involves a complex experiment/study (i.e., the whole salami) that yields multiple measures or sets of measures from the same study sample. Rather than publishing the results of these various data sets together in a single publication, the investigators analyze and publish each data set separately (i.e., salami slices). In this way the single experiment can yield two or more articles, thereby enhancing the investigators’ publication list. As in other forms of covert redundancy and covert duplication, this practice is considered unethical if each salami slice (i.e., segmented publication) fails to reveal the fact that its data are derived from the same experiment as data from other related publications that were part of the same salami.
There can be legitimate reasons for the various forms of redundancy. For example, with respect to salami publication, it is not uncommon in longitudinal-type studies, such as the Framingham Heart Study (19), for different sets of authors to publish observations from the same longitudinal sample in separate journal articles. This is completely acceptable and even desirable when the interval of time between observations made from the sample spans years. Likewise, for other types of experiments there may be good reasons to report different results arising from a single experiment in two or three different journals, as the various observations may be of interest to different audiences. However, authors must always inform readers about the exact origin of their data and how their data are related to other published papers. Even duplicate publications may be totally acceptable, as when a paper first appears in one language and is then translated into another language and published in a different journal or edited volume. But, again, the second publication must always provide a clear indication as to its association with the earlier published version.
The major scientific organizations (e.g., Committee on Publication Ethics, World Association of Medical Editors) and even individual journals offer relevant guidelines to avert instances of self-plagiarism. For example, the ‘Uniform Requirements for Manuscripts Submitted to Biomedical Journals’ (20), published by the International Committee of Medical Journal Editors, calls on authors, upon submission of a manuscript, to inform the editor of any related published papers or manuscripts that have been prepared for other journals.
Obviously the primary issue in self-plagiarism (i.e., duplicate, redundant, and augmented publication) concerns the covert reuse of already published data that are being portrayed as new data. In the case of salami publication the main concern is the presentation of data sets that are portrayed as having been independently derived when in fact they come from a study from which other related data were collected. The problem with such misleading portrayals of data is that they are likely to mislead others by overestimating or, depending on the type of problem being addressed, underestimating a particular effect or process. For example, let’s assume that there exist various covert duplicates that show a certain drug to be highly effective as a cure for a disease. Someone conducting a meta-analysis on the efficacy of the drug may be unaware that some of the studies found are actually cleverly disguised covert duplicates of existing ones. The inclusion of these duplicates results in an inflated effect size, which in turn distorts researchers’ understanding of the true effectiveness of the drug (21).
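The distortion described above can be made concrete with a small numerical sketch. All figures here are invented for illustration; the pooling is a simple fixed-effect (inverse-variance) average, not any particular published meta-analysis:

```python
# Hypothetical illustration: a covert duplicate inflates a pooled effect estimate.
# Fixed-effect (inverse-variance) meta-analysis; all numbers are invented.

def pooled_effect(studies):
    """studies: list of (effect_size, variance) pairs; returns the weighted mean."""
    weights = [1.0 / var for _, var in studies]
    total = sum(w * es for (es, _), w in zip(studies, weights))
    return total / sum(weights)

# Three genuinely independent trials of a drug (effect size, variance):
independent = [(0.20, 0.04), (0.80, 0.04), (0.30, 0.04)]
print(round(pooled_effect(independent), 3))  # → 0.433

# The strongly positive trial is republished in disguise and counted twice:
with_duplicate = independent + [(0.80, 0.04)]
print(round(pooled_effect(with_duplicate), 3))  # → 0.525, an inflated estimate
```

Because the duplicate carries full weight as if it were new evidence, the pooled estimate drifts toward the republished result; with a duplicated null or negative study, the bias would run the other way.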
One last form of self-plagiarism that must be discussed, and one that I believe to be most strongly related to language proficiency, is what some refer to as same-authored text recycling. A typical instance of this practice occurs when authors take large portions of text that they have already published in one or more journal articles and reuse them in a new publication (9,22). For the native speaker/writer, the practice represents, at best, a case of intellectual laziness (23) or poor scholarly etiquette and is certainly discouraged by some journals (24). Text recycling, when practiced out of necessity by LEP authors, certainly does not merit such negative characterizations. However, it is still deemed a problematic practice.
Why should we be discouraged from reusing textual material that we ourselves have produced? Here are some reasons. I believe that there is an underlying assumption on the part of the author who is engaged in these practices that the previously written material is so well crafted and clear that it cannot benefit from improvement (10,25). In my experience as a reader of primary literature and as a journal reviewer, I often find that assumption to be totally unwarranted. In addition, merely relying on copy-pasting to create a methodology section runs the risk of failing to include or exclude crucial details unique to the new experiment being described. There is at least one editor who cautions potential authors against the mere recycling of previously published methods sections without modification (26), and already one study has uncovered evidence of important lapses when using copy-pasting techniques with medical records (27). Thus, relying on mere copying and pasting of text can be highly problematic when used in scientific articles. Equally important, perhaps, is the fact that text recycling does not constitute scholarly excellence, for it violates a basic assumption of the implicit reader-writer contract. Accordingly, the reader operates under the assumption that 1) the author(s) produced the work, 2) any text, ideas, etc., taken from other available sources, even if produced by the same author, are identified with standard scholarly conventions, such as citations and quotations, and 3) the ideas, data, etc., presented are accurate (28).
In sum, plagiarism and self-plagiarism can manifest themselves in a variety of forms. Depending on the circumstances, these transgressions can merit labels ranging from poor or sloppy scholarship to scientific misconduct. Some LEP authors may be particularly vulnerable to excessive ‘borrowing’ from others’ work, as well as from their own previously published papers. While their situation is entirely understandable, they should keep in mind that most of us in the scientific community regard science as the highest form of scholarship. As such, we expect nothing but the highest standards of practice from those who are given the privilege of engaging in this most noble of activities.

References:
1. Ryan G, Bonanno H, Krass I, Scouller K, Smith L. Undergraduate and postgraduate pharmacy students’ perceptions of plagiarism and academic honesty. Am J Pharm Educ 2009;73:105.
2. Rennie SC, Crosby JR. Are “tomorrow’s doctors” honest? Questionnaire study exploring medical students’ attitudes and reported behaviour on academic misconduct. BMJ 2001;322:274-5.
3. Bilić-Zulle L, Frković V, Turk T, Petrovečki M. Prevalence of plagiarism among medical students. Croat Med J 2005;46:126-31.
4. Aronson JK. Plagiarism - please don’t copy. Br J Clin Pharmacol 2007;64:403-5.
5. Errami M, Garner H. A tale of two citations. Nature 2008;451:397-99.
6. Wager E, Fiack S, Graf C, Robinson A, Rowlands I. Science journal editors’ views on publication ethics: results of an international survey. J Med Ethics 2009;35:348-53.
7. Yilmaz I. Plagiarism? No, we’re just borrowing better English. Nature 2007;449:658.
8. Bouville M. Plagiarism: words and ideas. Sci Eng Ethics 2008;14:311-22.
9. Roig M. Re-using text from one’s own previously published papers: an exploratory study of potential self-plagiarism. Psychol Rep 2005;97:43-9.
10. Roig M. Plagiarism: Consider the context (letter to the editor). Science 2009;325:813-4.
11. Julliard K. Perceptions of plagiarism in the use of other authors’ language. Fam Med 1994;26:356-60.
12. Vasconcelos S, Leta J, Costa L, Pinto A, Sorenson MM. Discussing plagiarism in Latin American science. EMBO reports 2009;10:677-82.
13. Afifi A. Plagiarism is not fair play. Lancet 2007;369:1428.
14. Roig M. When college students’ attempts at paraphrasing become instances of potential plagiarism. Psychol Rep 1999;84:973-82.
15. Roig M. Plagiarism and paraphrasing criteria of college and university professors. Ethics Behav 2001;11:307-23.
16. Science Insider. From the Science Policy Blog. Science 2009;325:527.
17. Yank V, Barnes D. Consensus and contention regarding redundant publications in clinical research: cross-sectional survey of editors and authors. J Med Ethics 2003;29:109-14.
18. von Elm E, Poglia G, Walder B, Tramèr MR. Different patterns of duplicate publication: an analysis of articles used in systematic reviews. JAMA 2004;291:974-80.
19. Framingham Heart Study. Available at: http://www.framinghamheartstudy.org/. Accessed April 2nd 2010.
20. Redundant publication. Uniform Requirements For Manuscripts Submitted To Biomedical Journals: Writing and Editing For Biomedical Publication. Updated October 2007. Available at: http://www.icmje.org/. Accessed March 6th 2010.
21. Tramèr M, Reynolds DJM, Moore RA, McQuay HJ. Impact of covert duplicate publication on meta-analysis: a case study. BMJ 1997;315:635-40.
22. Bretag T, Carapiet S. A preliminary study to identify the extent of self-plagiarism in Australian academic research [Electronic version]. Plagiary 2007;2:92-103. Available at: http://hdl.handle.net/2027/spo.5240451.0002.010. Accessed January 5th 2010.
23. Editorial. Self-plagiarism: unintentional, harmless, or fraud? Lancet 2009;374:664.
24. Griffin GC. Don’t plagiarize - even yourself! Postgrad Med 1991;89:15-6.
25. Roig M. The debate on self-plagiarism: inquisitional science or high standards of scholarship. J Cogn Behav Psychoter 2008;8:245-58.
26. Biros MH. Advice to Authors: Getting Published in Academic Emergency Medicine. Available at: http://www.saem.org/inform/aempub.htm. Accessed March 6th 2003.
27. Hammond KW, Helbig ST, Benson CC, Brathwaite-Sketoe BM. Are electronic records trustworthy? Observations on copying, pasting and duplication. AMIA Annu Symp Proc 2003:269-73.
28. Roig M. Avoiding plagiarism, self-plagiarism, and other questionable writing practices: A guide to ethical writing (U.S. Department of Health and Human Services, Office of Research Integrity). Available at: http://ori.hhs.gov/education/products/plagiarism/. Accessed January 5th 2010.

October 8, 2010

Rampant Fraud Threat to China’s Brisk Ascent - The New York Times

By ANDREW JACOBS
BEIJING — No one disputes Zhang Wuben’s talents as a salesman. Through television shows, DVDs and a best-selling book, he convinced millions of people that raw eggplant and immense quantities of mung beans could cure lupus, diabetes, depression and cancer.
For $450, seriously ill patients could buy a 10-minute consultation and a prescription — except Mr. Zhang, one of the most popular practitioners of traditional Chinese medicine, was booked through 2012.
But when the price of mung beans skyrocketed this spring, Chinese journalists began digging deeper. They learned that contrary to his claims, Mr. Zhang, 47, was not from a long line of doctors (his father was a weaver). Nor did he earn a degree from Beijing Medical University (his only formal education, it turned out, was the brief correspondence course he took after losing his job at a textile mill).
The exposure of Mr. Zhang’s faked credentials provoked a fresh round of hand-wringing over what many scholars and Chinese complain are the dishonest practices that permeate society, including students who cheat on college entrance exams, scholars who promote fake or unoriginal research, and dairy companies that sell poisoned milk to infants.
The most recent string of revelations has been bracing. After a plane crash in August killed 42 people in northeast China, officials discovered that 100 pilots who worked for the airline’s parent company had falsified their flying histories. Then there was the padded résumé of Tang Jun, the millionaire former head of Microsoft China and something of a national hero, who falsely claimed to have received a doctorate from the California Institute of Technology.
Few countries are immune to high-profile frauds. Illegal doping in sports and malfeasance on Wall Street are running scandals in the United States. But in China, fakery in one area in particular — education and scientific research — is pervasive enough that many here worry it could make it harder for the country to climb the next rung on the economic ladder.
A Lack of Integrity
China devotes significant resources to building a world-class education system and pioneering research in competitive industries and sciences, and has had notable successes in network computing, clean energy, and military technology. But a lack of integrity among researchers is hindering China’s potential and harming collaboration between Chinese scholars and their international counterparts, scholars in China and abroad say.
“If we don’t change our ways, we will be excluded from the global academic community,” said Zhang Ming, a professor of international relations at Renmin University in Beijing. “We need to focus on seeking truth, not serving the agenda of some bureaucrat or satisfying the desire for personal profit.”
Pressure on scholars by administrators of state-run universities to earn journal citations — a measure of innovation — has produced a deluge of plagiarized or fabricated research. In December, a British journal that specializes in crystal formations announced that it was withdrawing more than 70 papers by Chinese authors whose research was of questionable originality or rigor.
In an editorial published earlier this year, The Lancet, the British medical journal, warned that faked or plagiarized research posed a threat to President Hu Jintao’s vow to make China a “research superpower” by 2020.
“Clearly, China’s government needs to take this episode as a cue to reinvigorate standards for teaching research ethics and for the conduct of the research itself,” the editorial said.
Last month a collection of scientific journals published by Zhejiang University in Hangzhou reignited the firestorm by publicizing results from a 20-month experiment with software that detects plagiarism. The software, called CrossCheck, rejected nearly a third of all submissions on suspicion that the content was pirated from previously published research. In some cases, more than 80 percent of a paper’s content was deemed unoriginal.
The journals’ editor, Zhang Yuehong, emphasized that not all the flawed papers originated in China, although she declined to reveal the breakdown of submissions. “Some were from South Korea, India and Iran,” she said.
The journals, which specialize in medicine, physics, engineering and computer science, were the first in China to use the software. For the moment they are the only ones to do so, Ms. Zhang said.
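CrossCheck's internal matching algorithm is not described in the article, but the general idea behind text-overlap detection can be sketched with a simple word n-gram comparison. This is a toy illustration of the technique only, not CrossCheck's actual method, and the function names are invented for the example:

```python
def ngrams(text, n=3):
    # Lowercase word shingles; production systems also normalize
    # punctuation, stem words, and index shingles for fast lookup.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(doc_a, doc_b, n=3):
    """Jaccard similarity of word n-gram sets: 0.0 (disjoint) to 1.0 (identical)."""
    a, b = ngrams(doc_a, n), ngrams(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Identical passages score 1.0; unrelated text scores near 0.
same = "the quick brown fox jumps over the lazy dog"
print(overlap_score(same, same))  # 1.0
```

A submission whose score against any previously published document exceeds some threshold would then be flagged for human review; the "more than 80 percent" figure cited above corresponds to this kind of overlap measure.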
Plagiarism and Fakery
Her findings are not surprising if one considers the results of a recent government study in which a third of the 6,000 scientists at six of the nation’s top institutions admitted they had engaged in plagiarism or the outright fabrication of research data. In another study of 32,000 scientists last summer by the China Association for Science and Technology, more than 55 percent said they knew someone guilty of academic fraud.
Fang Shimin, a muckraking writer who has become a well-known advocate for academic integrity, said the problem started with the state-run university system, where politically appointed bureaucrats have little expertise in the fields they oversee. Because competition for grants, housing perks and career advancement is so intense, officials base their decisions on the number of papers published.
“Even fake papers count because nobody actually reads them,” said Mr. Fang, who is more widely known by his pen name, Fang Zhouzi, and whose Web site, New Threads, has exposed more than 900 instances of fakery, some involving university presidents and nationally lionized researchers.
When plagiarism is exposed, colleagues and school leaders often close ranks around the accused. Mr. Fang said this was partly because preserving relationships trumped protecting the reputation of the institution. But the other reason, he said, is more sobering: Few academics are clean enough to point a finger at others. One result is that plagiarizers often go unpunished, which only encourages more of it, said Zeng Guoping, director of the Institute of Science Technology and Society at Tsinghua University in Beijing, which helped run the survey of 6,000 academics.
He cited the case of Chen Jin, a computer scientist who was once celebrated for having invented a sophisticated microprocessor but who, it turned out, had taken a chip made by Motorola, scratched out its name, and claimed it as his own. After Mr. Chen was showered with government largess and accolades, the exposure in 2006 was an embarrassment for the scientific establishment that backed him.
But even though Mr. Chen lost his university post, he was never prosecuted. “When people see the accused still driving their flashy cars, it sends the wrong message,” Mr. Zeng said.
The problem is not confined to the realm of science. In fact many educators say the culture of cheating takes root in high school, where the competition for slots in the country’s best colleges is unrelenting and high marks on standardized tests are the most important criterion for admission. Ghost-written essays and test questions can be bought. So, too, can a “hired gun” test taker who will assume the student’s identity for the grueling two-day college entrance exam.
Then there are the gadgets — wristwatches and pens embedded with tiny cameras — that transmit signals to collaborators on the outside who then relay back the correct answers. Even if such products are illegal, students spent $150 million last year on Internet essays and high-tech subterfuge, a fivefold increase over 2007, according to a Wuhan University study, which identified 800 Web sites offering such illicit services.
Academic deceit is not limited to high school students. In July, Centenary College, a New Jersey institution with satellite branches in China and Taiwan, shuttered its business schools in Shanghai, Beijing and Taipei after finding rampant cheating among students. Although school administrators declined to discuss the nature of the misconduct, it was serious enough to withhold degrees from each of the programs’ 400 students. Given a chance to receive their M.B.A.’s by taking another exam, all but two declined, school officials said.
Nonchalant Cheating
Ask any Chinese student about academic skullduggery and the response is startlingly nonchalant. Arthur Lu, an engineering student who last spring graduated from Tsinghua University, considered a plum of the country’s college system, said it was common for students to swap test answers or plagiarize essays from one another. “Perhaps it’s a cultural difference but there is nothing bad or embarrassing about it,” said Mr. Lu, who started this semester on a master’s degree at Stanford University. “It’s not that students can’t do the work. They just see it as a way of saving time.”
The Chinese government has vowed to address the problem. Editorials in the state-run press frequently condemn plagiarism and last month, Liu Yandong, a powerful Politburo member who oversees Chinese publications, vowed to close some of the 5,000 academic journals whose sole existence, many scholars say, is to provide an outlet for doctoral students and professors eager to inflate their publishing credentials.
Fang Shimin and another crusading journalist, Fang Xuanchang, have heard the vows and threats before. In 2004 and again in 2006, the Ministry of Education announced antifraud campaigns but the two bodies they established to tackle the problem have yet to mete out any punishments.
In recent years, both journalists have taken on Xiao Chuanguo, a urologist who invented a surgical procedure aimed at restoring bladder function in children with spina bifida, a congenital deformation of the spinal column that can lead to incontinence, and when untreated, kidney failure and death.
In a series of investigative articles and blog postings, the two men uncovered discrepancies in Dr. Xiao’s Web site, including claims that he had published 26 articles in English-language journals (they could only find four) and that he had won an achievement award from the American Urological Association (the award was for an essay he wrote).
But even more troubling, they said, were assertions that his surgery had an 85 percent success rate. Of more than 100 patients interviewed, they said none reported having been cured of incontinence, with nearly 40 percent saying their health had worsened after the procedure, which involved rerouting a leg nerve to the bladder. (In early trials, doctors in the United States who have done the surgery have found the results to be far more promising.)
Wherever the truth may have been, Dr. Xiao was incensed. He filed a string of libel suits against Fang Shimin and told anyone who would listen that revenge would be his.
This summer both men were brutally attacked on the street in Beijing — Fang Xuanchang by thugs with an iron bar and Fang Shimin by two men wielding pepper spray and a hammer.
When the police arrested Dr. Xiao on Sept. 21, he quickly confessed to hiring the men to carry out the attack, according to the police report. His reason, he said, was vengeance for the revelations he blames for blocking his appointment to the prestigious Chinese Academy of Sciences.
Despite his confession, Dr. Xiao’s employer, Huazhong University of Science and Technology, appeared unwilling to take any action against him. In the statement they released, administrators said they were shocked by news of his arrest but said they would await the outcome of judicial procedures before severing their ties to him.
Li Bibo and Zhang Jing contributed research.
This article has been revised to reflect the following correction:
Correction: October 7, 2010
An earlier version of this article incorrectly named a member of the Politburo. She is Liu Yandong, not Liu Dongdong. An earlier version of this article also gave an incorrect name for a school where Xiao Chuanguo's appointment was blocked. Dr Xiao was blocked from an appointment to the Chinese Academy of Sciences, not the Chinese Academy of Social Sciences.

October 7, 2010

Opinion: How to prevent fraud - The Scientist - Magazine of the Life Sciences

Suresh Radhakrishnan
Thoughts on how to catch scientific misconduct early from a researcher recently convicted of the offense
Misconduct in science is increasing at an alarming rate and needs to be addressed. Constantly evolving technology, the arrival of online-only journals, and other significant scientific developments warrant a reconsideration of the existing procedures for preventing fraud, as well as the development of novel verification techniques. Here, I propose four approaches to nip this problem in the bud and limit the repercussions of scientific misconduct.

I: Funding for all ages
The number of PhDs in biology has increased exponentially over the past several years. Concurrently, the average age at which principal investigators (PIs) obtain their first R01 research grant from the National Institutes of Health (NIH) has been rising, likely because all PIs, regardless of stature, are competing for the same funding source. But established investigators have a clear advantage. Indeed, the NIH has identified this issue, and just last year instituted a policy to give Early Stage Investigators (applicants with less than 10 years of experience) special consideration during grant review.
Despite this distinct advantage provided to junior PIs, no such effort has been made for mid-stage investigators, who are at a similar disadvantage to more senior researchers. Furthermore, even for junior PIs, I believe the NIH's effort is offset by the dramatic rise in applicants in this pool and the lack of a parallel increase in the total number of R01 grants. This increasingly competitive funding environment can result in undue pressure on less established PIs to publish in high impact journals, which can encourage falsification. A more effective way to counter the inherent unfairness in the funding process might be to divide funding into three groups according to career stage, such that PIs will be competing for funding against other scientists with similar experience levels. Such leveling of the competition could help reduce the pressure on younger PIs to falsify data.

II: Third party data verification
Experimental design, performance, and analysis are becoming more sophisticated, leading to an increasing pace of scientific discovery. However, those achievements are not matched by advancements in data-verification processes. Misconduct investigations take a long time to conclude, which limits the effectiveness of agencies such as the Research Integrity Office at individual institutions and the Office of Research Integrity at the NIH. Furthermore, irrevocable damage has often already been done by the time a formal investigation begins.
Invoking an independent agency for data verification during the preliminary stages of a project could aid in generating stronger manuscripts, grant applications, and clinical trials while minimizing the occurrence of research misconduct. I propose that a third party facility, funded by groups such as the NIH, could provide such a service in an efficient and effective manner. Reagents could be submitted to the agency in a blinded fashion, and time spent on this process can be minimized by encouraging simplicity in experimental designs. For more complex experiments, such as those involving special animal models and biophysical studies, laboratories approved by their institutional Research Integrity Office can provide support, either by verifying the data themselves, or hosting a scientist from the central facility.
To ensure the integrity of funded research, funding agencies should insist upon the verification of preliminary data included in the grant to be completed before funding but after positive review. Journals can similarly choose to conditionally accept manuscripts prior to data verification, but withhold publication until the results have been validated.

III: Strong postdoctoral forums
Despite the rise in NIH applicants, the number of postdoctoral positions has not increased significantly over the past decade. As a result, the supply-to-demand ratio of postdoctoral fellows is skewed against the fellows, making them dispensable to a laboratory. This can lead to self-inflicted pressure on fellows to deliver data that helps the lab obtain funding, as well as hesitancy to report any suspected unethical actions of their PIs.
To address these and other issues, the National Postdoctoral Association (NPA) was founded in 2003. Despite its strong commitment to the welfare of fellows, consistently addressing grassroots issues at an institutional level can be a major challenge. Moreover, awareness of the NPA among new fellows arriving at an institution is very low. (I discovered the NPA's existence just last year, despite having been a fellow for the past decade.) Stronger institutional postdoc associations could directly increase new fellows' awareness of the NPA and provide additional support within the institution.
Socialization events hosted by institutional postdoc organizations, for example, can help relieve postdocs of prevailing undue stressors, and promote laboratory discussions, resulting in the prevention of data falsification either by the fellow (by increasing confidence and awareness of ethical science) or by the PI (by creating a whistleblower from an otherwise reluctant fellow). Furthermore, postdoc organizations could play a larger role in mediating cases of misconduct, granting fellows anonymity when they report such an occurrence, and relaying that information to the institutional Research Integrity Office for appropriate measures.

IV: Objective manuscript review
As the success of scientists depends largely on the number of manuscripts they publish, it can be extremely frustrating to have one's journal submissions rejected, particularly when the rejection does not appear to be scientifically justified -- an occurrence that is unfortunately not uncommon with the current peer review system. This, along with the enormous strain on researchers to publish the data rapidly, can potentially lead to compromises in the integrity of their research.
Recently, some journals have adopted commendable novel approaches, including revealing the names of the reviewers or blinding the names of the authors, to increase objectivity in scientific publishing (see The Scientist's recent feature for a review). These approaches minimize prejudice while encouraging constructive criticism, which should serve to increase the quality of the work and reduce the occurrence of research misconduct.

Suresh Radhakrishnan worked at the Mayo Clinic in Rochester, Minn., as a senior research associate until he was fired for misconduct in May 2010.

October 2, 2010

Understanding Publication Ethics

Geraldine S. Pearson* 
>>> A recent survey of 524 editors of Wiley-Blackwell science journals (including nursing journals) asked about the severity and frequency of ethical issues, editor confidence in handling these, and awareness of COPE guidelines (Wager, Fiack, Graf, Robinson, & Rowlands, 2009). Nearly half of the queried editors responded to the survey and, interestingly, most believed that misconduct occurred only rarely in their journals. Are editors too trusting in their belief that submissions will be ethical and free of publication misconduct? Is there denial that this issue actually occurs or “never in my journal”? Is it easier to assume that journal submissions will be free of ethical issues? The answers to these questions are unclear. I can share that at the yearly International Academy of Nursing Editors meetings, the issue of ethics and quality in publications is frequently and passionately discussed by attendees. I also know how painful it is to deal with an ethical issue around a journal submission.>>>

*Pearson, G. S. (2010), Understanding Publication Ethics. Perspectives in Psychiatric Care, 46: 253–254. doi: 10.1111/j.1744-6163.2010.00276.x

September 28, 2010

Singapore Statement Urges Global Consensus on Research Integrity

Scientists, scientific journals, and research institutions must adhere to an international set of ethical standards and consider the social implications of their work, says a new statement from 2nd World Conference on Research Integrity, co-sponsored by AAAS.
The Singapore Statement on Research Integrity, released 22 September, acknowledges different cultural and national standards for scientific research. But, it concludes “there are also principles and professional responsibilities that are fundamental to the integrity of research wherever it is undertaken.”
The succinct one-page document lists 14 responsibilities for researchers. Individual scientists should share research findings openly and promptly, disclose conflicts of interest, and take responsibility for the “trustworthiness” of their own work, the statement says. Institutions should create policies and work environments that encourage research integrity and institutions and journals should have clear procedures for addressing research misconduct.
The statement also notes four principles that underlie the statement’s responsibilities:
  • honesty in all aspects of research;
  • accountability in the conduct of research;
  • professional courtesy and fairness in working with others; and
  • good stewardship of research on behalf of others.
“The globalization of research requires the globalization of basic understandings of responsible behavior in research,” the members of the Singapore Statement Drafting Committee wrote in a news release accompanying the document. “The Singapore Statement is intended to encourage and further the development of these understandings.”
Policymakers, university leaders, publishers, and government ministers first drafted the statement at the conference, held 21-24 July in Singapore. The conference was supported by science associations from China, Japan, South Africa, Saudi Arabia, Australia, Korea, the United Kingdom, and the United States. More than 300 delegates from 51 countries contributed to the final statement.
Three officials represented AAAS at the conference: Mark S. Frankel, director of the AAAS Scientific Freedom, Responsibility and Law Program; Gerald Epstein, director of the AAAS Center for Science, Technology and Security Policy; and AAAS Senior Program Associate Deborah Runkle. Frankel and Epstein spoke to the conference about responsible advocacy and the ethics of dual-use research, respectively, while Runkle co-chaired a session on digital plagiarism.
The Singapore attendees sought a set of “international norms and standards related to research integrity that would accommodate national differences,” said Frankel, who helped organize this year’s conference.
The harmonization of these standards is part of a larger commitment by AAAS to support the international integration of scientific values. The effort also has included three years of top-level discussions between the China Association for Science and Technology (CAST) and AAAS to coordinate work on scientific ethics.
Seeking an international agreement on research integrity is one way to pursue harmonization, said Epstein. “Although many different groups have different conceptions of what a code of conduct should focus on,” he said, “there isn’t any culture in which making up data is good.”
Frankel added that the Singapore Statement “is a start to what we hope will be a global discussion of the issues raised at the conference and a basis for future national or regional ethics guidelines.”
Becky Ham
22 September 2010

September 27, 2010

SINGAPORE STATEMENT on RESEARCH INTEGRITY

Background
The principles and responsibilities set out in the Singapore Statement on Research Integrity represent the first international effort to encourage the development of unified policies, guidelines and codes of conduct, with the long-range goal of fostering greater integrity in research worldwide.
The Statement is the product of the collective effort and insights of the 340 individuals from 51 countries who participated in the 2nd World Conference on Research Integrity. These included researchers, funders, representatives of research institutions (universities and research institutes) and research publishers. The Statement was developed by a small drafting committee (listed below); discussed and commented upon before, during and after the 2nd World Conference; and then finalized for release and global use on 22 September 2010.

Purpose
Publication of the Singapore Statement on Research Integrity is intended to challenge governments, organizations and researchers to develop more comprehensive standards, codes and policies to promote research integrity both locally and on a global basis.
The principles and responsibilities summarized in the Statement provide a foundation for more expansive and specific guidance worldwide. Its publication and dissemination are intended to make it easier for others to provide the leadership needed to promote integrity in research on a global basis, with a common approach to the fundamental elements of responsible research practice.
The Statement is applicable to anyone who does research, to any organization that sponsors research and to any country that uses research results in decision-making. Good research practices are expected of all researchers: government, corporate and academic. To view and download copies of the Statement, click on the links to the right. >>>
_________________________________________________________________
Disclaimer. The Singapore Statement on Research Integrity was developed as part of the 2nd World Conference on Research Integrity, 21-24 July 2010, in Singapore, as a global guide to the responsible conduct of research. It is not a regulatory document and does not represent the official policies of the countries and organizations that funded and/or participated in the Conference. For official policies, guidance, and regulations relating to research integrity, appropriate national bodies and organizations should be consulted. Posted 22 September 2010;
Statement Drafting Committee:
Nicholas Steneck and Tony Mayer, Co-chairs, 2nd World Conference on Research Integrity
Melissa Anderson, Chair, Organizing Committee, 3rd World Conference on Research Integrity
