
August 5, 2014

Researcher’s death shocks Japan - NATURE News

Yoshiki Sasai, one of Japan’s top stem-cell researchers, died this morning (5 August) in an apparent suicide. He was 52.
Sasai, who worked at the RIKEN Center for Developmental Biology (CDB) in Kobe, Japan, was famous for his ability to coax embryonic stem cells to differentiate into other cell types. In 2011, he stunned the world by mimicking an early stage in the development of the eye — a three-dimensional structure called an optic cup — in vitro, using embryonic stem cells.
But lately he had been immersed in controversy over two papers, published in Nature in January, that claimed a simple method of creating embryonic-like cells, called stimulus-triggered acquisition of pluripotency (STAP). Various problems in the papers led to a judgement of scientific misconduct for their lead author, Haruko Obokata, also of the CDB. The papers were retracted on 2 July.
Sasai, who was a co-author of both papers, was cleared of any direct involvement in the misconduct. But he was harshly criticized for a failure of oversight in helping to draft the papers. Some critics, often on the basis of unsupported conjecture, alleged deeper involvement of the CDB. An independent committee recommended on 12 June that the CDB, where Sasai was a vice-director, be dismantled. Sasai had been instrumental in launching the CDB and helped it to develop into one of the world’s premier research centres.
Just after 9 a.m., Sasai was found hanging in a stairwell of the Institute of Biomedical Research and Innovation, next to the CDB, where he also had a laboratory. He was pronounced dead just after 11 a.m., according to reports by Japanese media. A bag found at the scene contained three letters: one addressed to CDB management, one to his laboratory members and one to Obokata.
In a brief statement released this morning, RIKEN president Ryoji Noyori mourned the death of the pioneering researcher. “The world scientific community has lost an irreplaceable scientist,” he said.

July 3, 2014

Research integrity: Cell-induced stress - NATURE News

As a much-hailed breakthrough in stem-cell science unravelled this year, many have been asking: ‘Where were the safeguards?’
It seemed almost too good to be true — and it was. Two papers that offered a major breakthrough in stem-cell biology were retracted on 2 July, mired in a controversy that has damaged the reputation of several Japanese researchers. >>>
 
Haruko Obokata tearfully faces the media after she was found guilty of misconduct in April.

November 8, 2012

Higher education: Call for a European integrity standard - NATURE


The global market for diplomas and academic rankings has had the unintended consequence of stimulating misconduct, from data manipulation and plagiarism to sheer fraud. If incentives for integrity prove too hard to create, then at least some of the reasons for cheating must be obliterated through an acknowledgement of the problem in Europe-wide policy initiatives.
At the Second World Conference on the Right to Education this week in Brussels, we shall propose that the next ministerial communiqué of the Bologna Process in 2015 includes a clear reference to integrity as a principle. The Bologna Process is an agreement between European countries that ensures comparability in the standards and quality of higher-education qualifications.
Furthermore, the revised version of the European Standards and Guidelines for Quality Assurance, to be adopted by the 47 Bologna Process ministers in 2015, should include a standard that is linked to academic integrity (with substantive indicators), which could be added to all national and institutional quality-assurance systems.
We believe that an organization such as the Council of Europe has enforcement capabilities that can create momentum for peer pressure and encourage integrity. A standard-setting text, such as a recommendation by the Council of Ministers, or even a convention on this topic, would be timely given the deepening lack of public trust in higher-education credentials.
We do not expect that a few new international rules alone can change much. But we aim to create ways for institutions to become entrepreneurs of integrity in their own countries, as some models already exist (A. Mungiu-Pippidi and A. E. Dusu Int. J. Educ. Dev. 31, 532–546; 2011).

May 15, 2012

Plagiarism charge for Romanian minister - NATURE

Romania’s new government was thrown into turmoil last week after its education and research minister, Ioan Mang, was accused of extensive plagiarism in at least eight of his academic papers.
The allegations first began circulating on 7 May, just hours after Prime Minister Victor Ponta, a Social Democrat, announced the appointment of Mang and other ministers of the new government. Last week, former prime minister Emil Boc, of the Democratic Liberals, called for Mang’s resignation, dramatically waving the allegedly plagiarized articles and the original papers in front of television cameras.
The scandal has dismayed many Romanian scientists, who are already nervous that the incoming centre-left coalition government might reverse some of the energizing reforms that were introduced by the previous centre-right coalition to improve the country’s sluggish research system.
The radical education and research laws approved last year were designed to introduce competition for positions and research funds, and to eliminate endemic nepotism and other corrupt practices in Romanian academia (see Nature 469, 142–143; 2011). That government also passed a new anti-plagiarism law, which created a Research Ethics Council comprising high-ranking scientists selected by the research minister, and stated that any academic found guilty of such misconduct would automatically lose their job. >>>

April 12, 2012

Honest work - NATURE

Nature 484, 141 (12 April 2012), doi:10.1038/484141b
The plagiarism police deserve thanks for defending the honour of the PhD.
Last week, Hungary's President Pál Schmitt was forced to resign because of plagiarism detected in his 1992 PhD thesis on physical education. Tivadar Tulassay, rector of Budapest's prestigious Semmelweis University, showed admirable courage by standing up to the Hungarian establishment to revoke the thesis a few days earlier, after experts appointed by the university declared that Schmitt's thesis “failed to meet scientific and ethical standards”. Tulassay, a cardiovascular researcher, has since assumed personal responsibility for his university's decision to revoke Schmitt's title.
The affair has remarkable parallels with that of Germany's former defence minister, Karl-Theodor zu Guttenberg, who resigned in March last year after his own PhD thesis, in law, had been revoked by the University of Bayreuth.
Like Schmitt, zu Guttenberg tried at first to deny plagiarism charges, then to underplay them, and he enjoyed powerful political support — until protests by a movement of honest PhD holders made his situation untenable. Plagiarism hunters have other prominent personalities in their sights, and are not necessarily going to be stopped just because a thesis is not in electronic form — if suspicion is high, they will digitize it themselves.
In many central European countries, an academic title is a decided advantage for a political career; clearly, some ambitious politicians think nothing of obtaining such a title by cheating. We can thank the plagiarism hunters — whatever their individual motives — for exposing dishonesty among those who govern us, and for defending the honour of a PhD. The only safe doctorate these days is an honestly acquired one.

October 5, 2011

Science publishing: The trouble with retractions - NATURE

A surge in withdrawn papers is highlighting weaknesses in the system for handling them.
This week, some 27,000 freshly published research articles will pour into the Web of Science, Thomson Reuters' vast online database of scientific publications. Almost all of these papers will stay there forever, a fixed contribution to the research literature. But 200 or so will eventually be flagged with a note of alteration such as a correction. And a handful — maybe five or six — will one day receive science's ultimate post-publication punishment: retraction, the official declaration that a paper is so flawed that it must be withdrawn from the literature.

It is reassuring that retractions are so rare, for behind at least half of them lies some shocking tale of scientific misconduct — plagiarism, altered images or faked data — and the other half are admissions of embarrassing mistakes. But retraction notices are increasing rapidly. In the early 2000s, only about 30 retraction notices appeared annually. This year, the Web of Science is on track to index more than 400 (see 'Rise of the retractions') — even though the total number of papers published has risen by only 44% over the past decade. 
Perhaps surprisingly, scientists and editors broadly welcome the trend. "I don't think there's any doubt that we're detecting more fraud, and that systems are more responsive to misconduct. It's become more acceptable for journals to step in," says Nicholas Steneck, a research ethicist at the University of Michigan in Ann Arbor. But as retractions become more commonplace, stresses that have always existed in the system are starting to show more vividly.
When the UK-based Committee on Publication Ethics (COPE) surveyed editors' attitudes to retraction two years ago, it found huge inconsistencies in policies and practices between journals, says Elizabeth Wager, a medical writer in Princes Risborough, UK, who is chair of COPE. That survey led to retraction guidelines that COPE published in 2009. But it's still the case, says Wager, that "editors often have to be pushed to retract".
Other frustrations include opaque retraction notices that don't explain why a paper has been withdrawn, a tendency for authors to keep citing retracted papers long after they've been red-flagged (see 'Withdrawn papers live on') and the fact that many scientists hear 'retraction' and immediately think 'misconduct' — a stigma that may keep researchers from coming forward to admit honest errors.
Perfection may be too much to expect from any system that has to deal with human error in all its messiness. As one journal editor told Wager, each retraction is "painfully unique".
But as more retractions hit the headlines, some researchers are calling for ways to improve their handling. Suggested reforms include better systems for linking papers to their retraction notices or revisions, more responsibility on the part of journal editors and, most of all, greater transparency and clarity about mistakes in research.
The reasons behind the rise in retractions are still unclear. "I don't think that there is suddenly a boom in the production of fraudulent or erroneous work," says John Ioannidis, a professor of health policy at Stanford University School of Medicine in California, who has spent much of his career tracking how medical science produces flawed results.
In surveys, around 1–2% of scientists admit to having fabricated, falsified or modified data or results at least once (D. Fanelli PLoS ONE 4, e5738; 2009). But over the past decade, retraction notices for published papers have increased from 0.001% of the total to only about 0.02%. And, Ioannidis says, that subset of papers is "the tip of the iceberg" — too small and fragmentary for any useful conclusions to be drawn about the overall rates of sloppiness or misconduct.
Instead, it is more probable that the growth in retractions has come from an increased awareness of research misconduct, says Steneck. That's thanks in part to the setting up of regulatory bodies such as the US Office of Research Integrity in the Department of Health and Human Services. These ensure greater accountability for the research institutions, which, along with researchers, are responsible for detecting mistakes.
The growth also owes a lot to the emergence of software for easily detecting plagiarism and image manipulation, combined with the greater number of readers that the Internet brings to research papers. In the future, wider use of such software could cause the rate of retraction notices to dip as fast as it spiked, simply because more of the problematic papers will be screened out before they reach publication. On the other hand, editors' newfound comfort with talking about retraction may lead to notices coming at an even greater rate.
"Norms are changing all the time," says Steven Shafer, editor-in-chief of the journal Anesthesia & Analgesia, who has participated in two major misconduct investigations — one of which involved 11 journals and led to the retraction of some 90 papers.

It's none of your damn business!
But willingness to talk about retractions is hardly universal. "There are a lot of publishers and a lot of journal editors who really don't want people to know about what's going on at their publications," says New York City-based writer Ivan Oransky, executive editor at Reuters Health. In August 2010, Oransky co-founded the blog Retraction Watch with Adam Marcus, managing editor at Anesthesiology News. Since its launch, Oransky says, the site has logged 1.1 million page views and has covered more than 200 retractions.
In one memorable post, the reporters describe ringing up one editor, L. Henry Edmunds at the Annals of Thoracic Surgery, to ask about a paper withdrawn from his journal (see go.nature.com/ubv261). "It's none of your damn business!" he told them. Edmunds did not respond to Nature's request to talk for this article.
The posts on Retraction Watch show how wildly inconsistent retraction practices are from one journal to the next. Notices range from informative and transparent to deeply obscure. A typically unhelpful example of the genre would be: "This article has been withdrawn at the request of the authors in order to eliminate incorrect information." Oransky argues that such obscurity leads readers to assume misconduct, as scientists making an honest retraction would, presumably, try to explain what was at fault.
To Drummond Rennie, deputy editor of the Journal of the American Medical Association, there are two obvious reasons for obscure retraction notices: "fear and work."
The fear factor, says Wager, is because publishers are very frightened of being sued. "They are incredibly twitchy about publishing anything that could be defamatory," she says.
'Work' refers to the phenomenal effort required to sort through authorship disputes, concerns about human or animal subjects, accusations of data fabrication and all the other ways a paper can go wrong. "It takes dozens or hundreds of hours of work to get to the bottom of what's going on and really understand it," says Shafer. Because most journal editors are scientists or physicians working on a voluntary basis, he says, that effort comes out of their research and clinical time.
But the effort has to be made, says Steneck. "If you don't have enough time to do a reasonable job of ensuring the integrity of your journal, do you deserve to be in business as a journal publisher?" he asks. Oransky and Marcus have taken a similar stance. This summer, for example, Retraction Watch criticized the Journal of Neuroscience for a pair of identical retraction notices it published on 8 June: "At the request of the authors, the following manuscript has been retracted."
But the journal's editor-in-chief, neuroscientist John Maunsell of Harvard Medical School in Boston, Massachusetts, argues that such obscurity is often the most responsible course to take. "My feeling is that there are far fewer retractions than there should be," says Maunsell, who adds that he has conducted 79 ethics investigations in more than 3 years at the journal — 1 every 2–3 weeks. But "authors are reluctant to retract papers", he says, "and anything we put up in the way of a barrier or disincentive is a bad thing. If authors are happier posting retractions without extra information, I'd rather see that retraction go through than provide any discouragement."
At the heart of these arguments, says Steneck, lie shifting norms of how responsible journal editors should be for the integrity of the research process. In the past, he says, "they felt that institutions and scientists ought to do it". More and more journal editors today are starting to embrace the gatekeeper role. But even now, Shafer points out, they have only limited authority to challenge institutions that are refusing to cooperate. "I have had institutions, where I felt there was very clear misconduct, come back and tell me there was none," Shafer says. "And I have had a US institution tell me that they would look into allegations of misconduct only if I agreed to keep the results confidential."

The blame game
Discussions on Retraction Watch make it clear that many scientists would like to separate two aspects of retraction that seem to have become tangled together: cleaning up the literature, and signalling misconduct. After all, many retractions are straightforward and honourable. In July, for example, Derek Stein, a physicist at Brown University in Providence, Rhode Island, retracted a paper in Physical Review Letters on DNA in nanofluidic channels when he found that a key part of the analysis had been performed incorrectly. His thoroughness and speed — the retraction came just four months after publication — were singled out for praise on Retraction Watch.
But because almost all of the retractions that hit the headlines are dramatic examples of misconduct, many researchers assume that any retraction indicates that something shady has occurred. And that stigma may dissuade honest scientists from doing the right thing. One American researcher who talked to Nature about his own early-career retraction said he hoped that his decision would be seen as a badge of honour. But, even years later and with his career established, he still did not want Nature to use his name or give any details of the case.
There is no general agreement about how to reduce this stigma. Rennie suggests reserving the retraction mechanism exclusively for misconduct, but that would require the creation of a new term for withdrawals owing to honest mistakes. At the other extreme, Thomas DeCoursey, a biologist at Rush University Medical Center in Chicago, argues for retraction of any paper that publishes results that are not reproducible. "It does not matter whether the error was due to outright fraud, honest mistakes or reasons that simply cannot be determined," he says.
A better vocabulary for talking about retractions is needed, says Steneck — one acknowledging that retractions are just as often due to mistakes as to misconduct. Also useful would be a database for classifying retractions. "The risk for the research community is that if it doesn't take these problems more seriously, then the public — journalists, outsiders — will come in and start to poke at them," he points out.
The only near-term solution comes back to transparency. "If journals told readers why a paper was retracted, it wouldn't matter if one journal retracted papers for misconduct while another retracted for almost anything," says Zen Faulkes, a biologist at the University of Texas–Pan American in Edinburg, Texas.
Oransky agrees. "I think that what we're advocating is part of a much larger phenomenon in public life and on the Web right now," he says. "What scientists should be doing is saying, 'In the course of what we do are errors, and among us are also people that commit misconduct or fraud. Look how small that number is! And here's what we're doing to root that out.'"
Richard Van Noorden is an assistant news editor for Nature in London.

May 25, 2011

Copy and paste - NATURE

Editorial
Nature 473, 419–420 (26 May 2011)
A slow university investigation into serious accusations of misconduct benefits no one.
As retractions go, it may not look like a big deal. Earlier this month, a statistics journal decided to pull a little-cited 2008 paper on the social networks of author–co-author relationships after it emerged that sections were plagiarized from textbooks and Wikipedia. The fact that this caused a wave of glee to ripple through the climate-change blogosphere takes some explaining.
Two of the paper's authors, Yasmin Said and Edward Wegman, both of George Mason University in Fairfax, Virginia, are also authors of an infamous 2006 report to Congress, co-written with statistician David Scott of Rice University in Houston, Texas. That report took aim at climatologist Michael Mann of Pennsylvania State University in University Park, suggesting that he was working in an isolated social network separated from “mainstream statisticians”, and that he had such close ties with the rest of the field that truly independent peer review of his work was not possible. This report came to be known as the Wegman report, and has been frequently cited by climate-change sceptics.
This social-network analysis of Mann and his co-authors — with Mann's name removed — was cut down to an academic paper and published two years later in the journal Computational Statistics & Data Analysis. It is this paper that the journal has decided to retract. So it seems likely that the plagiarism in the 2008 paper is also present in the 2006 Congress report. Still not look like a big deal?
That doubts about the 2006 report have resulted in concrete action is mainly down to the sterling work of an anonymous climate blogger called Deep Climate. His website first reported plagiarism in a different section of the congressional report in December 2009. One of those whose work was plagiarized is Raymond Bradley, director of the Climate System Research Center at the University of Massachusetts, Amherst. Ironically, Bradley was one of the co-authors of the climate reconstructions criticized by the Wegman report. Bradley, alerted by Deep Climate, complained to George Mason University on 5 March last year.
Wegman has blamed a graduate student for the plagiarism. Daniel Walsch, spokesperson for George Mason University, says that an internal review of the matter began in the autumn. He cannot estimate when that review will be complete, and, until it is, he says, the university regards it as a “personnel matter” and will not comment further. He adds that the review is still in the “inquiry” phase to ascertain whether a full investigation should be held. “Whether it is fast or slow is not as important as it being thorough and fair,” says Walsch.
The fact that 14 months have passed since Bradley's complaint without it being resolved is disheartening but not unusual. An examination of George Mason University's misconduct policies suggests that investigations should be resolved within a year of the initial complaint, including time for an appeal by the faculty member in question. According to the university's own timeline, the initial inquiry should have been complete within 12 weeks of the initial complaint — in May 2010. But there are loopholes galore for extensions, and, like many universities, George Mason seems content to drag its feet.
Long misconduct investigations do not serve anyone, except perhaps university public-relations departments that might hope everyone will have forgotten about a case by the time it wraps up. But in cases such as Wegman's, in which the work in question has been cited in policy debates, there is good reason for haste. Policy informed by rotten research is likely to have its own soft spots. Those who have been wronged deserve resolution of the matter. And one can hardly suppose that those who have been wrongfully accused enjoy living under a cloud for months.
So, what incentives do universities have to pick up the pace? Agencies such as the US Office of Research Integrity and ethics offices at funding bodies should take universities to task for slow investigations and demand adherence to the schedules listed in university policies. However, the agencies themselves haven't exactly been models of swift justice. The most recent annual report from the Office of Research Integrity — for 2008 — reported that the cases closed in that year spent a mean of 14.1 months at the agency. Perhaps it should fall to accreditation agencies to push for speedy investigations. Tom Benberg, vice-president of the Commission on Colleges of the Southern Association of Colleges and Schools — the agency that accredits George Mason University — says that his agency might investigate if the university repeatedly ignored its own policies on the timing of misconduct inquiries. To get the ball rolling, he says, someone would have to file a well-documented complaint.
Even if funding and accreditation agencies fail to apply pressure, universities should take the initiative to move investigations along as speedily as possible while allowing time for due process. Once an investigation is complete, the institution should be as transparent as it can about what happened. Especially when public funds are involved, or at public universities, the taxpayer has a right to know what happened when papers are retracted — even if the faculty member in question is eventually exonerated. This tidies the scientific record, clears the air and kicks the legs out from under any conspiracy theories. Over to you, George Mason University.

March 14, 2011

Notes on a scandal

EDITORIAL
Nature 471, 135–136, doi:10.1038/471135b

How an organism is affected by a particular gene mutation, as every geneticist knows, depends on that organism's genetic background. Although an obesity mutation introduced into one strain of mouse might produce a fat animal with diabetes, the same mutation in a mouse strain of slightly different genetic background could create a fat but otherwise healthy animal.
Similarly, the effects of a cry of academic distress seem to depend on a community's societal background. How else to explain the contrasting results of two academic revelations: the plagiarism affair that consumed Germany for two weeks until academic disapproval forced the resignation of the defence minister, Karl-Theodor zu Guttenberg, on 1 March — and an exposé of comparable wrongdoing by the Italian minister of education, Mariastella Gelmini, in 2008, which had zero impact.
The German scandal broke on 16 February, when the daily newspaper Süddeutsche Zeitung revealed that the hugely popular Guttenberg had apparently taken a short cut to his doctorate in law by copying other published works without attribution in his thesis. The report sparked an intense reaction hard to imagine in countries such as the United States and Britain, where the academic achievements (if any) or failures of politicians are not considered serious issues.
German citizens looked to the Internet to discover the extent of Guttenberg's plagiarism, which turned out to be quite shameless. The University of Bayreuth withdrew his PhD and is now investigating whether he had just been careless or had intended to deceive. At first, Guttenberg attempted to underplay the importance of “inadequate footnotes” in a thesis; the issue faded to insignificance, he implied, next to his momentous political mission of reorganizing the German armed forces and controlling their presence in Afghanistan. His popularity among the general public remained undiminished, and Chancellor Angela Merkel, herself a PhD physicist, tried to limit damage to her government by saying that she had “hired a politician, not a scientific assistant”. That was a fatal mistake. Within days, tens of thousands of PhD holders had signed a letter deploring her “mockery” of an academic system that represented decency, honour and responsibility — attributes that they insisted should be reflected in a democratic government. Crushed by this attack of righteousness, Guttenberg finally resigned.
Like Guttenberg, Gelmini was a graduate in law. And like him, she felt that her driving ambition justified taking short cuts in academic procedures to get the degree that would help her political career. In 2001 she travelled from her home town of Brescia in the north of Italy to Reggio Calabria, in the far south, to sit her bar exams. At the time, pass rates in the north were below 10%, compared with a suspicious rate of more than 90% in Reggio Calabria, a city otherwise known for low academic standards. After the press revealed the Reggio Calabria bar exam to be a scam, the Italian academic community called for Gelmini's resignation — to no avail. The irony of having a minister with responsibility for universities who herself cheerfully admits to having dodged academic rules is not lost on the community.
In Germany, Italy and neighbouring countries in Europe, politicians are frequently drawn from academia. Credentials help political careers, and nearly 20% of the German parliament hold PhDs. But then, almost 9% of Italian parliamentarians are university professors, so the differing reactions to calls for resignation prompted by scholastic misdemeanours cannot be down to ignorance about how universities work. Instead, the difference seems to be based on how large a threat each government considers the weapon of moral correctness to be — and how dangerous is the academic community wielding that weapon.
Should anyone really have expected the government of Silvio Berlusconi to fear such a weapon?
It is more surprising, and gratifying, to find that in Germany, one of the world's richest and most powerful countries, rage against an academic cheat can provoke serious consequences. Not only was Guttenberg popular, but he hadn't previously made any serious political errors that would have seen charges of plagiarism considered the last straw.
Still, there may not be a lesson for many other countries here. Germany is known as the 'country of poets and philosophers' — a rare societal background, and one apparently conducive to propagation of honourable academic values. Like our more fortunate mutant mouse, all there seems plump and healthy, even as it remains unfathomably mysterious to those on the outside.

November 4, 2010

A painful remedy - NATURE

EDITORIAL
Nature 468, 6, doi:10.1038/468006b
Published online 03 November 2010

The number of papers being retracted is on the rise, for reasons that are not all bad.
Few experiences can be more painful to a researcher than having to retract a research paper. Some papers die quietly, such as when other scientists find that the work cannot be replicated and simply ignore it. Yet, as highlighted by several episodes in recent years, the most excruciating revelation must be to find not only that a paper is wrong, but that it is the result of fraud or fabrication — a finding that itself requires months or years of investigation. Where once the research seemed something to be exceptionally proud of, the damage caused by fraudulent work can spread much wider, as discovered by associates of the German physicist Jan Hendrik Schön and the South Korean stem-cell biologist Woo Suk Hwang. But whatever the reason for a retraction, all of the parties involved — journals included — need to face up to it promptly.
This year, Nature has published four retractions, an unusually large number. In 2009 we published one. Throughout the past decade, we have averaged about two per year, compared with about one per year in the 1990s, excluding the pulse of retractions of papers co-authored by Schön.
Given that Nature publishes about 800 papers a year, the total is not particularly alarming, especially because only some of the retractions are due to proven misconduct. A few of the Nature research journals have also had to retract papers in recent years, but the combined data do no more than hint at a trend. A broader survey revealed even smaller proportions: in 2009, Times Higher Education commissioned a survey by Thomson Reuters that counted 95 retractions among 1.4 million papers published in 2008. But the same survey showed that, since 1990 — during which time the number of published papers doubled — the proportion of retractions increased tenfold (see http://go.nature.com/vphd17).
So why the increase? More awareness of misconduct by journals and the community, an increased ability to create and to detect unduly manipulated images, and greater willingness by journals to publish retractions must account for some of this rise. One can also speculate about the increasing difficulty for senior researchers of keeping track of the detail of what is happening in their labs. This is of concern not just because of the rare instances of misconduct, but also because of the risk of sloppiness and of errors not being caught. Any lab with more than ten researchers may need to take special measures if a principal investigator is to be able to assure the quality of junior members' work.
The need for quality assurance and the difficulties of doing it are exacerbated when new techniques are rapidly taken up within what is often a highly competitive community. And past episodes have shown the risk that collaborating scientists — especially those who are geographically distant — may fail to check data from other labs for which, as co-authors, they are ultimately responsible.
If we at Nature are alerted to possibly false results by somebody who was not an author of the original paper, we will investigate. This is true even if the allegations are anonymous — some important retractions in the literature have arisen from anonymous whistle-blowing. However, we are well aware of the great damage that can be done to co-authors as a result of such allegations, especially when the claims turn out to be false. Such was the case with a recent e-mail alert widely distributed by a group calling itself Stem Cell Watch (see Nature 467, 1020; 2010) — an action that we deplore.
For our part, we are sensitive to such concerns and will bear in mind the need to protect the interests of authors until our obligation to the community at large becomes clear. But then we will publish a retraction promptly, and link to it prominently from the original papers. We will also list the retraction on our press release if the original paper was itself highlighted to the media.
Ultimately, it comes down to the researchers — those most affected by the acts — to remain observant and diligent in pursuing their concerns wherever they lead, and where necessary, to correct the literature promptly. Too often, such conscientious behaviour is not rewarded as it should be.

September 9, 2010

Chinese journal finds 31% of submissions plagiarized

Yuehong Zhang
Nature 467, 153 (9 September 2010)
doi:10.1038/467153d


Since October 2008, we have detected unoriginal material in a staggering 31% of papers submitted to the Journal of Zhejiang University–Science (692 of 2,233 submissions). The publication, designated as a key academic journal by the National Natural Science Foundation of China, was the first in China to sign up for CrossRef's plagiarism-screening service CrossCheck (Nature 466, 167; 2010).
We are therefore campaigning for authors, researchers and editors to be on the alert for plagiarism and to work against cultural misunderstandings. In ancient China, for example, students were typically encouraged to copy the words of their masters.
To this end, we have given lectures and written three papers (including Y. H. Zhang Learn. Publ. 23, 9–14; 2010) that have been widely publicized in China's media (see http://go.nature.com/dPey7X; in Chinese) and reported in CrossRef's quarterly online news magazine (see http://go.nature.com/icUwvh). Our website displays the CrossCheck logo to remind authors of their responsibilities.
Other Chinese journals are also policing plagiarism, using software launched in 2008 by China's Academic Journals Electronic Publishing House and Tongfang Knowledge Network Technology in Beijing.

July 8, 2010

Journals step up plagiarism policing

Cut-and-paste culture tackled by CrossCheck software.
Declan Butler
Major science publishers are gearing up to fight plagiarism. The publishers, including Elsevier and Springer, are set to roll out software across their journals that will scan submitted papers for identical or paraphrased chunks of text that appear in previously published articles. The move follows pilot tests of the software that have confirmed high levels of plagiarism in articles submitted to some journals, according to an informal survey by Nature of nine science publishers. Incredibly, one journal reported rejecting 23% of accepted submissions after checking for plagiarism.
Over the past two years, many publishers (including Nature Publishing Group) have been trialling CrossCheck, a plagiarism checking service launched in June 2008 by CrossRef, a non-profit collaboration of 3,108 commercial and learned society publishers. The power of the service — which uses the iThenticate plagiarism software produced by iParadigms, a company in Oakland, California — is the size of its database of full-text articles, against which other articles can be compared. Publishers subscribing to CrossCheck must agree to share their own databases of manuscripts with it. So far, 83 publishers have joined the database, which has grown to include 25.5 million articles from 48,517 journals and books.
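To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of word-n-gram "shingling" comparison that plagiarism screeners are built on. It is illustrative only: the iThenticate algorithms and corpus behind CrossCheck are proprietary, and every function name and the 10% threshold below are hypothetical.

    # Illustrative toy only: a word-n-gram overlap check, not iThenticate's method.
    def shingles(text, n=8):
        """Return the set of overlapping n-word sequences in a text."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def similarity(submission, published, n=8):
        """Fraction of the submission's n-grams that also occur in a published article."""
        sub = shingles(submission, n)
        if not sub:
            return 0.0
        return len(sub & shingles(published, n)) / len(sub)

    def flag_suspects(submission, corpus, threshold=0.10):
        """Yield (article_id, score) for corpus articles above the threshold.

        'corpus' maps article identifiers to full text; a real service matches
        against millions of indexed articles rather than looping pairwise.
        """
        for article_id, text in corpus.items():
            score = similarity(submission, text)
            if score >= threshold:
                yield article_id, score

Even in this toy form, the output is only a percentage-overlap score; as the editors quoted below stress, deciding whether a flagged overlap is boilerplate methods text, legitimate self-citation or outright plagiarism still takes expert human judgement.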
Catching copycats
As publishers have expanded their testing of CrossCheck in the past few months, some have discovered staggering levels of plagiarism, from self-plagiarism, to copying of a few paragraphs or the wholesale copying of other articles. Taylor & Francis has been testing CrossCheck for 6 months on submissions to three of its science journals. In one, 21 of 216 submissions, or almost 10%, had to be rejected because they contained plagiarism; in the second journal, that rate was 6%; and in the third, 13 of 56 articles (23%) were rejected after testing, according to Rachael Lammey, a publishing manager at Taylor & Francis's offices in Abingdon, UK.
The three journals were deliberately selected because they had seen instances of plagiarism in the past, says Lammey. "My suspicion is that when we roll this out to other journals the numbers would be significantly lower." Mary Ann Liebert, a publishing company in New Rochelle, New York, has found that 7% of accepted articles in one of its journals had to be rejected following testing, says Adam Etkin, director of online and Internet services at the company.
CrossRef's product manager for CrossCheck, Kirsty Meddings, based in Oxford, UK, says that publishers are now checking about 8,000 articles a month, but many say that they have few hard statistics on the levels of plagiarism they are finding. Most are delegating CrossCheck testing to journal editors, and have not yet compiled detailed results. "We leave the use of the service to the discretion of the editor-in-chief of the journal, with some choosing to check every submission, but most use it only to check articles they consider suspicious," says Catriona Fennell, director of journal services at Elsevier in Amsterdam. "We are seeing a really wide variety of usage."
Publishers are unsure whether plagiarism is on the increase, whether it is simply being discovered more often, or both. "Not so many years ago, we got one or two alleged cases a year. Now we are getting one or two a month," says Bernard Rous, director of publications at the Association for Computing Machinery in New York, the world's biggest learned society for scientific computing, which is in the early stages of implementing CrossCheck. "There probably is more plagiarism than people have been aware of," adds Lammey.
Casting the net wider
The levels of plagiarism uncovered by CrossCheck have been more than enough to persuade publishers to embrace the software. "As you can see, CrossCheck is having an effect both on the papers we review and those we accept for publication, and with this in mind, we're keen to roll this trial out to our other journals," says Lammey. Most of the publishers interviewed by Nature said they had similar plans.
Using the CrossCheck software brings extra costs and overheads for journals. The fees, which start at $0.75 per article checked and decrease with volume, seem reasonable to publishers. The bigger overhead, they say, is the time needed for editors to check papers flagged by the software as suspiciously similar.
Establishing plagiarism requires "expert interpretation" of both articles, says Fennell. The software gives an estimate of the percentage similarity between a submitted article and ones that have already been published, and highlights text they have in common. But similar articles are sometimes false positives, and some incidents of plagiarism are more serious than others.
Self-plagiarism of materials and methods can sometimes be valid, for example, says Fennell. "There are only so many different ways you can describe how to run a gel," she says. "Plagiarism of results or the discussion is a greater concern." Sorting out acceptable practice from misconduct can often take a lot of time, says Lammey.
Overall, publishers say that they are delighted to have a tool to police submissions. "We are using CrossCheck on about a dozen journals, and it has spotted things that we would otherwise have published," says Aldo de Pape, manager of science and business publishing operations at Springer in Rotterdam, the Netherlands. "Some were very blatant unethical cases of plagiarism. It has saved us a lot of embarrassment and trouble."

Plagiarism pinioned

NATURE/EDITORIAL  doi:10.1038/466159b Published online 07 July 2010
There are tools to detect non-originality in articles, but instilling ethical norms remains essential.
It is both encouraging and disheartening to hear that major science publishers intend to roll out the CrossCheck plagiarism-screening service across their journals (see page 167).
What is encouraging is that many publishers are not only tackling plagiarism in a systematic way, but have agreed to do so by sharing the full text of their articles in a common database. This last was not a given, considering the conservatism of some companies, yet it was a necessary step for the service to function — the iThenticate software used by CrossCheck works by comparing submitted articles against a database of existing articles. CrossCheck's 83 members have already made available the full text of more than 25 million articles.
What is disheartening is that plagiarism seems pervasive enough to make such precautions necessary. In one notable pilot of the system on three journals, their publisher had to reject 6%, 10% and 23% of accepted papers, respectively.
Granted, there are reasons to believe that such levels of plagiarism are exceptional. Previous studies of samples on the physics arXiv preprint server (see Nature 444, 524–525; 2006) and of PubMed abstracts (see Nature doi:10.1038/news.2008.520; 2008) found much lower rates. But the reality is that data are sorely lacking on the true extent of plagiarism, whether its prevalence is growing substantially and what differences might exist between disciplines. The hope is that the roll-out of CrossCheck will eventually yield reliable data on such questions over wide swathes of the literature — while also acting as a powerful deterrent to would-be plagiarists.
In the process, editors and publishers must remember that plagiarism comes in many varieties and degrees of severity, and that responses should be proportionate. For example, past studies suggest that self-plagiarism, in which a researcher copies his or her own words from a published paper, is far more common than plagiarism of the work of others. Arguably, self-plagiarism can sometimes be justified, as when a researcher is bringing similar ideas before readers of journals in a different field. Plagiarism can also involve honest errors or mitigating circumstances, such as a scientist with a poor command of English paraphrasing some sentences of the introduction from similar work.
Such examples underscore that plagiarism-detection software is an aid to, not a substitute for, human judgement. One rule of thumb used by Nature journals and others in considering an article's degree of similarity to past articles — in particular, for small amounts of self-plagiarism in review articles — is whether the paper is otherwise of sufficient originality and interest.
Nature Publishing Group is a member of CrossCheck and has been testing the service on submissions to its own journals. It has noted only trace levels of plagiarism in research articles, which are spot-checked, and often in only the supplementary methods. Plagiarism has been more common in submitted reviews, all of which are tested. This is particularly true in clinical reviews, although the rates are still far below the 1% mark, and in most instances concerned some level of self-plagiarism.
Although the ability to detect plagiarism is a welcome advance, addressing the problem at its source remains the key issue. More and more learned societies, research institutions and journals have in recent years adopted comprehensive ethical guidelines on plagiarism, many of which carefully distinguish between different levels of severity. It is crucial that research organizations in all countries, and particularly the mentors of young researchers, instil in their scientists the accepted norms of the international scientific community when it comes to plagiarism and publication ethics.

January 13, 2010

Publish or perish in China

The latest in a string of high-profile academic fraud cases in China underscores the problems of an academic-evaluation system that places disproportionate emphasis on publications, critics say. Editors at the UK-based journal Acta Crystallographica Section E last month retracted 70 published crystal structures that they allege are fabrications by researchers at Jinggangshan University in Jiangxi province. Further retractions, the editors say, are likely.>>>

December 10, 2009

Plagiarism scandal grows in Iran

Investigation finds more cases of duplication in publications co-authored by ministers and senior officials.
EXCLUSIVE
Nature has uncovered further instances of apparent plagiarism in papers co-authored by government ministers and senior officials in Iran. The spate of new examples raises questions about whether such incidents are symptomatic of conditions also common in other developing countries — such as difficulties with English or pressure to acquire academic credentials as a prerequisite for promotion — or whether they are also linked specifically to the Iranian regime, where growth of a merit-based university culture has been undermined by political appointments and purges of reform-minded scientists (see page 699).>>>

October 12, 2009

Analysis of retractions puts spotlight on academia

Nicola Jones

Nature Medicine 15, 1101 (2009)
doi:10.1038/nm1009-1101


About half of the medical papers retracted over the past few decades were pulled because of misconduct rather than an innocent mistake, according to two new studies. And that fraction is on the increase.
Yet although drug companies are often portrayed by the popular press as the source of all evil in biomedical publishing, just 4% of retractions due to misconduct had declared pharmaceutical sponsorship.>>>

January 1, 2009

Problems with anti-plagiarism database

Mauno Vihinen

SIR — Sophisticated tools have been developed to detect duplicate publication and plagiarism, as noted in M. Errami and H. Garner’s Commentary ‘A tale of two citations’ (Nature 451, 397–399; 2008) and in your News story ‘Entire-paper plagiarism caught by software’ (Nature 455, 715; 2008). >>>

October 9, 2008

Entire-paper plagiarism caught by software - NATURE

Thousands of 'similarities' found between papers.
>>>
Many of the duplicates in Deja Vu come from non-English-speaking countries, and some scientists have asserted that a degree of plagiarism is justified as a way of improving the English of their texts (see Nature 449, 658; 2007). "There definitely is a cultural component," says Garner, "but this appears to be an equal-opportunity behaviour, with scientists from across the world involved."
When confronted with their plagiarism, some researchers can be brazen. One offender, whose paper shared 99% of its text with an earlier report, wrote to Garner: "I seize the opportunity to congratulate [the authors of the original paper] for their previous and fundamental paper — in fact that article inspired our work."

June 19, 2008

Repairing research integrity : COMMENTARY: NATURE

A survey suggests that many research misconduct incidents in the United States go unreported to the Office of Research Integrity. Sandra L. Titus, James A. Wells and Lawrence J. Rhoades say it’s time to change that.>>>

Scientific misconduct: Tip of the iceberg?

Editor's Summary

A survey of US researchers suggests that scientific misconduct is greatly under-reported. The Office of Research Integrity was told of only 201 instances of likely misconduct relating to work funded by the Department of Health and Human Services in three years. Yet extrapolation from the survey predicts that over 2,300 observations of potential misconduct are made yearly. Sandra Titus, James Wells and Lawrence Rhoades argue that science can and should clean up its act, and recommend six strategies to that end.
