Showing posts with label Ars Technica.

July 19, 2012

Epic fraud: How to succeed in science (without doing any) - ArsTechnica

Running scientific experiments is, frankly, a pain in the ass. Sure, it's incredibly satisfying when days or weeks of hard work produce a clean-looking result that's easy to interpret. But often as not, experiments simply fail for no obvious reason. Even when they work, the results often leave you scratching your head, wondering "what in the world is that supposed to tell me?"
The simplest solution to these problems is obvious: don't do experiments. (Also, don't go out into the field to collect data, which adds the hazards of injury, sunburn, and exotic disease to the mix.) Unfortunately, data has somehow managed to become the foundation of modern science—so you're going to need to get some from somewhere if you want a career. A few brave souls have figured out a way to liberate data from the tyranny of experimentation: they simply make it up.
Dr. Yoshitaka Fujii seems well on his way to becoming the patron saint of scientific fraudsters, setting a record for the most extensive output of fake data. As near as anyone can work out, Fujii started making up data with abandon some time in the 1990s. By 2000, his fellow researchers were already on to him, publishing a comment in which they noted, "We became skeptical when we realized that side effects were almost always identical in all groups."
But you can't let such skepticism from your peers slow you down—and Fujii certainly didn't. Even after the comment was published, two different medical schools hired him as a faculty member. He continued to publish, generally using faked data, racking up an eventual record of 200+ bogus papers. 
Nobody took any responsibility for investigating the prospect of fraud, despite requests made by other researchers who suspected something was amiss. It took until 2011 for the editors of several journals that were victimized by Fujii to band together and hire an outside investigator, who found extensive evidence that the data reported by Fujii was unlikely to have resulted from actual experiments. That finally prompted Toho University, his current employer, to launch its own investigation (PDF). Conclusion: almost none of Fujii's publications were free of falsified data.
Decades of scientific fraud simply shouldn't be this easy. Yet Fujii, along with a few other serial fraudsters, has somehow managed it year after year. In tribute to his staggering success, Ars presents this handy guide on how to get away with faking your data, based on the most popular techniques used in the biggest cases of scientific fraud (so far). Hopefully, it will help answer one of the key questions looming over the Fujii story: In a world of hard data and peer review, just how was such a colossal fraud even possible?

June 20, 2012

The ethics of recycling content: Jonah Lehrer accused of self-plagiarism - Op-ed: Is it OK to reuse old work? That's a loaded question with many variables. - ArsTechnica

Jonah Lehrer has long been one of the rising stars of the science writing world. I was a huge fan of his work when he wrote for Wired (a sister publication of Ars) and was happy when he recently moved to the New Yorker full-time (another Conde Nast publication). That continued rise might be imperiled now, however, after the discovery of several instances of Lehrer re-using earlier work he did for a different publication.
Yesterday morning, Jim Romenesko, a well-known media watcher, noticed striking similarities between a piece by Lehrer published last week in the New Yorker and one that Lehrer wrote for the Wall Street Journal last October. The blogosphere being what it is, it wasn't long before others were digging. More than a handful of other instances were quickly uncovered—enough to suggest a pattern of carelessness rather than a single misfortune. Writers beware: in the age of crowdsourcing, this sort of investigation is child's play.
A day later, and the Twittersphere being what it is, there's been much discussion on the topic. Can you really plagiarize yourself? Is it plagiarism to get paid to give talks that rehash work you've written? Is it plagiarism to give the same talk to different audiences? >>>

May 10, 2011

Research ethics: a review of On Fact and Fraud (Ars Technica)


David Goodstein has a unique perspective on scientific fraud, having pursued a successful career in research physics before becoming the provost of Caltech, one of the world's premier research institutions. As an administrator, he helped formulate Caltech's first policy for scientific misconduct and applied it to a number of prominent cases—all of which should put him in an excellent position to provide a rich and comprehensive overview of scientific frauds and other forms of research misconduct.
Unfortunately, his book On Fact and Fraud doesn't quite live up to this promise. Goodstein devotes most of the book to case studies of fraud or potential misconduct. Although many of the individual chapters are excellent, they don't come together to form a coherent picture of what constitutes misconduct or how to recognize it.>>>

July 27, 2010

Scientists informally intervene in cases of sloppy research - Ars Technica

John Timmer
Most people involved in scientific research are well aware of the big three ethical lapses: fabrication, falsification, and plagiarism. These acts are considered to have such a large potential for distorting the scientific record that governments, research institutions, and funding bodies generally have formal procedures to investigate incidents, and formal sanctions for those found to have infringed. But the big three are hardly a complete list of all the problems that can produce misleading results; anything from poor record-keeping to sloppy techniques can cause errors to creep into the scientific literature, and there are rarely formal procedures to deal with them.
That doesn't mean they're not dealt with, however. A survey published by Nature has found that researchers regularly engage in informal interventions with colleagues if they suspect that there's any form of misconduct going on—even if they think the problems are inadvertent.
The survey asked about what its authors term "acts that could corrupt the scientific record," and defined them very broadly to include things like "poor supervision of assistants, carelessness, authorship disputes, failure to follow the rules of science, conflicts of interest, incompetence, and hostile work environments that impact on research quality." To get a sense for how these are dealt with, they looked up several thousand researchers who have received funding from the National Institutes of Health, and asked them to fill out an online survey.
The questions in the survey, as well as the responses of those queried, have been posted in a PDF at the authors' website.

Good news and bad news
The majority of the 2,600 researchers who responded had experienced a case where they suspected scientific errors were occurring—84 percent, in total. The authors ascribe this number, which is much higher than most other estimates, to the loose definition of misconduct that they provided. An alternate explanation might be that the self-selecting group that responded did so in part because its members were already aware of these issues. The authors omitted the 400 or so who had never noticed misconduct from most of their further analysis.
The good news for the scientific community is that, when researchers became aware of potential problems, they were fairly likely to do something about it. Almost two-thirds reported taking some type of action about the issues they noticed. Of the remainder, most felt either that action was already underway or that they were too removed from the lab in question to have a good sense of how to intervene.
Over 30 percent of those who acted went straight to the source and had a discussion with the person they felt was having troubles. Another eight percent sent a message of concern to that individual (90 percent of these were signed), while 16 percent alerted someone in a position of authority about the trouble.
In about 21 percent of the cases where someone chose to intervene, the issue got bumped up to formal proceedings. Some of these may have been the result of denial on the part of the people involved (19 percent of the responses) or cases where the individuals failed to act at all (another 14 percent). Still, there were some good outcomes; in about 30 percent of the cases, the problem was either corrected or it was recognized that it was too late to do anything about it. One striking number here was that, out of all these instances, only a fraction of a percent turned out to be cases where the worries about problems were unwarranted.
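To keep the nested percentages straight (some are shares of all respondents, others are shares of those who acted), here's a rough back-of-the-envelope tally implied by the figures above. These counts are approximations reconstructed from the reported percentages; the survey PDF has the exact numbers.

    # Approximate funnel implied by the survey's reported percentages
    respondents = 2600
    noticed = round(respondents * 0.84)  # ~2,184 suspected a problem at some point
    set_aside = respondents - noticed    # ~416: the "400 or so" omitted from analysis
    acted = round(noticed * 2 / 3)       # ~1,456 took some type of action
    direct = round(acted * 0.30)         # 30%+ of those who acted spoke to the person
    formal = round(acted * 0.21)         # ~21% of interventions became formal proceedings

    print(f"noticed: {noticed}, set aside: {set_aside}")
    print(f"acted: {acted}, direct talks: {direct}, formal: {formal}")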
About equal numbers of those polled expressed satisfaction and dissatisfaction with the results. Over half also felt that the incident had either had no effect on their career, or had even enhanced it. Still, that would seem to leave a lot of individuals who were dissatisfied and suffered some form of negative impact from the event.
There are a lot of interesting details in the numbers, as well. For example, many of those who chose to act did so in part because they considered their institutions unlikely to do anything. Those who were satisfied with the outcomes were also more likely to have been in a situation where the problems were inadvertent.
Overall, there are some promising aspects to these results. Scientists clearly feel that their ethics compel them to intervene in cases where the potential to distort the scientific record doesn't rise to the level of actual fraud. And many of these interventions appear to end in a satisfactory manner. But there are clearly still cases where institutions don't take the issues seriously, and the scientists who try to do the right thing feel that they suffer consequences as a result.
There's no obvious way to force institutions to take scientific errors and misconduct seriously. But the institutions that do so may want to consider the evidence that this informal policing of scientific ethics takes place. Providing support and advice on how to manage these situations, which can easily devolve into conflict, could significantly improve the scientific community's ability to police itself.

August 12, 2008

What are the consequences of scientific misconduct?

Yun Xie


What happens after a scientist has been found guilty of misconduct such as plagiarism, data manipulation, or fabrication of results? Does a guilty verdict mean permanent exile from the scientific community, or is there room for forgiveness? >>>

January 23, 2008

Something rotten in the state of scientific publishing

By Jonathan M. Gitlin


There is an interesting commentary in this week's Nature [1] that takes a look at the subject of plagiarism within the scientific literature. It's certainly a contentious subject; from day one as an undergraduate, it was drilled into us that there could be no greater sin than plagiarism, and I assume most other universities are the same. However, just because it's bad doesn't mean that no one will do it, and, as we know from high-profile fraud cases like Woo Suk Hwang's, there will always be scientists out there who bend and break the rules.
These days, just about every scientific paper resides in an online database, whether it be something like arXiv or PubMed, and that means it's now much easier to scan the literature for duplications of results and text. Officially, duplicate papers aren't supposed to be a big problem; PubMed claims fewer than 1,000 instances out of more than 17 million papers (under 0.01 percent). But an anonymous survey of scientists suggests the true rate is far higher: 4.7 percent admitted to submitting the same results more than once, and 1.4 percent to plagiarizing the work of others.
The authors of the article, scientists at UT Southwestern in Texas, have been using a search engine called eTBLAST to search through scientific abstracts in the same way you might search through genome data for specific sequences. Any duplicates are then uploaded to a searchable database, Deja Vu. As might be expected, they managed to find quite a few examples of duplicate work. Out of a preliminary search of 62,000 abstracts, 421 were flagged. Some of these are papers that have been published in two languages, while others are all but identical, including the same authors, but have been submitted to different journals (a practice that is forbidden by every journal I've ever come across).
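eTBLAST's internals aren't reproduced here, but the genome-search analogy is easy to illustrate: treat each abstract as a set of overlapping word "shingles" (the text equivalent of k-mers) and flag pairs whose overlap is suspiciously high. A minimal sketch in Python, with the threshold picked arbitrarily for illustration:

    from itertools import combinations

    def shingles(text, k=5):
        # Break text into overlapping k-word chunks, the text analog of k-mers
        words = text.lower().split()
        return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

    def jaccard(a, b):
        # Overlap between two shingle sets: 0.0 (disjoint) to 1.0 (identical)
        return len(a & b) / len(a | b) if a or b else 0.0

    def flag_duplicates(abstracts, threshold=0.5):
        # abstracts: {paper_id: abstract_text}; yields suspiciously similar pairs
        sets = {pid: shingles(text) for pid, text in abstracts.items()}
        for p1, p2 in combinations(sets, 2):
            score = jaccard(sets[p1], sets[p2])
            if score >= threshold:
                yield p1, p2, score

Production systems normalize more aggressively (stemming, stop-word removal) and use inverted indexes to avoid comparing every pair, but the flagging logic amounts to this.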
The article also looks at the nationalities behind such duplicate work; both China and Japan appear twice as often as their publication output suggests they ought to. This may be in part a language issue, as one Turkish academic involved in an identified plagiarism case has claimed (subscription only): "For those of us whose mother tongue is not English, using beautiful sentences from other studies on the same subject in our introductions is not unusual." Unfortunately, in most of these cases, the copying goes well beyond individual sentences.
Although plagiarism is inexcusable, it can perhaps be said to be explainable. An academic's career depends upon their publication record: it's used to evaluate their performance for tenure, job applications, and funding, and entire departments are rated on their publications. Much of this evaluation turns on the ranking, or impact factor, of the journals in which those publications appear. That impact factor is calculated by Thomson ISI (the makers of EndNote), and the way it is calculated has been criticized in the past.
Now that criticism has been renewed, following the publication of an editorial in the Journal of Cell Biology [2]. The authors of that editorial went as far as buying the data that Thomson uses to calculate impact factors, whereupon they found that they couldn't arrive at the same numbers. Thomson has responded to the editorial, and things have been going back and forth since then. A long time ago, I wrote about a proposed alternative to Thomson's impact factors using Google's PageRank algorithm, but I must confess I've heard nothing more on that subject since then. Perhaps it's time for a renewed interest?
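For anyone curious what that proposal would look like, PageRank transfers to citations quite naturally: build a directed graph of which journals cite which, and let a citation from a highly ranked journal count for more than one from an obscure journal. A toy sketch with an invented citation graph, not a description of any deployed metric:

    def pagerank(links, damping=0.85, iterations=50):
        # links maps each journal to a list of journals it cites; a citation
        # from a high-ranking journal transfers more weight than one from a
        # low-ranking journal.
        nodes = list(links)
        n = len(nodes)
        rank = {node: 1.0 / n for node in nodes}
        for _ in range(iterations):
            new = {node: (1.0 - damping) / n for node in nodes}
            for src, cited in links.items():
                if cited:
                    share = damping * rank[src] / len(cited)
                    for dst in cited:
                        new[dst] += share
                else:  # a journal citing nothing spreads its weight evenly
                    for node in nodes:
                        new[node] += damping * rank[src] / n
            rank = new
        return rank

    # Invented citation graph, purely for illustration
    citations = {
        "J. Cell Biol.": ["Nature", "Cell"],
        "Nature": ["Cell"],
        "Cell": ["Nature", "J. Cell Biol."],
    }
    print(pagerank(citations))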
[1] Nature, 2008. DOI: 10.1038/451397a

August 8, 2007

Plagiarism and falsified data slip into the scientific literature: a report

By John Timmer

The challenges of scientific integrity 
Scientific progress is conveyed primarily through peer-reviewed publications. These publications are the primary source of information for everyone involved in scientific research, allowing them to understand the current scientific models and consensus and making them aware of new ideas and new techniques that may influence the work they do. Because of this central role, the integrity of the peer review process is essential. When misinformation makes its way into the literature, it may not only influence career advancement and funding decisions; it can actually shape which experiments get done and how they are interpreted. Bad information can also cause researchers to waste time in fruitless attempts to replicate results that never actually existed. >>>

December 6, 2006

Trolling the arXiv for plagiarism

John Timmer
In a subscription-only report on an upcoming conference presentation, Nature spills the beans on what may be our best handle yet on plagiarism in the world of academic science. Most research into this area has been limited by the inaccessibility of many of the peer-reviewed journals, which require subscription access. As such, it's hard to build a global picture of the literature. In physics and astronomy, however, many publications appear in the arXiv database, which typically hosts them in advance of publication.>>>
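That openness is what makes a global sweep feasible: arXiv exposes its metadata through a public Atom API, so pulling abstracts for comparison takes only a few lines. A minimal sketch (the query and the similarity cutoff are arbitrary illustrations, not values from the study):

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET
    from difflib import SequenceMatcher

    ATOM = "{http://www.w3.org/2005/Atom}"

    def fetch_abstracts(query, max_results=20):
        # Pull abstracts from arXiv's public Atom API
        url = "http://export.arxiv.org/api/query?" + urllib.parse.urlencode(
            {"search_query": query, "start": 0, "max_results": max_results})
        with urllib.request.urlopen(url) as response:
            feed = ET.parse(response)
        return [entry.findtext(ATOM + "summary", "").strip()
                for entry in feed.iter(ATOM + "entry")]

    abstracts = fetch_abstracts("all:plagiarism")
    for i, first in enumerate(abstracts):
        for second in abstracts[i + 1:]:
            # ratio() is 1.0 for identical strings, near 0 for unrelated ones
            if SequenceMatcher(None, first, second).ratio() > 0.8:
                print("suspiciously similar pair of abstracts")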
