Is the pressure of the publish-or-perish mentality driving more researchers to commit misconduct? By Tia Ghose  
After six articles from a single research group—the laboratory of Naoki Mori at the University of the Ryukyus in Japan—were retracted from Infection and Immunity earlier this year, Editor-in-Chief Ferric Fang did some soul searching. He and Arturo Casadevall, editor-in-chief of the American Society for Microbiology journal mBio and Fang’s long-time friend and colleague, decided to explore the issue more deeply in an editorial published this week (August 8) in Infection and Immunity.
Fang, a bacteriologist at the University of Washington, recently talked with The Scientist about the rising number of retractions, why high-profile journals may have more retractions, and what pressures lead some scientists to fudge their data.
The Scientist: Tell me a little more about the retractions in the Infection and Immunity articles.
Ferric Fang: [An investigation by the investigator’s institution found that] gel pictures had been cut and pasted, and then misrepresented to be different things. We reviewed all the manuscripts and came to the conclusion that the institution was correct. At this point we notified the investigator of our findings and we invited him to reply and try to explain the findings. Through this discussion, we reached our conclusion that in fact there had been inappropriate manipulation of these figures.
This led us to do some soul searching about why misconduct occurs and whether retractions are really all there is to it—and they’re pretty rare—or whether there’s a lot more misconduct going on, and retractions are the tip of the iceberg. And I’m sorry to say I’ve come more or less to the latter conclusion.
TS: In your editorial, you note that retractions are on the rise. Why is that, and is there any way to reverse the trend?
FF: I think it behooves scientists to take a look at the way we have organized the scientific community and the kinds of pressure we put on scientists. We have a situation now where people’s careers are on the line, it’s very difficult to get funding, and getting funding is dependent on publication. Scientists are human beings, and if we put them under extraordinary pressures, they may in some cases yield to bad behavior.
TS: You also developed the “retraction index,” a measure of a given journal’s retraction rate, which showed the rate of retraction was positively correlated with the impact factor of the journal. Why do you think that is?
FF: The idea to look at the correlation between the number of retractions and journal impact factor was first suggested by my co-author, Arturo Casadevall. One of the reasons we devised this retraction index is the idea that the pressure to get papers into prestigious journals may be a driving force in encouraging people to engage in misconduct. I’m not excusing the behavior by any means. But I know of cases, for example, where scientists have committed misconduct because, if they weren’t successful in their research, they would lose their jobs and might be deported from the country. These are extraordinary pressures that are being put on people. I don’t think it’s going to bring out the best science—it’s going to discourage a lot of things we want to have in science, like people feeling free to explore and take chances.
TS: Is it possible that more people are looking at those top-tier journals, so mistakes are simply caught more often?
FF: That’s certainly a possibility. Extraordinary claims require a higher bar before the scientific community accepts them, and I think some of the work that’s published in the glamour-mag journals—Science, Nature, Cell—is in those journals because it’s sensational: things like the arsenic-using bacterium, for example, or the novel murine virus that was associated with chronic fatigue syndrome. These claims, because they have such enormous implications and because they’re so sensational, are going to be subjected to a very high level of scrutiny. If the same claim were made in an obscure journal, it might take a longer time [to] attract attention.
TS: Reviewers are the main route to catch misconduct before publication, but retractions are on the rise. Is there a better system?
FF: I don’t know that there is a better system… We’ve had a number of times where questions have been raised about whether data are fishy or not, and we haven’t been able to conclusively establish that. And you don’t have access to the primary data, right? You don’t have the lab notebook, you’re not there at the bench when the person is doing that experiment.
Reviewers may call into question certain observations, but if you have a single lane in a gel that’s beautifully spliced in but is actually lifted from another paper in another field, from the same lab four years earlier in a completely different journal, it will just take dumb luck for the reviewer to realize that.
TS: What if people just submitted their raw data when they submitted a paper?
FF: I think it would make the job of reviewing far more challenging. But I don’t think even that can completely solve the problem. You don’t have any way of knowing that what is sent to you is really complete or accurate. If somebody is bound and determined to commit misconduct, they’re going to be very difficult to detect.
F. Fang, A. Casadevall, “Retracted science and the retraction index,” Infection and Immunity, doi:10.1128/IAI.05661-11, 2011.