
August 9, 2014

Some thoughts about the suicide of Yoshiki Sasai - Scientific American (Doing Good Science)

In the previous post I suggested that it's a mistake to try to understand scientific activity (including misconduct and culpable mistakes) by focusing on individual scientists, individual choices, and individual responsibility without also considering the larger community of scientists and the social structures it creates and maintains. That post was where I landed after thinking about what was bugging me about the news coverage and discussions about the recent suicide of Yoshiki Sasai, deputy director of the Riken Center for Developmental Biology in Kobe, Japan, and coauthor of retracted papers on STAP cells.
I went toward teasing out the larger, unproductive pattern I saw, on the theory that trying a more productive pattern might help scientific communities do better going forward.
But this also means I didn’t say much about my particular response to Sasai’s suicide and the circumstances around it. I’m going to try to do that here, and I’m not going to try to fit every piece of my response into a larger pattern or path forward.
The situation in a nutshell:
Yoshiki Sasai worked with Haruko Obokata at the Riken Center on "stimulus-triggered acquisition of pluripotency" (STAP), a method by which exposing normal cells to a stress (like a mild acid) supposedly gave rise to pluripotent stem cells. It's hard to know how closely they worked together on this; in the papers published on STAP, Obokata was the lead author and Sasai was a coauthor. It's worth noting that Obokata, an up-and-coming researcher, was some 20 years younger than Sasai. Sasai was a more senior scientist, serving in a leadership position at the Riken Center and as Obokata's supervisor there.
The papers were published in a high-impact journal (Nature) and got quite a lot of attention. But then the findings came into question. Other researchers trying to reproduce the results reported in the papers couldn't. One of the images in the papers seemed to be a duplicate of another, which was fishy. Nature investigated; Riken investigated; the papers were retracted; Obokata continued to defend the papers and to deny any wrongdoing.
Meanwhile, a Riken investigation committee said “Sasai bore heavy responsibility for not confirming data for the STAP study and for Obokata’s misconduct”. This apparently had a heavy impact on Sasai:
Sasai's colleagues at Riken said he had been receiving mental counseling since the scandal surrounding the papers on STAP (stimulus-triggered acquisition of pluripotency) cells, which were lead-authored by Obokata, came to light earlier this year.
Kagaya [head of public relations at Riken] added that Sasai was hospitalized for nearly a month in March due to psychological stress related to the scandal, but that he “recovered and had not been hospitalized since.”
Finally, Sasai hanged himself in a Riken stairwell. One of the notes he left, addressed to Obokata, urged her to reproduce the STAP findings.
So, what is my response to all this?
I think it’s good when scientists take their responsibilities seriously, including the responsibility to provide good advice to junior colleagues.
I also think it's good when scientists can recognize the limits of that responsibility. You can give very, very good advice — and explain with great clarity why it's good advice — but the person you're giving it to may still choose to do something else. It can't be your responsibility to control another autonomous person's actions.
I think trust is a crucial part of any supervisory or collaborative relationship. I think it’s good to be able to interact with coworkers with the presumption of trust.
I think it’s awful that it’s so hard to tell which people are not worthy of our trust before they’ve taken advantage of our trust to do something bad.
Finding the right balance between being hands-on and giving space is a challenge in the best of supervisory or mentoring relationships.
Bringing an important discovery — one with the potential to enable lots of research that could ultimately help lots of people — to one's scientific peers and to the public must feel amazing. Even if the scientific community didn't judge retraction so harshly, I imagine that having to say, "We jumped the gun on the 'discovery' we told you about" would not feel good.
The danger of having your research center’s reputation tied to an important discovery is what happens if that discovery doesn’t hold up, whether because of misconduct or mistakes. And either way, this means that lots of hard work that is important in the building of the shared body of scientific knowledge (and lots of people doing that hard work) can become invisible.
Maybe it would be good to value that work on its own merits, independent of whether anyone else judged it important or newsworthy. Maybe we need to rethink the “big discoveries” and “important discoverers” way of thinking about what makes scientific work or a research center good.
Figuring out why something went wrong is important. When the something that went wrong includes people making choices, though, this always seems to come down to assigning blame. I feel like that’s the wrong place to stop.
I feel like investigations of results that don't hold up, including investigations that turn up misconduct, should grapple with the question of how we can use what was found to fix what went wrong. Instead of just asking, "Whose fault was this?" why not ask, "How can we address the harm? What can we learn that will help us avoid this problem in the future?"
I think it’s a problem when a particular work environment makes the people in it anxious all the time.
I think it’s a problem when being careful feels like an unacceptable risk because it slows you down. I think it’s a problem when being first feels more important than being sure.
I think it’s a problem when a mistake of judgment feels so big that you can’t imagine a way forward from it. So disastrous that you can’t learn something useful from it. So monumental that it makes you feel like not existing.
I feel like those of us who are still here have a responsibility to pay attention.
We have a responsibility to think about the impacts of the ways science is done, valued, and celebrated on the human beings who are doing science — and not just on the strongest of those human beings, but also on the ones who may be more vulnerable.
We have a responsibility to try to learn something from this.
I don’t think what we should learn is not to trust, but how to be better at balancing trust and accountability.
I don’t think what we should learn is not to take the responsibilities of oversight seriously, but to put them in perspective and to mobilize more people in the community to provide more support in oversight and mentoring.
Can we learn enough to shift away from the Important New Discovery model of how we value scientific contributions? Can we learn enough that cooperation overtakes competition, that building the new knowledge together and making sure it holds up is more important than slapping someone’s name on it? I don’t know.
I do know that, if the pressures of the scientific career landscape are harder to navigate for people with consciences and easier to navigate for people without consciences, it will be a problem for all of us.

April 25, 2012

Journal Publishers in China Vow to Clamp Down on Academic Fraud - SCIENTIFIC AMERICAN

By David Cyranoski of Nature magazine
The China Association for Science and Technology (CAST) in Beijing has taken the lead among the country's publishers in trying to clamp down on academic misconduct. This month, it issued a declaration from the 1,050 journals it oversees -- part of increasingly aggressive nationwide efforts to purge China's corpulent scientific publishing industry and bring its home-grown journals, in both English and Chinese, up to international standards.
In the declaration, journal editors in chief and affiliated society presidents commit to following CAST guidelines issued in 2009. The document defines many types of fraud and lists possible penalties for miscreant authors -- from written warnings to blacklisting or informing home institutions and funding agencies about the misconduct. Reviewers who abuse their privilege by, for example, plagiarizing an article, can also face blacklisting and public disclosure.
That is a step in the right direction, says Chun-Hua Yan, associate editor-in-chief of the CAST-administered Journal of Rare Earths, based in Beijing. Yan says that many editors had not been aware that some subtle forms of wrongdoing -- such as favoring papers on the basis of personal relations or offering honorary authorship -- were types of misconduct. "There are some soft or grey areas. These are now more clear to all the editors," he says. Suning You, president of the Chinese Medical Association Publishing House in Beijing, which has 126 journals administered by CAST, agrees. "The declaration will purify the academic environment to create first-class medical journals, thus achieving social and economic benefits," he says.
Clampdown on misconduct
China's academia and government alike have taken measures to curb misconduct in recent years, with institutions such as Zhejiang University in Hangzhou taking the lead (see Nature 481, 134-136; 2012). The CAST declaration itself follows the announcement of rules from China's education ministry that require universities to monitor misconduct closely (see Nature 483, 378-379; 2012).
The country's roughly 5,300 home-grown journals have been a receptacle for much of the research that has resulted from misconduct. Two years ago, the government vowed to get rid of the most problematic publications (see Nature 467, 261; 2010), but that weeding process hasn't happened yet.
Yan says that the latest declaration will put pressure on journals to fall in line. "Many are just commercial journals, just there to make money," he says. "We cannot make an announcement that 'these are bad journals' but we can show the right way to publish."
A stronger incentive -- money -- might force the issue. According to Yan, China's finance ministry is starting a program that will spend 100 million renminbi (US$16 million) per year to improve journals. By the end of 2012, a committee will rank the country's publications into three tiers on the basis of their international and Chinese impact factors and other measures of international influence, such as the number of overseas subscriptions and the number of foreign editorial-board members. Journals ranked in the first tier will get a bonus of 100,000 renminbi per year, and those in the second, 50,000 renminbi. Third-tier publications will get nothing.
Yan says that the money could as much as double his journal's current budget, and allow the publication to waive publishing fees for top papers, train young researchers in how to write scientific papers, invite international advisory-board members to China to discuss possible improvements and enhance software for electronic submission and review systems. He hopes that some Chinese-language journals will become internationally relevant, "followed by scientists around the world".
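Purely as an illustration, here is a minimal sketch of the bonus scheme described above; the bonus amounts come from the article, while the journal names and tier assignments are invented:

```python
# Bonus scheme described above: first-tier journals receive 100,000
# renminbi per year, second-tier journals 50,000, third-tier journals nothing.
TIER_BONUS_RMB = {1: 100_000, 2: 50_000, 3: 0}

def annual_bonus(tier: int) -> int:
    """Return the yearly bonus (in renminbi) for a journal's tier."""
    return TIER_BONUS_RMB[tier]

# Hypothetical journals and tier assignments, purely for illustration.
journals = {"Journal A": 1, "Journal B": 2, "Journal C": 3}
for name, tier in journals.items():
    print(f"{name} (tier {tier}): {annual_bonus(tier):,} RMB/year")
```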
But there are skeptics. Cong Cao, a science-policy researcher specializing in China at the University of Nottingham, UK, says that neither the extra funding nor the editors' declaration will have much of an impact. China's 5,300 journals account for roughly one-third of the world's science and technology journals and, by Cao's estimate, publish around 600,000 papers per year. That, he says, "represents a huge business". The journals attract "those who have to fill institutionally set publication requirements", adds Cao. "The real question that China's scientific leadership as well as scientific publishers have to consider is: does China really need that many journals in the first place?"
This article is reproduced with permission from the magazine Nature. The article was first published on April 25, 2012.

December 8, 2010

Self-plagiarism case prompts calls for agencies to tighten rules - SCIENTIFIC AMERICAN

Eugenie Samuel Reich - Nature 468, 745 (2010)

Is plagiarism a sin if the duplicated material is one's own? Self-plagiarism may seem a smaller infraction than stealing another author's work, but the practice is under increasing scrutiny, as the eruption two weeks ago of a long-standing controversy at Queen's University in Kingston, Canada, makes clear.

Colleagues of Reginald Smith, an emeritus professor of mechanical and materials engineering at Queen's, say that up to 20 of Smith's papers contain material copied without acknowledgment from previous publications. University officials first learned of the duplications in 2005, and they eventually led to an investigation by the Natural Sciences and Engineering Research Council (NSERC), which funded some of Smith's work, including experiments on board the U.S. space shuttles. Although Smith avoided censure for research misconduct, three papers were subsequently retracted by the Annals of the New York Academy of Sciences and one by the Journal of Materials Processing Technology. The situation was recently made public in news reports and has led to calls for Canada's funding agencies to be given stronger powers to discipline researchers who engage in the practice.

"He was a very good scientist, but something happened and he got into this business of duplicating papers," says Chris Pickles, a metallurgist at Queen's who raised concerns about Smith's publication practices after spotting some duplications under Smith's name while searching an online database. Smith referred a request for comment to his lawyer, Ken Clark of law firm Aird and Berlis in Toronto, Canada, who notes that many of the republications duplicated material from conference proceedings, which in an earlier epoch would not usually have been published. He also notes that Smith is retired, and does not stand to gain financially from his republications.

Many researchers say that republication without citation violates the premise that each scientific paper should be an original contribution. It can also serve to falsely inflate a researcher's CV by suggesting a higher level of productivity. And although the repetition of the methods section of a paper is not necessarily considered inappropriate by the scientific community, "we would expect that results, discussion and the abstract present novel results," says Harold Garner, a bioinformatician at Virginia Polytechnic Institute and State University in Blacksburg. Garner's research group used an automated software tool to check the biomedical literature for duplicated text, and identified more than 79,000 pairs of article abstracts and titles containing duplicated wording. He says work on the database of partly duplicated articles--called Déjà vu--has led to close to 100 retractions by journal editors who found the reuse improper. An analysis by Garner, in press at Urologic Oncology, shows that while the total quantity of biomedical literature has risen steadily since 2000, cases of republication stopped rising after 2003 and fell sharply between 2006 and 2008 (see graph). "It actually does look like it's getting better," says Garner. "People who would ordinarily step across the line are not doing it."

He credits increased vigilance by journal editors who are using his free tool or commercially available software to check submissions for repeated text and halt dubious papers before they reach publication.
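For a rough sense of how automated duplicate-text screening can work, here is a minimal sketch using Python's standard-library difflib. This is not the actual method behind Déjà vu or the commercial tools mentioned above, and the abstracts are invented; it only illustrates flagging a suspiciously similar pair for human review:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude similarity ratio between two texts, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Invented abstracts, purely for illustration.
abstract_a = "Grain growth in aluminium alloys was measured during microgravity processing."
abstract_b = "Grain growth in aluminum alloys was measured during micro-gravity processing."

# The 0.8 threshold is arbitrary; real screening tools tune this carefully.
if similarity(abstract_a, abstract_b) > 0.8:
    print("Possible duplicated text -- flag this pair for editorial review.")
```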

NSERC's policy on integrity in research makes no specific reference to plagiarism or self-plagiarism, which has led to calls for tougher rules in the wake of the publicity over Smith's case. In the United States, the National Science Foundation (NSF) takes a strong stance on plagiarism in general, says Christine Boesz, who was inspector-general at the NSF from 1999 until 2008. "The NSF got into the plagiarism game early," she says. Numbers obtained by Nature under the US Freedom of Information Act show that, since 2007, the agency has found between 5 and 13 cases of plagiarism each year. In contrast, the U.S. Department of Health and Human Services' Office of Research Integrity (ORI), which is responsible for overseeing alleged plagiarism associated with National Institutes of Health research, has reported no cases of plagiarism of text over the past three years, but has found up to 14 scientists a year guilty of falsification or fabrication of data (see Table 1).

Ann Bradley, a spokeswoman for the ORI, says the office's working definition of plagiarism excludes minor cases. Nick Steneck, director of research ethics and integrity at the University of Michigan in Ann Arbor, says authorities worldwide should adopt a uniform misconduct policy that provides clear guidance not only on data falsification and fabrication but also on lesser ethical breaches--such as self-plagiarism.

August 17, 2010

Scientific misconduct estimated to drain millions each year - SCIENTIFIC AMERICAN

Katherine Harmon
As speculation swirls around the status of possible investigations into research by the prolific Harvard psychologist Marc Hauser, a new study drills down to figure out the true cost of scientific misconduct.
Neither Harvard nor the federal government, which has funded some of Hauser's work that has been retracted or amended, has come forward with statements about the status of the scholar's work. But in the meantime, any investigation is likely costing the university—and possibly the government—a pretty penny, according to the new work, published online August 17 in PLoS Medicine.
Scientific misconduct is defined as "fabrication, falsification or plagiarism in proposing, performing or reviewing research or in reporting research results," according to the U.S. Office of Research Integrity (ORI). A 2009 meta-analysis of misconduct studies found that about 14 percent of responding scientists reported having witnessed falsification by others—and 2 percent confessed (anonymously) to having been involved in fabrication, falsification or modification of data themselves.
An inquiry into scientific misconduct often leads to research disruption, evidence confiscation and lengthy meetings, all of which can add up quickly in terms of expenses such as faculty and staff labor. A typical case might run in the neighborhood of half a million dollars, concluded the authors of the new case study, led by Arthur Michalek of the Roswell Park Cancer Institute in Buffalo, N.Y. Taking as an example a real case from their own institution, they estimated the direct costs of that instance of misconduct to be about $525,000.
Michalek and his colleagues break down the costs into three categories: fraudulent research (grants, investments and equipment), investigation (faculty, personnel and external assistance) and remediation (loss of current or pending grants and others in the affected lab). These calculations don't take into account other potential costs (such as lawsuits and loss of future funding) and intangibles (such as loss of trust, demoralization of associates and any research conducted on the basis of fraudulent results).
In the case of the example at their institution, the researchers estimated that the most expensive aspect of the internal investigation was the demands on faculty, who spent hundreds of hours assessing the case both during and outside of formal meetings.
The new policy forum paper does not aim to be a universal measure of misconduct costs. "Our experience will likely not be wholly representative of other institutions," the researchers noted, acknowledging that their estimates thus far "amount to a 'best guess' scenario."
By their calculations, however, the 217 U.S. cases of misconduct reported to the ORI in 2009 would add up to more than $110 million each year. And the actual rate of misconduct remains uncertain, "owing largely to its clandestine nature as well as to the problem of underreporting," the researchers noted.
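As a quick back-of-the-envelope check on that figure (the arithmetic is mine; the inputs are the article's):

```python
# Inputs quoted above: ~$525,000 in direct costs per case (Michalek et al.)
# and 217 U.S. cases of misconduct reported to the ORI in 2009.
cost_per_case_usd = 525_000
cases_reported_2009 = 217

total_usd = cost_per_case_usd * cases_reported_2009
print(f"Estimated annual cost: ${total_usd:,}")  # $113,925,000, i.e. more than $110 million
```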
Steps to avoid wrongdoing in the first place—such as education, training, mentoring and monitoring—are not free either. But, Michalek and his colleagues estimate that, "the costs of these proactive activities pale in comparison to the costs of a single case of scientific misconduct."
