December 11, 2014

Breaking news and analysis from the world of science policy: Study of massive preprint archive hints at the geography of plagiarism - ScienceInsider

New analyses of the hundreds of thousands of technical manuscripts submitted to arXiv, the repository of digital preprint articles, are offering some intriguing insights into the consequences—and geography—of scientific plagiarism. It appears that copying text from other papers is more common in some nations than others, but the outcome is generally the same for authors who copy extensively: Their papers don’t get cited much.
Since its founding in 1991, arXiv has become the world's largest venue for sharing findings in physics, math, and other highly mathematical fields. It publishes hundreds of papers daily and is fast approaching its millionth submission. Anyone can send in a paper, and submissions don’t get full peer review. However, the papers do go through a quality-control process. The final check is a computer program that compares the paper's text with the text of every other paper already published on arXiv. The goal is to flag papers that have a high likelihood of having plagiarized published work.
"Text overlap" is the technical term, and sometimes it turns out to be innocent. For example, a review article might quote generously from a paper the author cites, or the author might recycle and slightly update sentences from their own previous work. The arXiv plagiarism detector gives such papers a pass. "It's a fairly sophisticated machine learning logistic classifier," says arXiv founder Paul Ginsparg, a physicist at Cornell University. "It has special ways of detecting block quotes, italicized text, text in quotation marks, as well statements of mathematical theorems, to avoid false positives."
Only when there is no obvious reason for an author to have copied significant chunks of text from already published work—particularly if that previous work is not cited and has no overlap in authorship—does the software affix a “flag” to the article, including links to the papers from which it has text overlap. That standard “is much more lenient" than those used by most scientific journals, Ginsparg says.
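arXiv's actual detector is a proprietary machine-learning classifier, but the core idea of text-overlap detection can be illustrated with a simple word n-gram ("shingle") comparison. The sketch below is purely illustrative: the 7-word shingle size and the Jaccard similarity measure are assumptions for the example, not arXiv's actual parameters.

```python
def shingles(text, n=7):
    """Split text into overlapping n-word 'shingles' (word n-grams)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(doc_a, doc_b, n=7):
    """Jaccard similarity of the two documents' shingle sets:
    |intersection| / |union|, ranging from 0.0 (no shared
    7-word phrases) to 1.0 (identical shingle sets)."""
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# A production system would additionally whitelist block quotes,
# quoted text, and theorem statements before flagging, as the
# arXiv classifier reportedly does, to avoid false positives.
```

A pair of documents sharing even one verbatim seven-word phrase scores above zero, which is why such systems need the whitelisting step described above before anything is flagged.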
To explore some of the consequences of "text reuse," Ginsparg and Cornell physics Ph.D. student Daniel Citron compared the text from each of the 757,000 articles submitted to arXiv between 1991 and 2012. The headline from that study, published Monday in the Proceedings of the National Academy of Sciences (PNAS), is that the more text a paper poaches from already published work, the less frequently that paper tends to be cited. (The full paper is also available for free on arXiv.) The study also found that text reuse is surprisingly common. After filtering out review articles and legitimate quoting, about one in 16 arXiv authors was found to have copied long phrases and sentences from their own previously published work adding up to about the same amount of text as this entire article. More worryingly, about one out of every 1000 submitting authors copied the equivalent of a paragraph's worth of text from other people's papers without citing them.
So where in the world is all this text reuse happening? Conspicuously missing from the PNAS paper is a global map of potential plagiarism. Whenever an author submits a paper to arXiv, the author declares his or her country of residence. So it should be possible to reveal which countries have the highest proportion of plagiarists. The reason no map was included, Ginsparg told ScienceInsider, is that not all the text overlap detected in their study is necessarily plagiarism.
Ginsparg did agree, however, to share arXiv’s flagging data with ScienceInsider. Since 1 August 2011, when arXiv began systematically flagging for text overlap, 106,262 authors from 151 nations have submitted a total of 301,759 articles. (Each paper can have many more co-authors.) Overall, 3.2% (9591) of the papers were flagged. It's not just papers submitted en masse by a few bad apples, either. Those flagged papers came from 6% (6737) of the submitting authors. Put another way, one out of every 16 researchers who have submitted a paper to arXiv since August 2011 has been flagged by the plagiarism detector at least once.
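The percentages quoted above follow directly from the raw counts in the flagging data; as a quick arithmetic check:

```python
# Raw counts from arXiv's flagging data (August 2011 onward),
# as reported in the article.
flagged_papers, total_papers = 9591, 301759
flagged_authors, total_authors = 6737, 106262

paper_rate = flagged_papers / total_papers      # fraction of papers flagged
author_rate = flagged_authors / total_authors   # fraction of authors flagged

print(f"{paper_rate:.1%}")    # 3.2% of papers flagged
print(f"{author_rate:.1%}")   # 6.3% of authors flagged, i.e. about 1 in 16
```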
The map above, prepared by ScienceInsider, takes a conservative approach. It shows only the incidence of flagged authors for the 57 nations with at least 100 submitted papers, to minimize distortion from small sample sizes. (In Ethiopia, for example, there are only three submitting authors and two of them have been flagged.)
Researchers from countries that submit the lion's share of arXiv papers—the United States, Canada, and a small number of industrialized countries in Europe and Asia—tend to plagiarize less often than researchers elsewhere. For example, more than 20% (38 of 186) of authors who submitted papers from Bulgaria were flagged, more than eight times the proportion from New Zealand (five of 207). In Japan, about 6% (269 of 4759) of submitting authors were flagged, compared with over 15% (164 out of 1054) from Iran.
Such disparities may be due in part to different academic cultures, Ginsparg and Citron say in their PNAS study. They chalk up scientific plagiarism to "differences in academic infrastructure and mentoring, or incentives that emphasize quantity of publication over quality."
*Correction, 11 December, 4:57 p.m.:  The map has been corrected to reflect current national boundaries.

October 25, 2014

intihal - Plagiarism in Turkey - Copy, Shake & Paste

Debora Weber-Wulff
 
I was recently invited to speak at a symposium organized by the Inter-Universities Ethics Platform and held at the Eurasian Institute of the University of Istanbul on October 17, 2014. They kindly organized two interpreters who took turns interpreting the talks given in Turkish for me, and my talk into Turkish for those who needed it. Apparently, even in academic circles, English is not a common language. I will describe the talks here, as far as I was able to understand them. The conference focused on intihal, the Turkish word for plagiarism.
The deputy rector of the Istanbul University welcomed the 60-70 people present (more would come and go during the course of the day), noting that he himself is the editor of an international journal that tests articles submitted for plagiarism. They reject half of the articles submitted for this reason.
The first speaker was Hasan Yazıcı, a retired professor of rheumatology who sued the Turkish government in the European Court of Human Rights and won. He first described his case, which was recently decided (April 2014) and is available online. Since he was speaking to a room of people who had followed the case more or less closely, he did not go into details, but they are given in the judgement:
In 1997 Yazıcı had informed the Turkish Academy of Sciences that a book by a Turkish professor (I.D.) and the founder and former president of the Higher Education Council of Turkey (YÖK) entitled Mother's Book was basically a plagiarism of Dr. Spock's popular US book on rearing children, Baby and Child Care. In 2000 Yazıcı published an article about the plagiarism in the Turkish Journal of Physical Medicine and Rehabilitation and a shortened version in a Turkish daily newspaper.
In the article Yazıcı praised YÖK for establishing a committee to examine the scientific ethics of candidates for associate professorships, and proposed that YÖK start the conversation about plagiarism by asking their founder to apologize for the plagiarism in his book. In response, I.D. filed charges against Yazıcı, stating that this publication violated his personality rights. Over the following six years the case wound its way back and forth through the court system, with expert witnesses who were close colleagues of I.D. stating that they found no plagiarism in the book, but that the passages in question were "anonymous" information regarding child health and care, and that this was a handbook without bibliography or sources, not a scientific work. Yazıcı was found guilty of defamation, on the grounds that his allegations were therefore untrue, and fined. Yazıcı challenged the selection of experts, and the Court of Cassation kept referring the case back to the lower courts. Again and again close friends were appointed as experts and found no plagiarism, and thus Yazıcı was repeatedly found guilty.
Yazıcı finally gave up on the Turkish courts, paid the fine, but took his case to the European Court of Human Rights, stating that his right to freedom of expression—here, stating that he found the book to be a plagiarism—had been interfered with and that the Turkish courts had not properly dealt with the case. He noted that due to the plagiarism, the book contained outdated information on baby sleeping positions that Dr. Spock had updated in his 1998 edition but that I.D. had not changed. The European court found in its judgement that it is indeed necessary in a democratic society for persons to be able to state value judgements, which are impossible to prove either true or false. However, the court held (p. 13), there must be a sufficient factual basis to support the value judgement. In this case, the court found a sufficient factual basis for the allegations, and ordered the fine paid by Yazıcı to be refunded and his costs for the court cases to be reimbursed.
Yazıcı made the point in his speech that the extent of plagiarism in a country correlates strongly with a lack of freedom of speech. He sees Turkey in the same league as China on this aspect. He noted that everyone knows about plagiarism, but no one speaks about it.

In order to decrease plagiarism we have to speak about plagiarism. He stated in later discussions that it is imperative that Turkish judges understand what plagiarism is, particularly because Turkey now has a law declaring plagiarism a crime punishable by prison, yet it is still not clear what exactly constitutes plagiarism.
The second talk, on "Plagiarism and Philosophy of Law," was given by Sevtap Metin. She described the Turkish legal situation, in particular the law of intellectual property. She noted that there are many sanctions for plagiarism; for example, academics can be cut off from their university jobs or from funding. She also described the application process for a professorship and noted that the committees are currently not doing their job of vetting the publications provided by the applicants. The reason is that if they note a suspicion of plagiarism that they cannot prove, they can be sued for defamation of character by the applicant. This discourages people from looking closely at publication lists. However, with Yazıcı recently winning his case at the European Court of Human Rights, it must now be possible to speak freely about plagiarism. Citing Kant's categorical imperative, she argued that we must not plagiarize unless we want everyone to plagiarize. And if we tell our children not to lie, but lie ourselves, they will follow our actions and not our words.
The third talk was by Mustafa Kıcalıoğlu, a former judge now retired from the Court of Cassation, on "Plagiarism in Turkish Law." He spoke about the problems that occur in plagiarism cases in which personality rights have to be weighed against intellectual property rights. He noted that Ernst Eduard Hirsch, a German legal expert who taught at the University of Ankara, was instrumental in drafting the Turkish Copyright Act. Kıcalıoğlu went into some detail on copyright and intellectual property; I noted in the discussion that plagiarism and violation of copyright are not the same thing: there is plagiarism that does not violate copyright law, and there are violations of copyright law that are not plagiarism. Kıcalıoğlu also discussed another long, drawn-out plagiarism case, of a business management professor who plagiarized on 65 out of 500 pages in a book. He was demoted from the faculty after YÖK found that he had plagiarized, and he sued YÖK, but lost. This person is now a high government official. The discussion after this talk was quite long and emotional, as many people in the audience wanted to relate a story or call for all academic institutions to take action against plagiarism.
After a lunch and tea break I photographed a fine statue of a dervish before we got into the technical part of the symposium. Altan Gürsel of TechKnowledge, the Turkey and Middle East representative of iParadigms (the company that markets Turnitin and iThenticate), spoke about that software. He first gave the definition of intihal from the Turkish Wikipedia, showed a few cases of cheating that made the news, and then launched into the standard Turnitin talk. He did note, however, that the reports have to be interpreted by an expert and cannot by themselves determine plagiarism, so it appears that my constant repetition of this point has at least been understood by the software companies themselves, if not by all users of such systems. He reported on some new features of Turnitin, for example that Excel sheets can now also be checked, and that Google Drive and Dropbox can be used for submitting work. In answering a question, he noted that YÖK now scans all dissertations handed in to Turkish universities with iThenticate, but not those from the past. They are planning to include open-access dissertations in their database in the future.
I gave my standard talk on the "Chances and Limits of Plagiarism Software," noting that software cannot determine plagiarism, it can only indicate possible plagiarism, and that there are many false positives and false negatives. During questions a number of people were perplexed that there were so many plagiarisms documented in doctoral dissertations in Germany, since dissertations need to be original research and Germany has a reputation for a solid academic tradition. They had only heard about the politicians being forced to resign, and wanted to know what was different in Germany that a politician would actually resign on the basis of plagiarism found in his dissertation. They wanted to know if judges in Germany understand plagiarism. I noted that indeed, they understand plagiarism much better than many universities and the persons suing their universities because their doctoral degrees have been rescinded. The judgements of the VG Cologne and the VG Düsseldorf are very clear and very exact in their application of the law to plagiarism cases, as are the judgements in many other cases.
After a tea break Tayfun Akgül, a professor of Electrical Engineering at the Technical University of Istanbul and a member of the Ethics and Member Conduct Committee of the IEEE, spoke on "Plagiarism in Science." Akgül is also a professional cartoonist, and his lively presentation was peppered with cartoons that kept the audience laughing and caused the interpreters to apologize for not being able to translate them. He outlined the IEEE organizations and policies for dealing with scientific misconduct on the part of its members. He spoke at length about the case of Turkish physicists who had to retract almost 70 papers from the preprint server arXiv. Nature reported on the case in 2007; the authors complained afterwards that they were just borrowing better English.
Özgür Kasapçopur, the spokesperson of the ethics committee of the Istanbul University, gave the facts and figures on the committee itself and the cases it has examined since it was set up in 2010. It has had 29 cases submitted, but determined plagiarism in only three of them.
Nuran Yıldırım spoke about YÖK and plagiarism. She is a former prefect who served on the ethics boards of both the University of Istanbul and YÖK. The Higher Education Council was established in 1981. In 1998 plagiarism was added to the cases investigated there, as plagiarism is considered a crime that can incur a sanction. However, only a two-year statute of limitations was in place. This has since been removed, and all applications for assistant professorships must be investigated by YÖK. If YÖK finds suspected plagiarism, there is a process to follow, and if plagiarism is the final decision, the person applying for a professorship is removed from the university. However, this harsh sentence has now been changed to "more reasonable punishments," whatever that means. She noted that at small universities it is hard to hold a purely local hearing, as the members of the committee investigating a case are often relatives of the accused. She had some fascinating stories, especially from the military universities, including one about a General Prof. Dr. found to have plagiarized. She also noted that people do accuse their rivals of plagiarism just to try to get them out of the way. Her final story was about someone who published a dissertation and eventually found that all of his tables and data were being used in a paper by someone else. He informed YÖK, and the second researcher defended himself by saying that he had used the same laboratory, which must have confused the results and given him the other person's data. YÖK then requested the lab notebooks from both parties; only the author of the dissertation could produce them. Since the journal-paper author could not, he was found guilty of plagiarism.
In the final round, İlhan İlkılıç, a professor of medical ethics at the University of Istanbul, on leave from the University of Mainz and a member of the German national ethics committee, presented a to-do list that included setting out better definitions of plagiarism and academic misconduct and finding ways of objectively looking at plagiarism without personal hostilities or ideologies getting in the way. Discussion about plagiarism is essential, even if it won't prevent plagiarism or scientific misconduct from happening.   
Sadat Murat, chairman of the Turkish national ethics committee, spoke about its work, which is to investigate complaints about state servants. Exempt from this, however, are low-level state servants, as well as the top-ranking politicians. They only report on violations; they cannot impose sanctions. They also try to disseminate an ethical culture in Turkey by providing ethics training.
I especially want to thank the interpreters for their work. Any errors here are mine for not paying close enough attention; they did a great job of permitting me to understand a small portion of what is happening in the area of intihal in Turkey.

October 8, 2014

Science fiction? Why the long-cherished peer-review system is under attack

Mathematicians have been studying the number pi for thousands of years, so it might seem startling to learn that a gentleman in Athens, Wisconsin, suddenly changed its value.
His revised pi is a bit bigger than the one everyone else uses. And it stops after 12 digits instead of running on forever.
To any trained mathematician, this isn’t even worth a second look. It’s the work of an amateur who doesn’t understand pi, the ratio of the circumference of a circle to the diameter.
But his paper, published in a fake science journal that will print anything for a fee, now shows up in Google Scholar, including footnotes citing himself, himself, himself and himself again. Oh, and Pythagoras — once.
Google Scholar is a search engine that looks for scientific articles and theses, the meat and potatoes of scientific literature.
But Google Scholar is not discerning. It also turns up a new paper from an Egyptian engineer who decided to rewrite Einstein and claims to have discovered the nature of dark energy at the same time.
Again, that’s an eye-roller for anyone in the physics business, yet there it is in a search of scholarly journals, muddying up the intellectual waters.
It wasn’t supposed to happen this way, of course.
The peer review system was designed to ensure that before research is published, it’s of good quality, whether everyone agrees with its conclusions or not.
Under the system, a researcher who makes a discovery sends it to a science journal to publish. The journal sends it to a group of experts in the field to check it out to see whether the work is well done. If the peers approve, it is published — often with changes requested by these experts.
But peer review is under assault, from both the outside and the inside.
Thousands of “predatory” publishers that imitate science journals are undermining scientists’ ability to distinguish good from bad.
The predators skip peer review. A series of tests by the Citizen, Science magazine and others found that many predators will print anything verbatim, allowing a flood of low-quality work to appear in online journals.
Researchers in Canada often maintain they are skilled enough to recognize and ignore worms in the scientific apple. But there’s a harder question: if simply godawful papers are getting published, what about the whole blurry spectrum ranging from substandard through so-so to slightly plagiarized?
It’s not just an academic question. Our daily use of technology, and even the medical treatments we receive, depend on the ability of researchers to trade information honestly.
Many academics see the rise of a two-tier science system, a stronger one found in developed countries, and researchers with fewer resources in the developing world who depend on predators. A Turkish newspaper recently wrote that whoever pays the predators “climbs the career steps two by two.”
There are also small open-access journals struggling to set themselves up with insufficient resources. One in Ottawa claims to publish 23 international journals, using freelancers, from a single room in Billings Bridge. Another operates from a house in Burnaby, B.C. Staff who work there won’t reveal their last names.
Scam scientific conferences are also a growing assault on the peer review process. These events allow anyone to register, for a fee, to present a paper on anything, good or bad, on topic or off.
The speaker can then go to the dean of his or her faculty and say, “Look, I’m the keynote speaker at a conference in Paris!” It seems like a career-boosting move, especially if the name of the conference mimics that of a real scientific gathering.
It’s all about blurring.
WASET, the World Association of Science, Engineering and Technology, organizes conferences somewhere in the world every few days, but they’re low-quality affairs at which anyone can register a paper on anything.
To dress it up, WASET offers conferences with names the same or similar to real conferences organized by real scientific groups. Recently, WASET put on an International Conference on Educational Data Mining. The real version belongs to the International Educational Data Mining Society.
In October alone, the Turkey-based WASET has set up conferences in Bali, Brussels, Osaka, London, Paris, Dubai, Barcelona and Istanbul. Each will cover dozens of fields — aviation, agriculture, business management, linguistics, mathematics, pedagogy, biology, law, medicine, computer engineering, nanoscience, history, civil engineering, geology, chemistry, ecology and on and on.
Anyone who presents a paper pays 500 euros, roughly $700 (and 100 euros more if they want the paper published).
It is tough to shut down groups like WASET. It is not against the law to stage a conference with the same name as another, and even if authorities were to go after such operators, they are hard to identify and could easily pop up under a new name a week later. WASET has scheduled 103 conferences for next year, mostly in Western Europe, with two in Canada. In most cases, universities pay for the travel.
Canadian academics like to say that we don’t get fooled by scams. But at the University of King’s College in Halifax, science historian Gordon McOuat noted that his university has had to cancel travel plans of faculty wanting to attend scam conferences.
Jeffrey Beall of the University of Colorado found a new publisher of 107 online journals — sprintjournals.com — that advertises an affiliation with the established Elsevier publishing group. There’s no actual connection, but anyone reading Sprint’s website will see the familiar Elsevier logo, and may trust it.
Real journals are “hijacked” by impostors using the same name. So an article published in Afinidad can be either in a reputable journal from Spain or an impostor using Afinidad’s name to trick authors. This is a widespread practice. An Indian publisher has a website mimicking the legitimate BioMed Central.
The fake journal Experimental & Clinical Cardiology used to be real until it was sold and new owners took a more profitable path. It blurs its identity by using the same name as before and by claiming to be the official journal of a very real and legitimate cardiology society, although there is no connection.
Legitimate peer review is far from dead. But it has a nasty cough that isn’t clearing up.
The papers on pi and dark energy are just two of many that show up in academic index services.
“For example, Google Scholar does not screen for quality, and it indexes many articles that contain pseudo-science in them,” Beall says.
“(It) is the world’s most popular index for scholarly content. This index and many other abstracting and indexing services do not sufficiently screen for quality and allow much scientific junk to be included in their databases. This affects the cumulative nature of science, where new research builds on the research already recorded in the academic record.”
And the predators have other ways to gain acceptance.
“Some scholarly publishing organizations do not screen applicants for membership. Thus some predatory publishers apply and are granted membership. Then the predatory publishers use these memberships to argue that they are legitimate publishers.”
Even the top ranks of peer review have their problems, though.
Two U.S. psychologists ran a test of peer review back in 1982, taking 12 papers that had been published in high-ranking psychology journals and re-submitting them to the same journals with changed titles and different authors’ names.
All the originals came from famous institutions. The re-submitted ones carried names of fictional authors and institutions, some on the hippy-dippy side: the “Tri-Valley Center for Human Potential,” for instance.
The same body of work should be accepted again, the two researchers felt. It wasn’t. Only three of 38 reviewers and editors spotted the duplicates. That allowed nine papers to continue, and eight were turned down for allegedly poor quality — evidence of bias, the authors concluded, against academic institutions with lower pedigree.
(Giving preference to well-known academics is known as the Matthew Effect, from a passage in the Gospel of Matthew: “For to everyone who has, more shall be given…”) The journal Science flagged it as a problem as far back as 1968.
Yes, but people are smarter now, right?
Not so fast. In 2006, the editor of the British Medical Journal (BMJ), Richard Smith, listed problems with peer review. It’s inconsistent, he found: two reviewers of the same paper can come to “laughably” opposite conclusions. Sometimes it’s dishonest (as when a reviewer rejected a paper but stole chunks of it for his own work). It rarely catches fraud. And it’s tilted in favour of male researchers.
Smith, who was also chief executive of the BMJ Publishing Group, calls peer review “little better than tossing a coin” and “a flawed process, full of easily identified defects with little evidence that it works.”
And he cites a colleague with similar misgivings: “That is why Robbie Fox, the great 20th century editor of the Lancet, who was no admirer of peer review, wondered whether anybody would notice if he were to swap the piles marked ‘publish’ and ‘reject.'”
Its nature, he writes, is impossible to define precisely. “Peer review is thus like poetry, love, or justice.” (A side note: Smith rented a 15th-century palazzo in Venice to write this as part of a longer analysis, and he clearly had the arts on his mind, comparing a medical study at one stage with an altarpiece by Tintoretto.)
Still, Smith notes, the scientific establishment believes in peer review, and concludes wryly: “How odd that science should be rooted in belief.”
This summer the British Medical Journal published a study of how well peer review worked in 93 recent medical trials.
It explained: “Despite the widespread use of peer review little is known about its impact on the quality of reporting of published research articles.”
It concluded that “peer reviewers often fail to detect important deficiencies in the reporting of the methods and results of randomized trials,” and they “requested relatively few changes for reporting of trial methods and results.” Most of their suggestions were helpful but a few were not, it added.
Sometimes, even with the names removed, reviewers may recognize an author’s research because everyone works in the same field. Christine Wenneras and Agnes Wold, two Swedes analyzing peer review in Nature in 1997, wrote of the “friendship bonus” and “nepotism” that can occur.
As a widely quoted 2006 opinion piece in Nature by Charles G. Jennings, one of the journal’s former editors, noted: “Scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth.” Jennings went on to argue for harder, more quantifiable factors to make peer review more dependable — work currently under way by an international group called EQUATOR which promotes strict reporting guidelines.
Medical professor Roger Pierson of the University of Saskatchewan points to the flip side: rivalry.
“Human nature being what it is, professional jealousy and egos flare from time to time just because people don’t like each other — they’ll trash each others’ manuscripts to be spiteful,” he wrote in an email.
“There are also constant cases of people in races to claim ‘First!!!’ for whatever that’s worth,” and giving a negative or delayed review to one’s rival “can give them an advantage,” he wrote. “The science community and the university system have no real way to respond or ensure that their members are playing nicely with one another. C’est la vie.”
These days, some scientists skip the whole traditional publishing process, at least for some of their work. The Internet beckons, and they go straight to their audience, cutting out the middleman.
This is what University of Ottawa biologist Jules Blais calls “the blogification of science.” It doesn’t replace traditional journal publishing, but “this is something that we have been seeing with social media. The volume has gone way up and the quality is coming down. We have to be very careful in how we preserve our highly regarded peer-reviewed publications because we need them desperately.”
Another way to bypass peer review is to post work directly online at arXiv (pronounced “archive”), hosted by Cornell University. It takes papers in mathematics and some sciences, including physics and astronomy. The system is called “preprint,” implying that papers can go online at arXiv while awaiting peer review somewhere else. But the second stage isn’t mandatory, and there are now more than 8,000 papers a month posted on arXiv.
Still, they can’t ignore the traditional journals entirely. Careers are built there. Nature estimates academics worldwide publish more than one million papers a year.
“Everything we do is really judged on publications, and if we want grant funding (to keep a lab running), people look at your CV,” says Joyce Wilson, a virus researcher and relatively new associate professor at the University of Saskatchewan. “And if you have published well in the past they assume that you will publish well in the future, and they will give money.”
All of this worries Saskatchewan’s Pierson. “Peer review doesn’t catch many things, even patent fraud,” he says. “There is a growing literature on this one. Fraud and deceit in the halls of science have been around forever and now that careerism seems to have become more important than the search for truth that many, if not most, of us actually entered the biz to pursue…. well, things have progressed.
“Even the biggest, most prestigious journals are not immune. There have been some particularly egregious cases in the past decade. So, cutting the garbage? Perhaps not as much as we would like to believe.”
Still, he concludes that while peer review is far from perfect, “right now it’s the best we’ve got. I think that the system needs an overhaul and perhaps this issue (bogus science) is a good stimulus.”

August 9, 2014

Some thoughts about the suicide of Yoshiki Sasai - Scientific American ( Doing Good Science )

In the previous post I suggested that it’s a mistake to try to understand scientific activity (including misconduct and culpable mistakes) by focusing on individual scientists, individual choices, and individual responsibility without also considering the larger community of scientists and the social structures it creates and maintains. That post was where I landed after thinking about what was bugging me about the news coverage and discussions of the recent suicide of Yoshiki Sasai, deputy director of the Riken Center for Developmental Biology in Kobe, Japan, and coauthor of retracted papers on STAP cells.
I went toward teasing out the larger, unproductive pattern I saw, on the theory that trying a more productive pattern might help scientific communities do better going forward.
But this also means I didn’t say much about my particular response to Sasai’s suicide and the circumstances around it. I’m going to try to do that here, and I’m not going to try to fit every piece of my response into a larger pattern or path forward.
The situation in a nutshell:
Yoshiki Sasai worked with Haruko Obokata at the Riken Center on “stimulus-triggered acquisition of pluripotency”, a method by which exposing normal cells to a stress (like a mild acid) supposedly gave rise to pluripotent stem cells. It’s hard to know how closely they worked together on this; in the papers published on STAP, Obokata was the lead author and Sasai was a coauthor. It’s worth noting that Obokata, an up-and-coming researcher, was some 20 years younger than Sasai. Sasai was a more senior scientist, serving in a leadership position at the Riken Center and as Obokata’s supervisor there.
The papers were published in a high impact journal (Nature) and got quite a lot of attention. But then the findings came into question. Other researchers trying to reproduce the findings that had been reported in the papers couldn’t reproduce them. One of the images in the papers seemed to be a duplicate of another, which was fishy. Nature investigated, Riken investigated, the papers were retracted, Obokata continued to defend the papers and to deny any wrongdoing.
Meanwhile, a Riken investigation committee said “Sasai bore heavy responsibility for not confirming data for the STAP study and for Obokata’s misconduct”. This apparently had a heavy impact on Sasai:
Sasai’s colleagues at Riken said he had been receiving mental counseling since the scandal surrounding the papers on STAP, or stimulus-triggered acquisition of pluripotency, cells, which were lead-authored by Obokata, came to light earlier this year.
Kagaya [head of public relations at Riken] added that Sasai was hospitalized for nearly a month in March due to psychological stress related to the scandal, but that he “recovered and had not been hospitalized since.”
Finally, Sasai hanged himself in a Riken stairwell. One of the notes he left, addressed to Obokata, urged her to reproduce the STAP findings.
So, what is my response to all this?
I think it’s good when scientists take their responsibilities seriously, including the responsibility to provide good advice to junior colleagues.
I also think it’s good when scientists can recognize the limits of those responsibilities. You can give very, very good advice — and explain with great clarity why it’s good advice — but the person you’re giving it to may still choose to do something else. It can’t be your responsibility to control another autonomous person’s actions.
I think trust is a crucial part of any supervisory or collaborative relationship. I think it’s good to be able to interact with coworkers with the presumption of trust.
I think it’s awful that it’s so hard to tell which people are not worthy of our trust before they’ve taken advantage of our trust to do something bad.
Finding the right balance between being hands-on and giving space is a challenge in the best of supervisory or mentoring relationships.
Bringing an important discovery with the potential to enable lots of research that could ultimately help lots of people to one’s scientific peers — and to the public — must feel amazing. Even if there weren’t a harsh judgment from the scientific community for retraction, I imagine that having to say, “We jumped the gun on the ‘discovery’ we told you about” would not feel good.
The danger of having your research center’s reputation tied to an important discovery is what happens if that discovery doesn’t hold up, whether because of misconduct or mistakes. And either way, this means that lots of hard work that is important in the building of the shared body of scientific knowledge (and lots of people doing that hard work) can become invisible.
Maybe it would be good to value that work on its own merits, independent of whether anyone else judged it important or newsworthy. Maybe we need to rethink the “big discoveries” and “important discoverers” way of thinking about what makes scientific work or a research center good.
Figuring out why something went wrong is important. When the something that went wrong includes people making choices, though, this always seems to come down to assigning blame. I feel like that’s the wrong place to stop.
I feel like investigations of results that don’t hold up, including investigations that turn up misconduct, should grapple with the question of how can we use what we found here to fix what went wrong? Instead of just asking, “Whose fault was this?” why not ask, “How can we address the harm? What can we learn that will help us avoid this problem in the future?”
I think it’s a problem when a particular work environment makes the people in it anxious all the time.
I think it’s a problem when being careful feels like an unacceptable risk because it slows you down. I think it’s a problem when being first feels more important than being sure.
I think it’s a problem when a mistake of judgment feels so big that you can’t imagine a way forward from it. So disastrous that you can’t learn something useful from it. So monumental that it makes you feel like not existing.
I feel like those of us who are still here have a responsibility to pay attention.
We have a responsibility to think about the impacts of the ways science is done, valued, celebrated, on the human beings who are doing science — and not just on the strongest of those human beings, but also on the ones who may be more vulnerable.
We have a responsibility to try to learn something from this.
I don’t think what we should learn is not to trust, but how to be better at balancing trust and accountability.
I don’t think what we should learn is not to take the responsibilities of oversight seriously, but to put them in perspective and to mobilize more people in the community to provide more support in oversight and mentoring.
Can we learn enough to shift away from the Important New Discovery model of how we value scientific contributions? Can we learn enough that cooperation overtakes competition, that building the new knowledge together and making sure it holds up is more important than slapping someone’s name on it? I don’t know.
I do know that, if the pressures of the scientific career landscape are harder to navigate for people with consciences and easier to navigate for people without consciences, it will be a problem for all of us.

August 8, 2014

Yoshiki Sasai: A tribute to an outstanding scientist - The Guardian

The scientific community was shocked to hear of the death earlier this week of stem cell researcher Yoshiki Sasai, who apparently committed suicide in the wake of a high profile case of scientific fraud at the RIKEN Center for Developmental Biology (CDB) in Kobe, Japan, where he had worked.
Two papers from the RIKEN CDB, co-authored by Sasai and published in the journal Nature in late January, described a simple method for converting mature cells into embryonic stem cells, called stimulus-triggered acquisition of pluripotency (STAP).
It seemed too good to be true – and it was. The findings were challenged, and other labs tried but failed to replicate the method. Lead researcher Haruko Obokata was found guilty of scientific misconduct, and in July both of the papers were retracted. Sasai himself was cleared of any involvement in the misconduct, but Obokata did the work under his supervision, and so he was criticised for oversights while the papers were being written up.
I had been working on a feature article about Sasai’s own work for Mosaic, and travelled to Japan earlier this year to visit his lab, as part of my reporting for the article. By coincidence, I arrived the day the STAP method hit the news - the Daily Telegraph had accidentally published their story about it too early - and so found myself competing with several film crews for his attention.
As a result, my visit to the lab was cut short, and I spent far less time there than had been planned, but nevertheless I managed to interview Sasai and two of his colleagues and take a look around.
The story was originally scheduled for publication on 26th August, and my editors at the Wellcome Trust have decided to go ahead and publish it on the scheduled date. They felt that it should mention these tragic events without letting them overshadow the real focus of the story, and so, apart from several small changes to the main story and the addition of a brief epilogue, it is unchanged.
I spent very little time with Sasai, but he struck me as a very proud man, and the remarkable work being done in his lab gave him every reason to be. So I do not doubt reports that, in the weeks leading up to his death, he had felt “deeply ashamed” about the STAP cell papers and the disrepute they had brought to RIKEN. During this time, an independent committee had recommended that the CDB be dismantled, and Sasai’s mental and physical health had by then suffered considerably, so I feel doubly honoured to have visited him there when I did.
Sadly, many of the news stories about his death have focused on the unfortunate circumstances that marred the last few months of his life. We would like to send our deepest condolences to Sasai’s family and friends and hope that the Mosaic story will serve as a sensitive and timely tribute to the pioneering work of an outstanding scientist.
This is an unedited version of an article I wrote for the Mosaic blog.

August 5, 2014

Researcher’s death shocks Japan - NATURE News

Yoshiki Sasai, one of Japan’s top stem-cell researchers, died this morning (5 August) in an apparent suicide. He was 52.
Sasai, who worked at the RIKEN Center for Developmental Biology (CDB) in Kobe, Japan, was famous for his ability to coax embryonic stem cells to differentiate into other cell types. In 2011, he stunned the world by mimicking an early stage in the development of the eye — a three-dimensional structure called an optic cup — in vitro, using embryonic stem cells.
But lately he had been immersed in controversy over two papers, published in Nature in January, that claimed a simple method of creating embryonic-like cells, called stimulus-triggered acquisition of pluripotency (STAP). Various problems in the papers led to a judgement of scientific misconduct for their lead author, Haruko Obokata, also of the CDB. The papers were retracted on 2 July.
Sasai, who was a co-author of both papers, was cleared of any direct involvement in the misconduct. But he was harshly criticized for a failure of oversight in helping to draft the papers. Some critics, often on the basis of unsupported conjecture, alleged deeper involvement by the CDB. An independent committee recommended on 12 June that the CDB, where Sasai was a vice-director, be dismantled. Sasai had been instrumental in launching the CDB and helped it to develop into one of the world’s premier research centres.
Just after 9 a.m., Sasai was found hanging in a stairwell of the Institute of Biomedical Research and Innovation, next to the CDB, where he also had a laboratory. He was pronounced dead just after 11 a.m., according to reports by Japanese media. A bag found at the scene contained three letters: one addressed to CDB management, one to his laboratory members and one to Obokata.
In a brief statement released this morning, RIKEN president Ryoji Noyori mourned the death of the pioneering researcher. “The world scientific community has lost an irreplaceable scientist,” he said.

July 15, 2014

Taiwan’s education minister resigns in wake of SAGE peer review scandal - Retraction Watch

Taiwan’s education minister, Chiang Wei-ling, whose name appeared on several of the 60 retracted articles by Peter Chen — apparently the architect of a peer review and citation syndicate that we were the first to report on last week — has resigned over the publishing scandal.
According to the University World News:
Chiang said in a statement that the decision to resign was made to uphold his own reputation and avoid unnecessary disturbance of the work of the education ministry, after the incident ignited a wave of public criticism.
The UWN reports that Chiang’s resignation on Monday came after Taiwan’s premier, Jiang Yi-huah, instructed the Ministry of Science and Technology to investigate the Chen case.
What’s more, according to the UWN — in news that, we humbly submit, hammers home the point of our New York Times op-ed last Friday:
The Ministry of Science said this week that it may have funded the research for 40 of Peter Chen’s questionable papers amounting to some NT$5.08 million (US$169,164), according to Lin Yi-Bing, vice-minister of science and technology.
He said in remarks released last Sunday that if Chen was found to have violated academic ethics, the science ministry would demand a return of any research funds awarded to him and bar him for life from applying for such funding.
The relationship between Chiang and Peter Chen is a bit complicated, but may hinge on the researcher’s twin brother C.W. Chen, the UWN reports.
Five of the 60 papers, written by CW Chen – Peter’s twin brother – bore Chiang’s name as a co-writer but also listed Peter Chen as one of the writers.
Chiang was CW Chen’s former thesis advisor. In a statement issued this week CW Chen acknowledged that the papers in question bore Chiang’s name without Chiang having been informed in advance because they were a continuation of research on subjects related to his thesis. “It was my decision,” CW Chen said.
He said he had also sought the opinion of his twin brother on some of the papers and therefore had listed him as a co-author but had not informed Chiang. His academic advisor and his brother had never met to discuss the papers, CW Chen said.
At an earlier press conference, CW Chen insisted that the minister did not have any links to his brother. Peter Chen and the minister had met on only two occasions, once in 2004 when CW Chen graduated from the doctoral programme at National Central University where the minister was teaching, and at a science forum.

July 10, 2014

Scholarly journal retracts 60 articles, smashes ‘peer review ring’ - The Washington Post

Every now and then a scholarly journal retracts an article because of errors or outright fraud. In academic circles, and sometimes beyond, each retraction is a big deal.
Now comes word of a journal retracting 60 articles at once.
The reason for the mass retraction is mind-blowing: A “peer review and citation ring” was apparently rigging the review process to get articles published.
You’ve heard of prostitution rings, gambling rings and extortion rings. Now there’s a “peer review ring.”
The publication is the Journal of Vibration and Control (JVC). It publishes papers with names like “Hydraulic engine mounts: a survey” and “Reduction of wheel force variations with magnetorheological devices.”
The field of acoustics covered by the journal is highly technical:
Analytical, computational and experimental studies of vibration phenomena and their control. The scope encompasses all linear and nonlinear vibration phenomena and covers topics such as: vibration and control of structures and machinery, signal analysis, aeroelasticity, neural networks, structural control and acoustics, noise and noise control, waves in solids and fluids and shock waves.
JVC is part of the SAGE group of academic publications.
Here’s how it describes its peer review process:
[The journal] operates under a conventional single-blind reviewing policy in which the reviewer’s name is always concealed from the submitting author.
All manuscripts are reviewed initially by one of the Editors and only those papers that meet the scientific and editorial standards of the journal, and fit within the aims and scope of the journal, will be sent for peer review. Generally, reviews from two independent referees are required.
An announcement from SAGE published July 8 explained what happened, albeit somewhat opaquely.
In 2013, the editor of JVC, Ali H. Nayfeh, became aware of people using “fabricated identities” to manipulate an online system called SAGE Track by which scholars review the work of other scholars prior to publication.
Attention focused on a researcher named Peter Chen of the National Pingtung University of Education (NPUE) in Taiwan and “possibly other authors at this institution.”
After a 14-month investigation, JVC determined the ring involved “aliases” and fake e-mail addresses of reviewers — up to 130 of them — in an apparently successful effort to get friendly reviews of submissions and as many articles published as possible by Chen and his friends. “On at least one occasion, the author Peter Chen reviewed his own paper under one of the aliases he created,” according to the SAGE announcement.
The statement does not explain how something like this happens. Did the ring invent names and say they were scholars? Did they use real names and pretend to be other scholars? Doesn’t anyone check on these things by, say, picking up the phone and calling the reviewer?
In any case, SAGE and Nayfeh confronted Chen to give him an “opportunity to address the accusations of misconduct,” the statement said, but were not satisfied with his responses.
In May, “NPUE informed SAGE and JVC that Peter Chen had resigned from his post on 2 February 2014.”
Each of the 60 retracted articles had at least one author and/or one reviewer “who has been implicated in the peer review” ring, said a separate notice issued by JVC.
Efforts by The Washington Post to locate and contact Chen for comment were unsuccessful.
The whole story is described in a publication called “Retraction Watch” under the headline: “SAGE Publications busts ‘peer review and citation ring.’”
“This one,” it said, “deserves a ‘wow.’”
Update: Some additional information from the SAGE statement: “As the SAGE investigation drew to a close, in May 2014 Professor Nayfeh’s retirement was announced and he resigned his position as Editor-in-Chief of JVC….Three senior editors and an additional 27 associate editors with expertise and prestige in the field have been appointed to assist with the day-to-day running of the JVC peer review process. Following Professor Nayfeh’s retirement announcement, the external senior editorial team will be responsible for independent editorial control for JVC.”
Note to readers: Thanks for pointing out my grammatical error. No excuses.
There’s a follow to this story here.

July 3, 2014

Research integrity: Cell-induced stress - NATURE News

As a much-hailed breakthrough in stem-cell science unravelled this year, many have been asking: ‘Where were the safeguards?’
It seemed almost too good to be true — and it was. Two papers that offered a major breakthrough in stem-cell biology were retracted on 2 July, mired in a controversy that has damaged the reputations of several Japanese researchers.
Haruko Obokata tearfully faces the media after she was found guilty of misconduct in April.

January 4, 2014

Guest Post: Plagiarism has been left unpunished - Copy, Shake & Paste

This guest post is from Kayhan Kantarlı, a retired professor of physics from the University of Ege in Turkey. He published a first version of the article on his blog on December 10. I edited the article somewhat and am publishing this version here with his permission, as I do not read Turkish and am unable to verify the sources. -- dww
