December 17, 2012

Top Science Scandals of 2012 - The Scientist

A widely discussed research study published this year showed that retracted papers are more likely to be withdrawn from publication because of scientific misconduct or knowingly publishing false data than because of sloppy mistakes or accidental omissions. In fact, more than 65 percent of the 2,000 or so papers studied were retracted because of poor ethical judgment. According to that report, high-impact journals have been hardest hit by the increasing rate of retractions over the past decade.
In light of these findings, researchers and other observers have proposed several initiatives to help the scientific community with its apparent honesty issues. One suggestion was the creation of a Retraction Index. Unlike the Impact Factor, which is based on a journal’s citation rate, the Retraction Index would indicate the number of retractions a journal has for every 1,000 papers published. Following suit, Adam Marcus and Ivan Oransky of the Retraction Watch blog suggested creating a Transparency Index, which could include a score for how well a journal controls its manuscript review process, including how it conducts peer review, whether supporting data are also reviewed, whether the journal uses plagiarism-detection software, and a number of other measures. Finally, the lab-services start-up Science Exchange and the open-access journal PLOS ONE have collaborated to suggest the Reproducibility Initiative, which would provide a platform for researchers to submit their studies for replication by other labs for a fee. Studies that are successfully reproduced would win a certificate of reproducibility.
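For concreteness, the arithmetic behind such an index is simple. Here is a minimal sketch in Python of the Retraction Index as described above (retractions per 1,000 papers published); the journal names and counts are invented for illustration:

```python
# Minimal sketch of the proposed Retraction Index: the number of
# retractions per 1,000 papers a journal publishes. The journal
# names and counts below are invented for illustration.

def retraction_index(retractions: int, papers_published: int) -> float:
    """Retractions per 1,000 published papers."""
    if papers_published <= 0:
        raise ValueError("journal must have published at least one paper")
    return 1000 * retractions / papers_published

journals = {
    "Journal A": (4, 5200),    # (retractions, papers published)
    "Journal B": (1, 14000),
}

for name, (retracted, published) in journals.items():
    print(f"{name}: retraction index = "
          f"{retraction_index(retracted, published):.2f}")
```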
Still, The Scientist found no shortage of material for this year’s roundup of misconduct stories. Here are a few of the most glaring examples of scientific fraud in 2012:
10 years of fabrication
This year, University of Kentucky biomedical researcher Eric Smart was discovered to have falsified or fabricated 45 figures over the course of 10 years. His research on the molecular mechanisms behind cardiovascular disease and diabetes was well regarded, despite his having used data from knockout mouse models that never existed. “Dr. Smart’s papers were highly cited in the specific caveolae/cardiovascular research field,” Philippe Frank of Thomas Jefferson University in Philadelphia told The Scientist. Smart resigned from his university post in 2011, when the investigation into his misconduct started, and agreed to exclude himself from federal grant applications for the next 7 years. He now teaches chemistry at a local school.
Record-setting retractions
Setting the record for the most publications up for retraction by a single author, Japanese anesthesiologist Yoshitaka Fujii fabricated data in a whopping 172 papers. Beginning his career in falsification in 1993 while at the Tokyo Medical and Dental University, he continued it at the University of Tsukuba, and at Toho University in Tokyo, where he was finally dismissed in February 2012. According to investigations, Fujii never actually saw the patients he reported in his clinical studies, failed to get ethical review board approval for his research, and misled co-authors, sometimes including their names without their permission or knowledge. Although the retractions are not expected to have a large impact on the field—many of them had low citation rates—Fujii used the publications to further his career, publishing a total of 249 papers.
False forensics
The results from roughly 34,000 criminal drug cases were put into question earlier this year, when forensic chemist Annie Dookhan at the now-shuttered Department of Public Health lab in Massachusetts was discovered to have falsified records on samples she was assigned to process. According to investigations, she forged signatures and recorded tests as complete that she had never performed. Suspicions may have first arisen due to her impressive output—she claimed to have processed 9,000 samples in a year, whereas colleagues averaged only around 3,000. As a result of her actions, a number of defendants may have been wrongly imprisoned, while others who may have been rightly accused were freed. This month, Boston police warned of an expected spike in crimes due to the large number of convicted drug offenders who will be released because of Dookhan’s misconduct.
Creative reviewing strategies
Rather than falsify data in order to get published, researchers took a new tack this year: writing glowing expert reviews of their own papers. When asked by journal editors to suggest names of experts in their field who were not involved in their research, at least four submitting authors supplied names paired with email addresses that forwarded back to their own inboxes. The trend, first reported by Retraction Watch, was caught by one journal editor when author Hyung-In Moon, assistant professor at Dong-A University in Busan, South Korea, offered up names of reviewers with Google and Yahoo addresses rather than university email accounts. “It should be a wake-up call to any journals that don’t have rigorous reviewer selection and screening in place,” Irene Hames, a member of the Committee on Publication Ethics, told The Chronicle of Higher Education.

December 11, 2012

Elsevier editorial system hacked, reviews faked, 11 retractions follow - Retraction Watch

For several months now, we’ve been reporting on variations on a theme: authors submitting fake email addresses for potential peer reviewers, to ensure positive reviews. In August, for example, we broke the story of Hyung-In Moon, who has now retracted 24 papers published by Informa because he managed to do his own peer review.
Now, Retraction Watch has learned that the Elsevier Editorial System (EES) was hacked sometime last month, leading to faked peer reviews and retractions — although the submitting authors don’t seem to have been at fault. As of now, eleven papers by authors in China, India, Iran, and Turkey have been retracted from three journals.
Here’s one of two identical notices that have just run in Optics & Laser Technology, for two unconnected papers: >>>

November 17, 2012

Plagiarism and Essay Mills

Sometimes as I decide what kind of papers to assign to my students, I can’t help but think about their potential to use essay mills.

Essay mills are companies whose sole purpose is to generate essays for high school and college students (in exchange for a fee, of course).  Sure, essay mills claim that the papers are meant just to help the students write their own original papers, but with names such as echeat.com, it’s pretty clear what their real purpose is.

Professors in general are very worried about essay mills and their impact on learning, but since we don't know exactly what essay mills are or what the quality of their output is, it is hard to know how worried we should be. So together with Aline Grüneisen, I decided to check it out. We ordered a typical college term paper from four different essay mills, and as the topic of the paper we chose… (surprise!) Cheating.

Here is the description of the task that we gave the four essay mills:

“When and why do people cheat? Consider the social circumstances involved in dishonesty, and provide a thoughtful response to the topic of cheating. Address various forms of cheating (personal, at work, etc.) and how each of these can be rationalized by a social culture of cheating.”

We requested a term paper for a university-level social psychology class, 12 pages long, using 15 sources (cited and referenced in a bibliography), APA style, to be completed in the next 2 weeks, which we felt was a pretty basic and conventional request. The essay mills charged us in advance, between $150 and $216 per paper. >>>

November 8, 2012

Higher education: Call for a European integrity standard - NATURE


The global market for diplomas and academic rankings has had the unintended consequence of stimulating misconduct, from data manipulation and plagiarism to sheer fraud. If incentives for integrity prove too hard to create, then at least some of the reasons for cheating must be obliterated through an acknowledgement of the problem in Europe-wide policy initiatives.
At the Second World Conference on the Right to Education this week in Brussels, we shall propose that the next ministerial communiqué of the Bologna Process in 2015 includes a clear reference to integrity as a principle. The Bologna Process is an agreement between European countries that ensures comparability in the standards and quality of higher-education qualifications.
Furthermore, the revised version of the European Standards and Guidelines for Quality Assurance, to be adopted by the 47 Bologna Process ministers in 2015, should include a standard that is linked to academic integrity (with substantive indicators), which could be added to all national and institutional quality-assurance systems.
We believe that an organization such as the Council of Europe has enforcement capabilities that can create momentum for peer pressure and encourage integrity. A standard-setting text, such as a recommendation by the Council of Ministers, or even a convention on this topic, would be timely given the deepening lack of public trust in higher-education credentials.
We do not expect that a few new international rules alone can change much. But we aim to create ways for institutions to become entrepreneurs of integrity in their own countries, as some models already exist (A. Mungiu-Pippidi and A. E. Dusu Int. J. Educ. Dev. 31, 532–546; 2011).

November 2, 2012

Scientific fraud is rife: it's time to stand up for good science - The Guardian

The way we fund and publish science encourages fraud. A forum about academic misconduct aims to find practical solutions
Peer review happens behind closed doors, with anonymous reviews only seen by editors and authors. This means we have no idea how effective it is. Photo: Alamy
Science is broken. Psychology was rocked recently by stories of academics making up data, sometimes overshadowing whole careers. And it isn't the only discipline with problems - the current record for fraudulent papers is held by anaesthesiologist Yoshitaka Fujii, with 172 faked articles.
These scandals highlight deeper cultural problems in academia. Pressure to turn out lots of high-quality publications not only promotes extreme behaviours, it normalises the little things, like the selective publication of positive novel findings – which leads to "non-significant" but possibly true findings sitting unpublished on shelves, and a lack of much needed replication studies.
Why does this matter? Science is about furthering our collective knowledge, and it happens in increments. Successive generations of scientists build upon theoretical foundations set by their predecessors. If those foundations are made of sand, though, then time and money will be wasted in the pursuit of ideas that simply aren't right.
A recent paper in the journal Proceedings of the National Academy of Sciences shows that since 1973, nearly a thousand biomedical papers have been retracted because someone cheated the system. That's a massive 67% of all biomedical retractions. And the situation is getting worse - last year, Nature reported that the rise in retraction rates has overtaken the rise in the number of papers being published.
This is happening because the entire way that we go about funding, researching and publishing science is flawed. As Chris Chambers and Petroc Sumner point out, the reasons are numerous and interconnecting:
• Pressure to publish in "high impact" journals, at all research career levels;
• Universities treat successful grant applications as outputs, upon which continued careers depend;
• Statistical analyses are hard, and sometimes researchers get it wrong;
• Journals favour positive results over null findings, even though null findings from a well conducted study are just as informative;
• The way journal articles are assessed is inconsistent and secretive, and allows statistical errors to creep through.
Problems occur at all levels in the system, and we need to stop stubbornly arguing that "it's not that bad" or that talking about it somehow damages science. The damage has already been done – now we need to start fixing it.
Chambers and Sumner argue that replication is critical to keeping science honest, and they are right. Replication is a great way to verify the results of a given study, and its widespread adoption would, in time, act as a deterrent for dodgy practices. The nature of statistics means that sometimes positive findings arise by chance, and if replications aren't published, we can't be sure that a finding wasn't simply a statistical anomaly.
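To see why unpublished replications matter, consider a quick simulation (mine, not the authors'): run many experiments in which the true effect is zero, count how many clear the conventional p < 0.05 bar by luck alone, and then see how few of those flukes would survive a single replication:

```python
# Simulation (not from the article): many two-group "experiments" in
# which the true effect is zero. About 5% come out "significant" at
# p < 0.05 by chance alone; requiring one successful replication cuts
# the surviving flukes to roughly 0.25% (0.05 * 0.05).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_EXPERIMENTS, N_SUBJECTS = 10_000, 30

def significant() -> bool:
    """One experiment with no real difference between the groups."""
    a = rng.normal(0.0, 1.0, N_SUBJECTS)
    b = rng.normal(0.0, 1.0, N_SUBJECTS)
    return stats.ttest_ind(a, b).pvalue < 0.05

first_run = [significant() for _ in range(N_EXPERIMENTS)]
replicated = [hit and significant() for hit in first_run]

print(f"false positives on first run: {np.mean(first_run):.1%}")
print(f"still positive after one replication: {np.mean(replicated):.2%}")
```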
But replication isn't enough: we need to enact practical changes at all levels in the system. The scientific process must be as open to scrutiny as possible – that means enforcing study pre-registration to deter inappropriate post-hoc statistical testing, archiving and sharing data online for others to scrutinise, and incentivising these practices (such as guaranteeing publications, regardless of findings).
The peer-review process needs to be overhauled. Currently, it happens behind closed doors, with anonymous reviews only seen by journal editors and manuscript authors. This means we have no real idea how effective peer review is – though we know it can easily be gamed. Extreme examples of fake reviewers, fake journal articles, and even fake journals have been uncovered.
More often, shoddy science and dodgy statistics are accepted for publication by reviewers with inadequate levels of expertise. Peer review must become more transparent. Journals like Frontiers already use an interactive reviewing format, with reviewers and authors discussing a paper in a real-time, forum-like setting.
A simple next step would be to make this system open and viewable by everyone, while maintaining the anonymity of the reviewers themselves. This would allow young researchers to be critical of a senior academic's paper without fear of career suicide.
On 12 November, we are hosting a session on academic misconduct at SpotOn London, Nature's conference about all things science online.
The aim of the session is to find practical solutions to these problems that science faces. It will involve scientific researchers, journalists and journal editors. We've made some suggestions here, but we want more from you. What would you like to see discussed? Do you have any ideas, opinions or solutions?
We'll take the best points and air them at the session, so speak up now! Let's stop burying our heads in the sand and stand up for good science.
Pete Etchells is a biological psychologist and Suzi Gage is a translational epidemiology PhD student. Both are at the University of Bristol

October 24, 2012

Write My Essay, Please! - The Atlantic

These days, students can hire online companies to do all their coursework, from papers to final exams. Is this ethical, or even legal?
A colleague tells the following story. A student in an undergraduate course recently submitted a truly first-rate term paper. In form, it was extremely well crafted, exhibiting a level of writing far beyond the typical undergraduate. In substance, it did a superb job of analyzing the text and offered a number of trenchant insights. It was clearly A-level work. There was only one problem: It markedly exceeded the quality of any other assignment the student had submitted all semester.
The instructor suspected foul play. She used several plagiarism-detection programs to determine if the student had cut and pasted text from another source, but each of these searches turned up nothing. So she decided to confront the student. She asked him point blank, "Did you write this, or did someone else write it for you?" The student immediately confessed. He had purchased the custom-written paper from an online essay-writing service.
The teacher believed this conduct represented a serious breach of academic ethics. The student had submitted an essay written by someone else as his own. He had not indicated that he hadn't written it. He hadn't given any credit to the essay's true author, whose name he did not know. And he was prepared to accept credit for both the essay and the course, despite the fact that he had not done the required work. The instructor severely admonished the student and gave him an F for the assignment.
But the roots of this problem go far deeper than an isolated case of ghostwriting. Essay writing has become a cottage industry premised on systematic flouting of the most basic aims of higher education. The very fact that such services exist reflects a deep and widespread misunderstanding of why colleges and universities ask students to write essays in the first place.
These services have names such as WriteMyEssay.com, College-paper.org, and Essayontime.com. Bestessays.com claims that "70% of Students use Essay Writing service at least once [sic]" and boasts that all its writers have M.A. and Ph.D. degrees. Some of these Web sites offer testimonials from satisfied customers. One crows that he received a B+ on a ghostwritten history essay he submitted at a prestigious Ivy League institution. Another marvels at the scholarly standards and dedication of the essay writers, one of whom actually made two unsolicited revisions "absolutely free." Another customer pledges, "I will use your essay writing service again, and leave the essay writing to the professionals."
Such claims raise troubling questions. First, is the use of these services a form of plagiarism? Not exactly, because plagiarism implies stealing someone else's work and calling it one's own. In this case, assuming the essay-writing services are actually providing brand-new essays, no one else's work is being stolen without consent. It is being purchased. Nevertheless, the work is being used without attribution, and the students are claiming credit for work they never did. In short, the students are cheating, not learning.
Most essay-writing services evince little or no commitment to helping their customers understand their essay topics or hone their skills as thinkers and writers. They do not ask students to jot down preliminary ideas or submit rough drafts for editing and critique. They do not even encourage them to pose questions about the subject matter. Instead, the services do all the work for them, requesting only three things: the topic, the deadline, and the payment.
Second, how do these essays manage to slip past an instructor undetected? If most institutions knew their students were using essay-writing services, they would undoubtedly subject them to disciplinary proceedings. But the use of such services can be difficult to detect, unless the instructor makes the effort to compare the content and quality of each essay with other work the student has submitted over the course of a semester. But what if the entire semester's work has been ghostwritten?
Another disturbing question concerns the writers who produce such essays. Why would someone who has earned a master's degree or Ph.D. participate in such an ethically dubious activity? One answer may be that many academics find themselves in dead-end, part-time teaching positions that pay so poorly that they cannot make ends meet, and essay writing can be quite a lucrative business. For students who can wait up to 5 days, one service charges $20 per page, but for those who need the essay within 16 hours, the price quadruples to $80 per page. The "works cited" portion of essays can generate additional revenue. The same service provides one reference per page at no additional cost, but if students feel that they need more citations, the charge is $1 per source. Some struggling academics may also view ghostwriting as a form of vengeance on an educational system that saddled them with huge debts and few prospects for a viable academic career.
A far deeper question is this: Why aren't the students who use these services crafting their own essays to begin with? Some may simply be short on time and juggling competing commitments. As the cost of college continues to escalate, more and more students need to hold down part-time or even full-time jobs. Some are balancing school with marriage, parenthood, and other family responsibilities. The sales pitch of the essay-writing services reassures students that they are learning what they need to know and merely "lack the time needed to get it down on paper."
But more disturbingly, some students may question the very value of writing term papers. After all, they may ask, how many contemporary jobs really require such archaic forms of writing? And what is the point of doing research and formulating an argument when reams of information on virtually any topic are available at the click of a button on the Internet? Some may even doubt the relevance of the whole college experience.
Here is where the real problem lies. The idea of paying someone else to do your work for you has become increasingly commonplace in our broader culture, even in the realm of writing. It is well known that many actors, athletes, politicians, and businesspeople have contracted with uncredited ghostwriters to produce their memoirs for them. There is no law against it.
At the same time, higher education has been transformed into an industry, another sphere of economic activity where goods and services are bought and sold. By this logic, a student who pays a fair market price for an essay has earned whatever grade it brings. In fact, many institutions of higher education market not the challenges provided by their course of study, but the ease with which busy students can complete it in the midst of other daily responsibilities. The shrewd shopper, it seems, invests the least time and effort necessary to get the goods.
But when students outsource their essays to third-party services, they are devaluing the very degree programs they pursue. They are making a mockery of the very idea of education by putting its trappings - assignments, grades, and degrees - ahead of real learning. They're cheating their instructors, who issue grades on the presumption that they represent a student's actual work. They are also cheating their classmates, who do invest the time and effort necessary to earn their own grades.
But ultimately, students who use essay-writing services are cheating no one more than themselves. They are depriving themselves of the opportunity to ask, "What new insights and perspectives might I gain in the process of writing this paper?" instead of "How can I check this box and get my credential?"
Some might argue that even students who use essay services are forced to learn something in order to graduate. After all, when they sit down to take exams, those who have absorbed nothing at all will be exposed. That may be true in a traditional classroom, but these days, more and more degree programs are moving online – and in response, more and more Internet-based test-taking services have sprung up. One version of "Take-my-exam.com" called AllHomework.net boasts, "Just let us know what the exam is about and we will find the right expert who will log in on your behalf, finish the exam within the time limit and get you a guaranteed grade for the exam itself."
And why stop with exams? Why not follow this path to its logical conclusion? If the entire course is online, why shouldn't students hire someone to enroll and complete all its requirements on their behalf? In fact, "Take-my-course.com" sites have already begun to appear. One site called My Math Genius promises to get customers a "guaranteed grade," with experts who will complete all assignments and "ace your final and midterm." And why should the trend toward vicarious performance stop with education? How long must we wait until some intrepid entrepreneur founds "Do-my-job.com" or "Live-my-life.com"?
Meanwhile, the proliferation of essay-writing and exam-taking services is merely a symptom of a much deeper and more pervasive disorder. For that reason, the solution is not merely tougher laws and stiffer penalties. We need a series of probing discussions in classrooms all over the country, encouraging students to reflect on the real purpose of education: the new people and ideas a student encounters, and the enlightenment that comes when an assignment truly challenges a student's heart and mind. Perhaps an essay assignment is in order?

Study Shows Studies Show Nothing - Money Morning

If you’ve ever wondered how a study can show something that just can’t be true, or how studies can completely contradict each other, we’ve figured it out. With a little help of course. After today’s Daily Reckoning, I hope you never believe another ‘study’.
Our heartfelt congratulations go out to Mathgen. A mathematics journal provisionally accepted its paper for publication.
Wait, ‘its’ paper?
Yes, that’s right. These days a computer program can write an academic paper about mathematics. Then get published in academic journals like ‘Advances in Pure Mathematics’. And you thought those computer programs dominating the stock market were smart!
No longer are your sons and daughters safe from having to compete with machines in the academic world. That’s another ‘safe’ career choice gone. So what was the paper Mathgen wrote about? Here’s the abstract, which describes it:
"Let ρ = A. Is it possible to extend isomorphisms? We show that D’ is stochastically orthogonal and trivially affine. In [10], the main result was the construction of p-Cardano, compactly Erdös, Weyl functions. This could shed important light on a conjecture of Conway-d’Alembert."
If you’re confused, that’s sort of the idea.
Only a mathematics academic could decipher that abstract, because it’s completely meaningless. You see, Mathgen creates papers by combining random nouns, verbs, numbers, symbols and the rest of it.
It spits out something that makes grammatical sense, not that you’d know it, but is completely devoid of any meaning. The formatting is said to be nice, though.
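The trick underneath is a context-free grammar stuffed with mathematical jargon (Mathgen builds on the approach of the SCIgen paper generator). Here is a toy sketch of the idea in Python, with an invented grammar far smaller than the real one:

```python
# Toy sketch of grammar-driven nonsense, in the spirit of Mathgen.
# Each non-terminal expands to a randomly chosen production until only
# literal text remains; the grammar below is invented for illustration.
import random

GRAMMAR = {
    "SENTENCE": [["We show that", "OBJECT", "is", "PROPERTY", "."],
                 ["Is it possible to", "VERB", "OBJECT", "?"],
                 ["This could shed light on a conjecture of", "NAME", "."]],
    "OBJECT":   [["every", "PROPERTY", "functor"], ["the", "NAME", "matrix"]],
    "PROPERTY": [["stochastically orthogonal"], ["trivially affine"],
                 ["compactly Erdős"]],
    "VERB":     [["extend"], ["classify"]],
    "NAME":     [["Conway-d'Alembert"], ["Cardano"], ["Weyl"]],
}

def expand(symbol: str) -> str:
    if symbol not in GRAMMAR:          # terminal: literal text
        return symbol
    text = " ".join(expand(s) for s in random.choice(GRAMMAR[symbol]))
    return text.replace(" .", ".").replace(" ?", "?")  # tidy punctuation

for _ in range(3):
    print(expand("SENTENCE"))  # grammatical-looking, meaning-free
```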
Once the paper is randomly generated and submitted for the academic journal’s review, the academics safeguarding the gates of science and knowledge read the paper and figure it must mean something.
That’s how the paper gets past the peer review process. The same process that keeps climate change science squeaky clean, by the way. Here’s what the anonymous peer reviewer wrote about Mathgen’s bizarre creation:
"For the abstract, I consider that the author can’t introduce the main idea and work of this topic specifically."
Maybe that’s because there is no main idea. No ideas at all, in fact.
Anyway, once the academics of the peer review process give the paper a once over and decide it’s fine to publish in their illustrious journal, the valuable and useful knowledge in the paper is disseminated around the academic world. That will probably never happen to Mathgen’s paper because the joke was exposed before the journal was finalised.
If all this makes you chuckle and shrug, consider that it’s the norm in academic publishing. A similar computer program managed to get an article about postmodernism published in a Duke University journal. And even when people run coherent scientific experiments (with real people) the results have a habit of being suspect too.
Many studies can’t seem to be replicated these days. Meaning, if you ran exactly the same experiment, you wouldn’t get results that confirm the study’s findings. According to one science journalist, 47 of the top 53 most important cancer studies can’t be replicated. They might be completely wrong, and yet we base modern research on the assumption they are right.
To be clear for any sceptics, the Mathgen paper is a true ‘gotcha’ moment. It wasn’t about the fact that a paper can be written by a clever computer program. It wasn’t about anything. It was complete gibberish. But it did show the fact that academic journals are…academic. Let’s hope nobody reads them.
Unfortunately, finance and economics journals actually do get mentioned in the real world. In fact, their conclusions often determine public policy. Politicians hurl studies at each other to prove their opinions.
Luckily for economists, it’s very difficult to disprove an economics study. You never know the ‘counterfactual’ — what would have happened. But if maths and science are corrupted, you’d think economics is corrupted twice over.
So the next time you read ‘a study has shown,’ you can disregard the end of the sentence.
Regards,
Nick Hubble
Editor Money Morning

October 17, 2012

How to find Plagiarism in Dissertations - Copy, Shake, and Paste

Germany is awash in another wave of discussions about plagiarism. This time it is the Minister of Education and Research, Annette Schavan. The story about plagiarism in her dissertation broke in May, and the University of Düsseldorf has been examining the case since. Today, October 17, the committee is meeting to decide on the results, but the documentation that they prepared was leaked to the press this past weekend, and the press has been in a frenzy.

And I have laryngitis and can't talk. I have journalists pleading with me to explain how the "magic" VroniPlag Wiki software works. The problem is, there is no magic software. The method used to find plagiarism in dissertations (or any other written work) is called "research". Just normal research.


But since so many people need to know how this is done, here's a crib sheet with 10 easy steps:

  1. Obtain the thesis. If you are just trying to find the dissertation of a particular person who did their doctoral work in Germany, give the German National Library a try. Type in the name and see what it comes up with. Then use the catalog of your local library (often called an OPAC, online public access catalog) or a union catalog to try and locate a copy. Most German states have a union catalog, in Berlin it is the KOBV.  If there is none in your locality, you can obtain a library card and then have the thesis sent to you using inter-library loan.
  2. Read the thesis. There is no royal road. The so-called plagiarism detection software can turn up the odd reference, but only if the sources are online. The best bet is to start reading it, and look for shifts in writing style, or places where the writing turns Spiegel-esque, or for sudden useless details, or misspellings, or just wrong content.
  3. Google. I've given up on other search engines. Just belly up to the search bar and type in three to five words from a sentence or paragraph and see what turns up. If you get a lead through Google Books, use step 1 to obtain a copy of the book. If you get lucky and the first paragraph is taken from the FAZ or the NZZ – paydirt! Don't just try one paragraph, take a few from different parts of the book. 
  4. Follow the footnotes. University teachers do this when teaching their students how to footnote, and it scares the daylights out of students when they see that the professor found out that they were just making up the footnotes. Does the reference exist? Is the thing being said found on that page? Is the whole paragraph taken from the reference with the quotation marks "forgotten"? Does the chapter in the dissertation continue on after the footnote without a further reference? Is this paragraph perhaps just a translation of the reference? 
  5. Browse the bibliography. What is the most recent source used? Is it five years older than the dissertation? In some fields, this would sound an alarm. Is there some strange or obscure literature listed? Obtain it! Do you need journal articles? Germany has a wonderful listing of the holdings of all libraries nationwide, the Zeitschriftendatenbank. It will tell you where they can be found, and many can even be delivered to your email account as a PDF for a few Euros. Many libraries also subscribe to digital libraries that can be used when sitting at the library. A walk would do you good, anyway, so get over there and have a look.
  6. Digitize. If you have already found a source plagiarized in a dissertation, chances are that there is more. Have a good look at each source, and digitize the relevant portions. Use a book scanner in the library to get a high-quality scan of the pages as a PDF. You lay the book flat under the camera, press a button, turn the page, press a button, until you are done. Experienced scanners can do over 100 pages per hour. Now use optical character recognition (OCR) software on the PDF. There are free ones like Google's Tesseract, and professional versions such as the one built into Adobe's Acrobat, OmniPage, or Abbyy FineReader.
  7. Compare. This is one of the few software systems the VroniPlag Wiki people use: a text comparison tool based on the free algorithm of Dick Grune. The tool marks identical passages in the two documents it is comparing. Put the dissertation in one side, the source in the other, and press "Texte vergleichen!". Don't forget to make a screen shot if the results turn out colorful. (A minimal do-it-yourself sketch follows this list.)
  8. Document. If you find anything, document it exactly. Page and line numbers from the dissertation, URL or page and line numbers from the source, and a copy of each. A two-column side-by-side has proved easy to understand when showing the results to others.
  9. Need help? If you have already found some nasty text parallels, drop in at the VroniPlag Wiki chat or use the drop if you want to be discreet. You might be able to interest someone in working on the case. But remember, they are all volunteers. Or you can continue on yourself, and then inform the ombud for good scientific practice at the university in question.
  10. Publish. If you feel that it is necessary to publish your results, you can either choose a wiki, such as the GuttenPlag Wiki or the VroniPlag Wiki, which makes it easier for others to help you with the documentation, or you can publish on a blog, like the SchavanPlag blog, which gives you complete control of what is published. Or you can print up a book, like Marion Soreth did in 1990 when she documented the dissertation of her colleague Elisabeth Ströker. 
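As promised in step 7: if you have no comparison tool at hand, Python's standard difflib can list long word-for-word matches between two texts. This is only a rough stand-in for the Grune-based tool, and the file names are placeholders for your own OCR output from step 6:

```python
# Rough do-it-yourself stand-in for the step 7 comparison tool: list
# passages shared verbatim between a dissertation and a suspected
# source. The real VroniPlag tool is based on Dick Grune's similarity
# tester; this sketch uses Python's standard difflib instead.
from difflib import SequenceMatcher

def shared_passages(thesis: str, source: str, min_words: int = 10):
    """Yield runs of at least min_words identical consecutive words."""
    a, b = thesis.split(), source.split()
    matcher = SequenceMatcher(a=a, b=b, autojunk=False)
    for match in matcher.get_matching_blocks():
        if match.size >= min_words:
            yield " ".join(a[match.a:match.a + match.size])

# File names are placeholders.
thesis = open("dissertation.txt", encoding="utf-8").read()
source = open("suspected_source.txt", encoding="utf-8").read()
for passage in shared_passages(thesis, source):
    print(f"--- {len(passage.split())}-word match ---")
    print(passage)
```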
All clear? If I've missed anything, please add in the comments!

October 12, 2012

Scientific fraud: a sign of the times? - The Guardian

If you read about scientific fraud in the recent news, it would seem that there is much to worry about. It's on the rise, apparently! There has been a 10-fold increase in the number of retracted papers since the 1970s, and a number of these are due to fraud or suspected fraud.

An investigation of retractions from the biomedical scientific literature database PubMed, published in the prestigious Proceedings of the National Academy of Sciences USA (PNAS), found that a whopping 63.2% of health- and life-science-related retractions were due to fraud, suspected fraud or plagiarism, with good old honest-error retractions in the sound minority. This sounds scary – especially the 'suspected fraud'. Is this just the tip of the scientific deceit iceberg? Just how many lies are lurking in the scientific literature?

Then there are the stories. Professor Marc Hauser, formerly of Harvard, was accused by the U.S. Department of Health and Human Services' Office of Research Integrity of inventing results to support his idea of a biological foundation for cognition in monkeys – specifically, whether they could recognize changes in sound patterns the way human babies can. Hauser was a popular scientist, too; he even has a best-selling book, Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong, in which he somewhat ironically argued that "policy wonks and politicians should listen more closely to our intuitions and write policy that effectively takes into account the moral voice of our species." Which worked out in his case; he was busted for scientific misconduct. His book also tells us that "our ability to detect cheaters who violate social norms is one of nature's gifts". Nature's gifts or not, his students and research assistants blew the whistle.

And this isn't just in life science, it's everywhere. Physics has its high-profile cheaters too! There is Jan Hendrik Schön, the physicist who made up his data – 26 of his papers have been retracted and he has been stripped of his doctoral degree. And then there are the cold fusion boys who, to be fair, were probably more victims of faulty equipment and of sticking to a beloved theory despite the facts than perpetrators of actual fraud. Psychology is not immune either; Dirk Smeesters, whose results seemed too good to be true, has also been caught just making stuff up.

Is no scientific discipline safe? Are scientists just incapable of keeping their modern houses clean? It has been argued that because of recent pressure on scientists to publish groundbreaking results that change the world, the temptation to commit fraud is perhaps bound to increase – implying that there was a simpler, more honest time for science. There is a dewy-eyed temptation to believe that scientists back in the day were only of high moral character and were purely duty-bound to pursue the truth. But this isn't really true. Fraud in science isn't new, just like fraud in anything isn't new.>>>

October 2, 2012

Misconduct, Not Error, Found Behind Most Journal Retractions - THE CHRONICLE

Paul Basken
Research misconduct, rather than error, is the leading cause of retractions in scientific journals, with the problem especially pronounced in more prestigious publications, a comprehensive analysis has concluded.
The analysis, described on Monday in PNAS, the Proceedings of the National Academy of Sciences, challenges previous findings that attributed most retractions to mistakes or inadvertent failures in equipment or supplies.
The PNAS finding came from a comprehensive review of more than 2,000 published retractions, including detailed investigations into the public explanations given by the retracting authors and their journals.>>>

September 22, 2012

Plagiarism in Turkey - Copy, Shake, and Paste

Some Turkish academics have been very busy the past few months, it seems. Perhaps inspired by the VroniPlag Wiki documentation in Germany, the authors have put together a massive documentation of plagiarism in Turkish theses that A. Murat Eren, a computer science Ph.D. and post-doc researcher in the United States, has published on his blog. The cases are documented with a short description of each and the committee that accepted the thesis, and some pictures with original and plagiarism.

I've translated the results section with Google Translate and tried to fix the sentences to make sense – if someone can provide a proper translation, I'll be glad to replace it:
"With such ethically problematic theses and publications by the thesis advisers themselves who are now permitted to mentor students who themselves are submitting plagiarisms, there is a new generation of academics being produced that completes a cycle.

One of the largest problems is being able to access the theses themselves. University libraries arbitrarily restrict access to theses. In order to solve this problem the Council of Higher Education needs to set up a Thesis Archive.

On the other hand, even in cases where a high level of plagiarism is found in a thesis, the legislation turns out to be a bottleneck, as no deterrent penalties are proposed. Instead, there are severe reactions [against the whistleblowers] when scientists point out the theft, so the perpetrators continue to quietly steal."
I would hope that the authors work out a somewhat more hypertextual representation, and that English translations will soon be forthcoming. There are a number of smaller blogs and articles that have popped up over the years: Plagiarism in Turkey - Plagiarism (in Turkish) - Plagiarism by Turkish Students - Retracted (a selection of retracted papers by Turkish authors) - a description of a mass plagiarism scandal in physics in Turkey in 2007.

It will be interesting to see if there will be any sort of reaction on the part of Turkish officials to the new documentation of widespread plagiarism.

September 14, 2012

Mathgen paper accepted!

Nate Eldredge
I’m pleased to announce that Mathgen has had its first randomly-generated paper accepted by a reputable journal!

On August 3, 2012, a certain Professor Marcie Rathke of the University of Southern North Dakota at Hoople submitted a very interesting article to Advances in Pure Mathematics, one of the many fine journals put out by Scientific Research Publishing. (Your inbox and/or spam trap very likely contains useful information about their publications at this very moment!) This mathematical tour de force was entitled “Independent, Negative, Canonically Turing Arrows of Equations and Problems in Applied Formal PDE”, and I quote here its intriguing abstract:
Let ρ=A. Is it possible to extend isomorphisms? We show that D is stochastically orthogonal and trivially affine. In [10], the main result was the construction of p-Cardano, compactly Erdős, Weyl functions. This could shed important light on a conjecture of Conway-d’Alembert.
The full text was kindly provided by the author and is available as PDF.
After a remarkable turnaround time of only 10 days, on August 13, 2012, the editors were pleased to inform Professor Rathke that her submission had been accepted for publication. I reproduce here (with Professor Rathke’s kind permission) the notification, which includes the anonymous referee’s report. >>>

September 13, 2012

False positives: fraud and misconduct are threatening scientific research - The Guardian

Alok Jha
Dirk Smeesters had spent several years of his career as a social psychologist at Erasmus University in Rotterdam studying how consumers behaved in different situations. Did colour have an effect on what they bought? How did death-related stories in the media affect how people picked products? And was it better to use supermodels in cosmetics adverts than average-looking women?

The questions are certainly intriguing, but unfortunately for anyone wanting truthful answers, some of Smeesters' work turned out to be fraudulent. The psychologist, who admitted "massaging" the data in some of his papers, resigned from his position in June after being investigated by his university, which had been tipped off by Uri Simonsohn from the University of Pennsylvania in Philadelphia. Simonsohn carried out an independent analysis of the data and was suspicious of how perfect many of Smeesters' results seemed when, statistically speaking, there should have been more variation in his measurements.
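Simonsohn's actual analysis is more involved, but the core intuition can be sketched with a toy simulation: given the sample size and the within-group spread, honest study means should themselves scatter by a predictable amount, and reported means that cluster much more tightly than that are suspect. All numbers below are invented for illustration:

```python
# Toy illustration (invented numbers, not Simonsohn's actual method):
# with n subjects per condition and within-group SD sigma, honest
# sample means should scatter with SD sigma/sqrt(n) across studies.
# We ask how often chance alone would produce a spread as small as
# the one "reported".
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 20, 1.0          # subjects per condition, within-group SD
reported_means = [5.08, 4.95, 5.02, 5.10, 4.92, 4.99]  # tighter than chance

expected_sd = sigma / np.sqrt(n)               # ~0.224
observed_sd = np.std(reported_means, ddof=1)   # ~0.071

sims = rng.normal(5.0, expected_sd, size=(100_000, len(reported_means)))
p = (sims.std(axis=1, ddof=1) <= observed_sd).mean()
print(f"expected SD {expected_sd:.3f}, observed {observed_sd:.3f}, "
      f"chance of a spread this tight: {p:.4f}")
```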

The case, which led to two scientific papers being retracted, came on the heels of an even bigger fraud, uncovered last year, perpetrated by the Dutch psychologist Diederik Stapel. He was found to have fabricated data for years and published it in at least 30 peer-reviewed papers, including a report in the journal Science about how untidy environments may encourage discrimination.

The cases have sent shockwaves through a discipline that was already facing serious questions about plagiarism.

"In many respects, psychology is at a crossroads – the decisions we take now will determine whether or not it remains a serious, credible, scientific discipline along with the harder sciences," says Chris Chambers, a psychologist at Cardiff University.

"We have to be open about the problems that exist in psychology and understand that, though they're not unique to psychology, that doesn't mean we shouldn't be addressing them. If we do that, we can end up leading the other sciences rather than following them."

Cases of scientific misconduct tend to hit the headlines precisely because scientists are supposed to occupy a moral high ground when it comes to the search for truth about nature. The scientific method developed as a way to weed out human bias. But scientists, like anyone else, can be prone to bias in their bid for a place in the history books.

Increasing competition for shrinking government budgets for research and the disproportionately large rewards for publishing in the best journals have exacerbated the temptation to fudge results or ignore inconvenient data.

Massaged results can send other researchers down the wrong track, wasting time and money trying to replicate them. Worse, in medicine, it can delay the development of life-saving treatments or prolong the use of therapies that are ineffective or dangerous. Malpractice comes to light rarely, perhaps because scientific fraud is often easy to perpetrate but hard to uncover.

The field of psychology has come under particular scrutiny because many results in the scientific literature defy replication by other researchers. Critics say it is too easy to publish psychology papers which rely on sample sizes that are too small, for example, or to publish only those results that support a favoured hypothesis. Outright fraud is almost certainly just a small part of that problem, but high-profile examples have exposed a greyer area of bad or lazy scientific practice that many had preferred to brush under the carpet.

Many scientists, aided by software and statistical techniques to catch cheats, are now speaking up, calling on colleagues to put their houses in order.

Those who document misconduct in scientific research talk of a spectrum of bad practices. At the sharp end are plagiarism, fabrication and falsification of research. At the other end are questionable practices such as adding an author's name to a paper when they have not contributed to the work, sloppiness in methods or not disclosing conflicts of interest.

"Outright fraud is somewhat impossible to estimate, because if you're really good at it you wouldn't be detectable," said Simonsohn, a social psychologist. "It's like asking how much of our money is fake money – we only catch the really bad fakers, the good fakers we never catch."

If things go wrong, the responsibility to investigate and punish misconduct rests with the scientists' employers, the academic institution. But these organisations face something of a conflict of interest. "Some of the big institutions … were really in denial and wanted to say that it didn't happen under their roof," says Liz Wager of the Committee on Publication Ethics (Cope). "They're gradually realising that it's better to admit that it could happen and tell us what you're doing about it, rather than to say, 'It could never happen.'"

There are indications that bad practice – particularly at the less serious end of the scale – is rife. In 2009, Daniele Fanelli of the University of Edinburgh carried out a meta-analysis that pooled the results of 21 surveys of researchers who were asked whether they or their colleagues had fabricated or falsified research.

Publishing his results in the journal PLoS One, he found that an average of 1.97% of scientists admitted to having "fabricated, falsified or modified data or results at least once – a serious form of misconduct by any standard – and up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% for falsification, and up to 72% for other questionable research practices."

A 2006 analysis of the images published in the Journal of Cell Biology found that about 1% had been deliberately falsified.  

Rise in retractions
According to a report in the journal Nature, published retractions in scientific journals have increased around 1,200% over the past decade, even though the number of published papers has gone up by only 44% – roughly a ninefold rise in the per-paper retraction rate. Around half of these retractions are suspected cases of misconduct.

Wager says these numbers make it difficult for a large research-intensive university, which might employ thousands of researchers, to maintain the line that misconduct is vanishingly rare.

New tools, such as text-matching software, have also increased the detection rates of fraud and plagiarism. Journals routinely use these to check papers as they are submitted or undergoing peer review. "Just the fact that the software is out there and there are people who can look at stuff, that has really alerted the world to the fact that plagiarism and redundant publication are probably way more common than we realised," says Wager. "That probably explains, to a big extent, this increase we've seen in retractions."

Ferric Fang, a professor at the University of Washington School of Medicine and editor in chief of the journal Infection and Immunity, thinks increased scrutiny is not the only factor and that the rate of retractions is indicative of some deeper problem.

He was alerted to concerns about the work of a Japanese scientist who had published in his journal. A reviewer for another journal noticed that Naoki Mori of the University of the Ryukyus in Japan had duplicated images in some of his papers and had given them different labels, as if they represented different measurements. An investigation revealed evidence of widespread data manipulation and this led Fang to retract six of Mori's papers from his journal. Other journals followed suit.  

Self-correction
The refrain from many scientists is that the scientific method is meant to be self-correcting. Bad results, corrupt data or fraud will get found out – either when they cannot be replicated or when they are proved incorrect in subsequent studies – and public retractions are a sign of strength.

That works up to a point, says Fang. "It ended up that there were 31 papers from the [Mori] laboratory that were retracted; many of those papers had been in the literature for five to ten years," he says. "I realised that 'scientific literature is self-correcting' is a little bit simplistic. These papers had been read many times, downloaded, cited and reviewed by peers, and it was just a chance observation by a very attentive reviewer that opened this whole case of serious misconduct."

Extraordinary claims that change the paradigm for a field will elicit lots of attention, and people will look at the results very carefully. But in cases such as Dr Mori's – where the work is flawed and falsified but the results themselves are not particularly surprising or sensational, and may even be corroborated by others who perform their experiments legitimately – the misconduct is difficult to detect. "It's not that the results are wrong, it's that the data are false," says Fang.

And, often, research studies are very difficult to replicate. "If someone says they did a 15-year clinical study with 9,000 subjects and they publish their results, you may have to take their word for it because you're not going to be able to run out and recruit 9,000 patients of your own and do a 15-year study just to try to corroborate something that somebody else has done," says Fang. "A number of cases recently have come to light only because the investigators didn't have institutional review board approval for their studies. Upon digging deeper, the institutions questioned whether any of the studies were done at all. This kind of misconduct is very difficult to detect otherwise."

Selective publishing
In psychology research, there is a particular problem with researchers who selectively publish some of their experiments to guarantee a positive result. "Let's say you have this theory that, when you play Mozart, people want to pay more for musical instruments," says Simonsohn. "So you do a study and you play Mozart (or not) and you ask people, 'How much would you pay for a piano, a flute, and five other instruments?'"

If it turned out that only the price of a single type of instrument, violins, say, went up after people had listened to Mozart, it would be possible to publish a research paper that omitted the fact that the researchers had ever asked about any other instruments. This would not allow the reader to make a proper assessment of the strength of the effect that Mozart may (or may not) have on how much a person would pay for musical instruments.
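A small simulation makes the inflation concrete (invented numbers, not Simonsohn's data): if Mozart has no effect on any of seven instruments, reporting only whichever instrument happens to clear p < 0.05 produces a "publishable" result in roughly 30% of studies:

```python
# Simulation with invented numbers: seven instruments, zero true effect
# of Mozart on any of them. Reporting only the one instrument that
# clears p < 0.05 turns a nominal 5% false-positive rate into ~30%
# (1 - 0.95**7).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
N_STUDIES, N_INSTRUMENTS, N_PEOPLE = 10_000, 7, 40

publishable = 0
for _ in range(N_STUDIES):
    for _ in range(N_INSTRUMENTS):
        mozart = rng.normal(100, 15, N_PEOPLE)    # stated willingness to pay
        silence = rng.normal(100, 15, N_PEOPLE)   # same distribution: no effect
        if stats.ttest_ind(mozart, silence).pvalue < 0.05:
            publishable += 1          # "publish" this instrument, drop the rest
            break

print(f"studies with a reportable 'Mozart effect': "
      f"{publishable / N_STUDIES:.1%}")
```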

Fanelli has examined this positive result bias. He looked at 4,600 studies across all disciplines between 1990 and 2007, and counted the number of papers that, after declaring an intent to test a particular hypothesis, reported a positive support for it. The overall frequency of positive supports had grown by more than 22% over this time period. In a separate study, Fanelli found that "the odds of reporting a positive result were around five times higher among papers in the disciplines of psychology and psychiatry and economics and business compared with space science".

Culture of neophilia
This issue is exacerbated in psychological research by the "file-drawer" problem, a situation when scientists who try to replicate and confirm previous studies find it difficult to get their research published. Scientific journals want to highlight novel, often surprising, findings. Negative results are unattractive to journal editors and lie in the bottom of researchers' filing cabinets, destined never to see the light of day.

"We have a culture which values novelty above all else, neophilia really, and that creates a strong publication bias," says Chambers. "To get into a good journal, you have to be publishing something novel, it helps if it's counter-intuitive and it also has to be a positive finding. You put those things together and you create a dangerous problem for the field."

When Daryl Bem, a psychologist at Cornell University in New York, published sensational findings in 2011 that seemed to show evidence for psychic effects in people, many scientists were unsurprisingly sceptical. But when psychologists later tried to publish their (failed) attempts to replicate Bem's work, they found journals refused to give them space. After repeated attempts elsewhere, a team of psychologists led by Chris French at Goldsmiths, University of London, eventually placed their negative results in the journal PLoS One this year.

There is no suggestion of misconduct in Bem's research but the lack of an avenue in which to publish failed attempts at replication suggests self-correction can be compromised and people such as Smeesters and Stapel can remain undetected for a long time.

In some cases, misconduct (or fraud) has grave implications. In 2006, Anil Potti and colleagues at Duke University reported in the New England Journal of Medicine that they had developed a way to track the progression of a patient's lung cancer with a device, called an expression array, that could monitor the activity of thousands of different genes.

In a subsequent report in Nature Medicine, the same scientists wrote about a way to use their expression array to work out which drugs would work best for individual patients with lung, breast or ovarian cancer, depending on their patterns of gene activity. Within months of that publication, the biostatisticians Keith Baggerly and Kevin Coombes of the MD Anderson Cancer Centre in Houston had their doubts, and began uncovering major flaws in the work.

"It looked so promising that they actually started to do trials of cancer patients, they chose the chemotherapy depending on this test," says Wager. "The test has turned out to be completely invalid, so people were getting the wrong therapy, because the paper was not retracted quickly enough."

Blowing the whistle
Despite Baggerly and Coombes raising the alarm several times with the institutions involved, it was not until 2010 that Potti resigned from Duke University and several of the papers referring to his work on the expression array were retracted. "Usually there is no official mechanism for a whistleblower to take if they suspect fraud," says Chambers. "You often hear of cases where junior members of a department, such as PhD students, will be the ones that are closest to the coalface and will be the ones to identify suspicious cases. But what kind of support do they have? ... That's a big issue that needs to be addressed."

In July this year, a group of the UK's main research funders and university groups published a Concordat to Support Research Integrity. "I don't think anyone would want to see a command-control direct regulation approach here," says Christopher Hale, deputy director of policy at Universities UK. "The concordat ... outlines a framework and then identifies how people fit within that and what actions they will take forward to strengthen it." The concordat requires institutions to have a process in place for dealing with misconduct, which includes appointing a senior person at the institution who can provide the necessary leadership and oversight during investigations.

Michael Farthing, vice-chair of the UK Research Integrity Office and vice-chancellor of the University of Sussex, has been a long-time campaigner on getting institutions and funders to take research misconduct seriously. In a recent article for Times Higher Education, Farthing said he supported the concordat but that it would not be enough. He stopped short of suggesting a statutory regulator for research but wrote: "Government and research leaders should take action to support and encourage excellence in research integrity, not sit on their hands until – as has happened in other countries – a scandal drives them towards legislation."

Statements of principle are one thing – every university and research council probably already has one applauding honourable research and deploring fraud – the key is the steps institutions take in understanding and de-incentivising misconduct.

The economics of science
The pressure to commit misconduct is complex. Arturo Casadevall, of the Albert Einstein College of Medicine in New York and editor-in-chief of the journal mBio, places a large part of the blame on the economics of science. "What is happening in recent years is that the rewards have become too high, for example, for publishing in certain journals. Just like we see the problem in sports that, if you compete and you get a reward, it translates into everything from money and endorsements and things like that. People begin to take risks because the rewards are disproportionate."

As a PhD student in the 1980s, Casadevall says he published research in a few different journals depending on what his research was about. "Within 10 years, all you heard was, 'Where is the paper going to be published?' not 'What's in it?'. Scientists have got into this idea that where you publish determines the value of the work and that's crazy. What's important is what's in the paper."

Casadevall and Fang are aware that their spotlight on misconduct has the potential to show up scientists in a disproportionately bad light – as yet another public institution that cannot be trusted beyond its own self-interest. But they say staying quiet about the issue is not an option.

"Science has the potential to address some of the most important problems in society and for that to happen, scientists have to be trusted by society and they have to be able to trust each others' work," Fang says. "If we are seen as just another special interest group that are doing whatever it takes to advance our careers and that the work is not necessarily reliable, it's tremendously damaging for all of society because we need to be able to rely on science."

For Simonsohn, the biggest issue with outright fraud is not that the bad scientist gets caught but the corrupting effect the work can have on the scientific literature. To reduce the potential negative effects dramatically, Simonsohn suggests requiring scientists to post their data online. "That's very minimal cost and it has many benefits beyond reduction of fraud. It allows other people to learn things from your data which you were not able to learn about, it allows calibration of other models, it allows people to, three years later, reanalyse your data with new techniques."

Ivan Oransky, editor of the Retraction Watch blog that collects examples of retracted papers, argues: "The reason the public stops trusting institutions is when [its members] say things like, 'There's nothing to see here, let us handle it,' and then they find out about something bad that happened that nobody handled. That's when mistrust builds."

The big challenges that face humanity, says Casadevall, are scientific ones – climate change, a new pandemic, the fact that most of our calories are coming from a very few plants, which are susceptible to new pests. "These are the big problems and humanity's defence against them is science. We need to make the enterprise work better."

Malpractice and misconduct 
The South Korean scientist Hwang Woo-suk rose to international acclaim in 2004 when he announced, in the journal Science, that he had extracted stem cells from cloned human embryos. The following year, Hwang published results showing he had made stem cell lines from the skin of patients – a technique that could help create personalised cures for people with degenerative diseases. By 2006, however, Hwang's career was in tatters when it emerged that he had fabricated material for his research papers. Seoul National University sacked him and, after an investigation in 2009, he was convicted of embezzling research funds.

Around the same time, a Norwegian researcher, Jon Sudbø, admitted to fabricating and falsifying data. Over many years of malpractice, he perpetrated one of the biggest scientific frauds ever carried out by a single researcher – the fabrication of an entire 900-patient study, which was published in the Lancet in 2005.

Marc Hauser, a psychologist at Harvard University whose research interests included the evolution of morality and cognition in non-human primates, resigned in August 2011 after a three-year investigation by his institution found he was responsible for eight counts of scientific misconduct. The alarm was raised by some of his students, who disagreed with Hauser's interpretations of experiments that involved the somewhat subjective procedure of working out a monkey's thoughts based on its response to some sight or sound.

Hauser last week admitted to making "mistakes" that led to the findings of research misconduct. "I let important details get away from my control, and as head of the lab, I take responsibility for all errors made within the lab, whether or not I was directly involved," says Hauser in a statement sent to Nature. The doubts over Hauser's work affect a whole field of scientific work that uses the same research technique.

This article was amended on 14 September 2012. The original referred to Liz Wager of the Committee on Public Ethics rather than Publication Ethics. This has been corrected.
