December 17, 2012
Top Science Scandals of 2012 - The Scientist
December 11, 2012
Elsevier editorial system hacked, reviews faked, 11 retractions follow - Retraction Watch
November 17, 2012
Plagiarism and Essay Mills
Essay mills are companies whose sole purpose is to generate essays for high school and college students (in exchange for a fee, of course). Sure, essay mills claim that the papers are meant just to help the students write their own original papers, but with names such as echeat.com, it’s pretty clear what their real purpose is.
Professors in general are very worried about essay mills and their impact on learning, but since we don't know exactly what essay mills are or the quality of their output, it is hard to know how worried we should be. So together with Aline Grüneisen, I decided to check it out. We ordered a typical college term paper from four different essay mills, and as the topic of the paper we chose… (surprise!) Cheating.
Here is the description of the task that we gave the four essay mills:
“When and why do people cheat? Consider the social circumstances involved in dishonesty, and provide a thoughtful response to the topic of cheating. Address various forms of cheating (personal, at work, etc.) and how each of these can be rationalized by a social culture of cheating.”
We requested a term paper for a university-level social psychology class, 12 pages long, using 15 sources (cited and referenced in a bibliography), APA style, to be completed in the next two weeks, which we felt was a pretty basic and conventional request. The essay mills charged us in advance, between $150 and $216 per paper. >>>
November 8, 2012
Higher education: Call for a European integrity standard - NATURE
November 2, 2012
Scientific fraud is rife: it's time to stand up for good science - The Guardian
Peer review happens behind closed doors, with anonymous reviews only seen by editors and authors. This means we have no idea how effective it is.
• Universities treat successful grant applications as outputs, upon which continued careers depend;
• Statistical analyses are hard, and sometimes researchers get it wrong;
• Journals favour positive results over null findings, even though null findings from a well conducted study are just as informative;
• The way journal articles are assessed is inconsistent and secretive, and allows statistical errors to creep through.
October 24, 2012
Write My Essay, Please! - The Atlantic
Study Shows Studies Show Nothing - Money Morning
"Let ρ = A. Is it possible to extend isomorphisms? We show that D’ is stochastically orthogonal and trivially affine. In [10], the main result was the construction of p-Cardano, compactly Erdös, Weyl functions. This could shed important light on a conjecture of Conway-d’Alembert."
"For the abstract, I consider that the author can’t introduce the main idea and work of this topic specifically."
– Editor, Money Morning
October 17, 2012
How to find Plagiarism in Dissertations - Copy, Shake, and Paste
And I have laryngitis and can't talk. I have journalists pleading with me to explain how the "magic" VroniPlag Wiki software works. The problem is, there is no magic software. The method used to find plagiarism in dissertations (or any other written work) is called "research". Just normal research.
But since so many people need to know how this is done, here's a crib sheet with 10 easy steps:
- Obtain the thesis. If you are just trying to find the dissertation of a particular person who did their doctoral work in Germany, give the German National Library a try. Type in the name and see what it comes up with. Then use the catalog of your local library (often called an OPAC, online public access catalog) or a union catalog to try and locate a copy. Most German states have a union catalog, in Berlin it is the KOBV. If there is none in your locality, you can obtain a library card and then have the thesis sent to you using inter-library loan.
- Read the thesis. There is no royal road. The so-called plagiarism detection software can turn up the odd reference, but only if the sources are online. The best bet is to start reading it, and look for shifts in writing style, or places where the writing turns Spiegel-esque, or for sudden useless details, or misspellings, or just wrong content.
- Google. I've given up on other search engines. Just belly up to the search bar and type in three to five words from a sentence or paragraph and see what turns up. If you get a lead through Google Books, use step 1 to obtain a copy of the book. If you get lucky and the first paragraph is taken from the FAZ or the NZZ -- paydirt! Don't just try one paragraph, take a few from different parts of the book.
- Follow the footnotes. University teachers do this when teaching their students how to footnote, and it scares the daylights out of students when they see that the professor found out that they were just making up the footnotes. Does the reference exist? Is the thing being said found on that page? Is the whole paragraph taken from the reference with the quotation marks "forgotten"? Does the chapter in the dissertation continue on after the footnote without a further reference? Is this paragraph perhaps just a translation of the reference?
- Browse the bibliography. What is the most recent source used? Is it five years older than the dissertation? In some fields, this would sound an alarm. Is there some strange or obscure literature listed? Obtain it! Do you need journal articles? Germany has a wonderful listing of the holdings of all libraries nationwide, the Zeitschriftendatenbank. It will tell you where they can be found, and many can even be delivered to your email account as a PDF for a few euros. Many libraries also subscribe to digital libraries that can be used when sitting at the library. A walk would do you good, anyway, so get over there and have a look.
- Digitize. If you have already found one source plagiarized in a dissertation, chances are that there are more. Have a good look at each source, and digitize the relevant portions. Use a book scanner in the library to get a high-quality scan of the pages as a PDF. You lay the book flat under the camera, press a button, turn the page, press a button, until you are done. Experienced scanners can do over 100 pages per hour. Now run optical character recognition (OCR) software on the result. There are free engines like Google's Tesseract, or professional versions such as the one built into Adobe's Acrobat, OmniPage, or Abbyy FineReader. (A small OCR sketch follows this list.)
- Compare. This is one of the few pieces of software the VroniPlag Wiki people use: a text comparison tool based on the free algorithm of Dick Grune. The tool marks identical passages in the two documents it is comparing. Put the dissertation in one side, the source in the other, and press "Texte vergleichen!". Don't forget to make a screenshot if the results turn out colorful. (A rough comparison sketch also follows this list.)
- Document. If you find anything, document it exactly. Page and line numbers from the dissertation, URL or page and line numbers from the source, and a copy of each. A two-column side-by-side has proved easy to understand when showing the results to others.
- Need help? If you have already found some nasty text parallels, drop in at the VroniPlag Wiki chat or use the drop if you want to be discreet. You might be able to interest someone in working on the case. But remember, they are all volunteers. Or you can continue on yourself, and then inform the ombud for good scientific practice at the university in question.
- Publish. If you feel that it is necessary to publish your results, you can either choose a wiki, such as the GuttenPlag Wiki or the VroniPlag Wiki, which makes it easier for others to help you with the documentation, or you can publish on a blog, like the SchavanPlag blog, which gives you complete control of what is published. Or you can print up a book, like Marion Soreth did in 1990 when she documented the dissertation of her colleague Elisabeth Ströker.
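A note on the digitizing step: the OCR can also be driven from a script rather than a desktop program. Below is a minimal sketch, assuming the scanned pages were exported as individual image files and that the Tesseract engine mentioned above is installed; pytesseract is just a thin Python wrapper around it, and the file names are hypothetical.

from PIL import Image
import pytesseract

# Hypothetical page scans exported from the library book scanner
pages = ["scan_page_041.png", "scan_page_042.png"]

text_chunks = []
for page in pages:
    # image_to_string runs Tesseract OCR on one page image;
    # lang="deu" assumes a German-language dissertation
    text_chunks.append(pytesseract.image_to_string(Image.open(page), lang="deu"))

with open("dissertation_excerpt.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(text_chunks))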
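And a note on the comparing step: the sim_text-based tool is the real workhorse, but the same idea can be roughed out with Python's standard-library difflib, which flags long runs of identical words between two plain-text files. This is not Dick Grune's algorithm, just a sketch in the same spirit; the file names are hypothetical.

import difflib

def matching_passages(thesis_text, source_text, min_words=8):
    # Compare word sequences, not characters, so matches align with prose
    a = thesis_text.split()
    b = source_text.split()
    matcher = difflib.SequenceMatcher(a=a, b=b, autojunk=False)
    for block in matcher.get_matching_blocks():
        if block.size >= min_words:  # ignore short coincidental overlaps
            yield " ".join(a[block.a:block.a + block.size])

thesis = open("dissertation_excerpt.txt", encoding="utf-8").read()
source = open("suspected_source.txt", encoding="utf-8").read()
for passage in matching_passages(thesis, source):
    print(">>", passage)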
October 12, 2012
Scientific fraud: a sign of the times? - The Guardian
An investigation of retractions from the biomedical scientific literature database PubMed, published in the prestigious Proceedings of the National Academy of Sciences USA (PNAS), found that a whopping 63.2% of health- and life-science-related retractions were due to fraud, suspected fraud or plagiarism, with good old honest error retractions in the minority. This sounds scary – especially the 'suspected fraud'. Is this just the tip of the scientific deceit iceberg? Just how many lies are lurking in the scientific literature?
Then there are the stories. Professor Marc Hauser (formerly) of Harvard was accused by the U.S. Department of Health and Human Services's Office of Research Integrity of inventing results to support his idea of a biological foundation for cognition in monkeys – specifically, whether they could recognize changes in sound patterns the way human babies can. Hauser was a popular scientist, too; he even wrote a best-selling book, Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong, in which he somewhat ironically argued that "policy wonks and politicians should listen more closely to our intuitions and write policy that effectively takes into account the moral voice of our species." Which worked out in his case; he was busted for scientific misconduct. His book also tells us that "our ability to detect cheaters who violate social norms is one of nature's gifts". Nature's gifts or not, it was his students and research assistants who blew the whistle.
And this isn't just in the life sciences, it's everywhere. Physics has its high-profile cheaters too! There is Jan Hendrik Schön, the physicist who made up his data – 26 of his papers have been retracted and he has been stripped of his doctoral degree. And then there are the cold fusion boys who, to be fair, were probably more victims of faulty equipment and of sticking to a beloved theory despite the facts than perpetrators of actual fraud. Psychology is not immune either; Dirk Smeesters, whose results seemed too good to be true, has also been caught just making stuff up.
Is no scientific discipline safe? Are scientists just incapable of keeping their modern houses clean? It has been argued that because of recent pressure on scientists to publish groundbreaking results that change the world, the temptation to commit fraud is bound to increase – implying that there was a simpler, more honest time for science. There is a dewy-eyed temptation to believe that scientists back in the day were of uniformly high moral character and purely duty-bound to pursue the truth. But this isn't really true. Fraud in science isn't new, just like fraud in anything isn't new.>>>
October 2, 2012
Misconduct, Not Error, Found Behind Most Journal Retractions - THE CHRONICLE
Research misconduct, rather than error, is the leading cause of retractions in scientific journals, with the problem especially pronounced in more prestigious publications, a comprehensive analysis has concluded.
The PNAS finding came from a comprehensive review of more than 2,000 published retractions, including detailed investigations into the public explanations given by the retracting authors and their journals.>>>
September 22, 2012
Plagiarism in Turkey - Copy, Shake, and Paste
I've translated the results section with Google Translate and tried to fix the sentences to make sense - if someone can provide a proper translation I'll be glad to replace it:
"With such ethically problematic theses and publications by the thesis advisers themselves who are now permitted to mentor students who themselves are submitting plagiarisms, there is a new generation of academics being produced that completes a cycle.I would hope that the authors work out a bit more hypertextual representation and that English translations would soon be forthcoming. There are a number of smaller blogs and articles that have popped up over the years: Plagiarism in Turkey - Plagiarism (in Turkish) - Plagiarism by Turkish Students - Retracted (a selection of retracted papers by Turkish authors) - a description of a mass plagiarism scandal in physics in 2007 in Turkey.
One of the largest problems is being able to access the theses themselves. University libraries arbitrarily restrict access to theses. In order to solve this problem the Council of Higher Education needs to set up a Thesis Archive.
On the other hand, even in thesis cases where a high level of plagiarism is found, the legislative is found to be a bottleneck as no deterrent penalties are being proposed. Instead, there are severe reactions [against the whistleblowers] when scientists point out the theft, so the perpetrators continue to quietly steal."
It will be interesting to see if there will be any sort of reaction on the part of Turkish officials to the new documentation of widespread plagiarism.
September 14, 2012
Mathgen paper accepted!
Let ρ = A. Is it possible to extend isomorphisms? We show that D′ is stochastically orthogonal and trivially affine. In [10], the main result was the construction of p-Cardano, compactly Erdős, Weyl functions. This could shed important light on a conjecture of Conway-d'Alembert.
September 13, 2012
False positives: fraud and misconduct are threatening scientific research - The Guardian
Dirk Smeesters had spent several years of his career as a social psychologist at Erasmus University in Rotterdam studying how consumers behaved in different situations. Did colour have an effect on what they bought? How did death-related stories in the media affect how people picked products? And was it better to use supermodels in cosmetics adverts than average-looking women?
The questions are certainly intriguing, but unfortunately for anyone wanting truthful answers, some of Smeesters' work turned out to be fraudulent. The psychologist, who admitted "massaging" the data in some of his papers, resigned from his position in June after being investigated by his university, which had been tipped off by Uri Simonsohn from the University of Pennsylvania in Philadelphia. Simonsohn carried out an independent analysis of the data and was suspicious of how perfect many of Smeesters' results seemed when, statistically speaking, there should have been more variation in his measurements.
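The article doesn't describe Simonsohn's actual procedure, but the general shape of a "too consistent to be true" check can be sketched in a few lines of Python: simulate many honest studies with the same design and noise level, and ask how often chance alone produces condition means as improbably similar as the reported ones. All numbers below are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n_per_cell, sd, n_conditions = 15, 1.0, 3          # hypothetical design
reported_means = np.array([5.02, 5.01, 5.03])      # hypothetical, suspiciously close

# Simulate 100,000 honest studies: same grand mean, same noise, same design
sims = rng.normal(loc=reported_means.mean(), scale=sd,
                  size=(100_000, n_conditions, n_per_cell))
sim_spread = sims.mean(axis=2).std(axis=1)   # spread of condition means per study
observed_spread = reported_means.std()

# How often does honest sampling yield means at least this similar?
p = (sim_spread <= observed_spread).mean()
print(f"P(spread <= observed | honest sampling) = {p:.5f}")

A vanishingly small probability doesn't prove fraud on its own, but it is exactly the kind of red flag that justifies asking for the raw data.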
The case, which led to two scientific papers being retracted, came on the heels of an even bigger fraud, uncovered last year, perpetrated by the Dutch psychologist Diederik Stapel. He was found to have fabricated data for years and published it in at least 30 peer-reviewed papers, including a report in the journal Science about how untidy environments may encourage discrimination.
The cases have sent shockwaves through a discipline that was already facing serious questions about plagiarism.
"In many respects, psychology is at a crossroads – the decisions we take now will determine whether or not it remains a serious, credible, scientific discipline along with the harder sciences," says Chris Chambers, a psychologist at Cardiff University.
"We have to be open about the problems that exist in psychology and understand that, though they're not unique to psychology, that doesn't mean we shouldn't be addressing them. If we do that, we can end up leading the other sciences rather than following them."
Cases of scientific misconduct tend to hit the headlines precisely because scientists are supposed to occupy a moral high ground when it comes to the search for truth about nature. The scientific method developed as a way to weed out human bias. But scientists, like anyone else, can be prone to bias in their bid for a place in the history books.
Increasing competition for shrinking government budgets for research and the disproportionately large rewards for publishing in the best journals have exacerbated the temptation to fudge results or ignore inconvenient data.
Massaged results can send other researchers down the wrong track, wasting time and money trying to replicate them. Worse, in medicine, it can delay the development of life-saving treatments or prolong the use of therapies that are ineffective or dangerous. Malpractice comes to light rarely, perhaps because scientific fraud is often easy to perpetrate but hard to uncover.
The field of psychology has come under particular scrutiny because many results in the scientific literature defy replication by other researchers. Critics say it is too easy to publish psychology papers which rely on sample sizes that are too small, for example, or to publish only those results that support a favoured hypothesis. Outright fraud is almost certainly just a small part of that problem, but high-profile examples have exposed a greyer area of bad or lazy scientific practice that many had preferred to brush under the carpet.
Many scientists, aided by software and statistical techniques to catch cheats, are now speaking up, calling on colleagues to put their houses in order.
Those who document misconduct in scientific research talk of a spectrum of bad practices. At the sharp end are plagiarism, fabrication and falsification of research. At the other end are questionable practices such as adding an author's name to a paper when they have not contributed to the work, sloppiness in methods or not disclosing conflicts of interest.
"Outright fraud is somewhat impossible to estimate, because if you're really good at it you wouldn't be detectable," said Simonsohn, a social psychologist. "It's like asking how much of our money is fake money – we only catch the really bad fakers, the good fakers we never catch."
If things go wrong, the responsibility to investigate and punish misconduct rests with the scientists' employers, the academic institution. But these organisations face something of a conflict of interest. "Some of the big institutions … were really in denial and wanted to say that it didn't happen under their roof," says Liz Wager of the Committee on Publication Ethics (Cope). "They're gradually realising that it's better to admit that it could happen and tell us what you're doing about it, rather than to say, 'It could never happen.'"
There are indications that bad practice – particularly at the less serious end of the scale – is rife. In 2009, Daniele Fanelli of the University of Edinburgh carried out a meta-analysis that pooled the results of 21 surveys of researchers who were asked whether they or their colleagues had fabricated or falsified research.
Publishing his results in the journal PLoS One, he found that an average of 1.97% of scientists admitted to having "fabricated, falsified or modified data or results at least once – a serious form of misconduct by any standard – and up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% for falsification, and up to 72% for other questionable research practices."
A 2006 analysis of the images published in the Journal of Cell Biology found that about 1% had been deliberately falsified.
Rise in retractions
According to a report in the journal Nature, published retractions in scientific journals have increased around 1,200% over the past decade, even though the number of published papers has gone up by only 44%. Around half of these retractions are suspected cases of misconduct.
Wager says these numbers make it difficult for a large research-intensive university, which might employ thousands of researchers, to maintain the line that misconduct is vanishingly rare.
New tools, such as text-matching software, have also increased the detection rates of fraud and plagiarism. Journals routinely use these to check papers as they are submitted or undergoing peer review. "Just the fact that the software is out there and there are people who can look at stuff, that has really alerted the world to the fact that plagiarism and redundant publication are probably way more common than we realised," says Wager. "That probably explains, to a big extent, this increase we've seen in retractions."
Ferric Fang, a professor at the University of Washington School of Medicine and editor in chief of the journal Infection and Immunity, thinks increased scrutiny is not the only factor and that the rate of retractions is indicative of some deeper problem.
He was alerted to concerns about the work of a Japanese scientist who had published in his journal. A reviewer for another journal noticed that Naoki Mori of the University of the Ryukyus in Japan had duplicated images in some of his papers and had given them different labels, as if they represented different measurements. An investigation revealed evidence of widespread data manipulation and this led Fang to retract six of Mori's papers from his journal. Other journals followed suit.
Self-correction
The refrain from many scientists is that the scientific method is meant to be self-correcting. Bad results, corrupt data or fraud will get found out – either when they cannot be replicated or when they are proved incorrect in subsequent studies – and public retractions are a sign of strength.
That works up to a point, says Fang. "It ended up that there were 31 papers from the [Mori] laboratory that were retracted; many of those papers had been in the literature for five to 10 years," he says. "I realised that 'scientific literature is self-correcting' is a little bit simplistic. These papers had been read many times, downloaded, cited and reviewed by peers, and it was just a chance observation by a very attentive reviewer that opened this whole case of serious misconduct."
Extraordinary claims that change the paradigm for a field will elicit lots of attention, and people will look at the results very carefully. But in cases such as Dr Mori's – where the work is flawed and falsified but the results themselves are not particularly surprising or sensational, and may even be corroborated by others who perform their experiments legitimately – the misconduct is difficult to detect. "It's not that the results are wrong, it's that the data are false," says Fang.
And, often, research studies are very difficult to replicate. "If someone says they did a 15-year clinical study with 9,000 subjects and they publish their results, you may have to take their word for it because you're not going to be able to run out and recruit 9,000 patients of your own and do a 15-year study just to try to corroborate something that somebody else has done," says Fang. "A number of cases recently have come to light only because the investigators didn't have institutional review board approval for their studies. Upon digging deeper, the institutions questioned whether any of the studies were done at all. This kind of misconduct is very difficult to detect otherwise."
Selective publishing
In psychology research, there is a particular problem with researchers who selectively publish some of their experiments to guarantee a positive result. "Let's say you have this theory that, when you play Mozart, people want to pay more for musical instruments," says Simonsohn. "So you do a study and you play Mozart (or not) and you ask people, 'How much would you pay for a piano, or a flute?' – and so on for five instruments."
If it turned out that only the price of a single type of instrument, violins, say, went up after people had listened to Mozart, it would be possible to publish a research paper that omitted the fact that the researchers had ever asked about any other instruments. This would not allow the reader to make a proper assessment of the strength of the effect that Mozart may (or may not) have on how much a person would pay for musical instruments.
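A minimal simulation makes the cost of that kind of cherry-picking concrete. In the sketch below (hypothetical numbers, with no true effect built in), a "study" tests five instruments and gets published if any single one happens to reach p < 0.05; the false-positive rate then climbs to roughly 23% instead of the nominal 5%.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n_instruments, n_per_group = 10_000, 5, 30

false_positives = 0
for _ in range(n_studies):
    p_values = []
    for _ in range(n_instruments):
        mozart = rng.normal(size=n_per_group)    # willingness to pay, no true effect
        silence = rng.normal(size=n_per_group)
        p_values.append(stats.ttest_ind(mozart, silence).pvalue)
    # "Publish" if any one instrument worked, omitting the other four
    if min(p_values) < 0.05:
        false_positives += 1

print(f"False-positive rate with selective reporting: {false_positives / n_studies:.1%}")
# Roughly 1 - 0.95**5 ≈ 23%, not the nominal 5%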
Fanelli has examined this positive result bias. He looked at 4,600 studies across all disciplines between 1990 and 2007, and counted the number of papers that, after declaring an intent to test a particular hypothesis, reported a positive support for it. The overall frequency of positive supports had grown by more than 22% over this time period. In a separate study, Fanelli found that "the odds of reporting a positive result were around five times higher among papers in the disciplines of psychology and psychiatry and economics and business compared with space science".
Culture of neophilia
This issue is exacerbated in psychological research by the "file-drawer" problem, a situation when scientists who try to replicate and confirm previous studies find it difficult to get their research published. Scientific journals want to highlight novel, often surprising, findings. Negative results are unattractive to journal editors and lie in the bottom of researchers' filing cabinets, destined never to see the light of day.
"We have a culture which values novelty above all else, neophilia really, and that creates a strong publication bias," says Chambers. "To get into a good journal, you have to be publishing something novel, it helps if it's counter-intuitive and it also has to be a positive finding. You put those things together and you create a dangerous problem for the field."
When Daryl Bem, a psychologist at Cornell University in New York, published sensational findings in 2011 that seemed to show evidence for psychic effects in people, many scientists were unsurprisingly sceptical. But when psychologists later tried to publish their (failed) attempts to replicate Bem's work, they found journals refused to give them space. After repeated attempts elsewhere, a team of psychologists led by Chris French at Goldsmiths, University of London, eventually placed their negative results in the journal PLoS One this year.
There is no suggestion of misconduct in Bem's research but the lack of an avenue in which to publish failed attempts at replication suggests self-correction can be compromised and people such as Smeesters and Stapel can remain undetected for a long time.
In some cases, misconduct (or fraud) has grave implications. In 2006, Anil Potti and colleagues at Duke University reported in the New England Journal of Medicine that they had developed a way to track the progression of a patient's lung cancer with a device, called an expression array, that could monitor the activity of thousands of different genes.
In a subsequent report in Nature Medicine, the same scientists wrote about a way to use their expression array to work out which drugs would work best for individual patients with lung, breast or ovarian cancer, depending on their patterns of gene activity. Within months of that publication, the biostatisticians Keith Baggerly and Kevin Coombes of the MD Anderson Cancer Centre in Houston had their doubts, and began uncovering major flaws in the work.
"It looked so promising that they actually started to do trials of cancer patients, they chose the chemotherapy depending on this test," says Wager. "The test has turned out to be completely invalid, so people were getting the wrong therapy, because the paper was not retracted quickly enough."
Blowing the whistle
Despite Baggerly and Coombes raising the alarm several times with the institutions involved, it was not until 2010 that Potti resigned from Duke University and several of the papers referring to his work on the expression array were retracted.
"Usually there is no official mechanism for a whistleblower to take if they suspect fraud," says Chambers. "You often hear of cases where junior members of a department, such as PhD students, will be the ones that are closest to the coalface and will be the ones to identify suspicious cases. But what kind of support do they have? ... That's a big issue that needs to be addressed."
In July this year, a group of the UK's main research funders and university groups published a Concordat to Support Research Integrity. "I don't think anyone would want to see a command-control direct regulation approach here," says Christopher Hale, deputy director of policy at Universities UK. "The concordat ... outlines a framework and then identifies how people fit within that and what actions they will take forward to strengthen it." The concordat requires institutions to have a process in place for dealing with misconduct, which includes appointing a senior person at the institution who can provide the necessary leadership and oversight during investigations.
Michael Farthing, vice-chair of the UK Research Integrity Office and vice-chancellor of the University of Sussex, has been a long-time campaigner on getting institutions and funders to take research misconduct seriously. In a recent article for Times Higher Education, Farthing said he supported the concordat but that it would not be enough. He stopped short of suggesting a statutory regulator for research but wrote: "Government and research leaders should take action to support and encourage excellence in research integrity, not sit on their hands until – as has happened in other countries – a scandal drives them towards legislation."
Statements of principle are one thing – every university and research council probably already has one applauding honourable research and deploring fraud – the key is the steps institutions take in understanding and de-incentivising misconduct.
The economics of science
The pressure to commit misconduct is complex. Arturo Casadevall of the Albert Einstein College of Medicine in New York and editor in chief of the journal mBio, places a large part of the blame on the economics of science. "What is happening in recent years is that the rewards have become too high, for example, for publishing in certain journals. Just like we see the problem in sports that, if you compete and you get a reward, it translates into everything from money and endorsements and things like that. People begin to take risks because the rewards are disproportionate."
As a PhD student in the 1980s, Casadevall says he published research in a few different journals depending on what his research was about. "Within 10 years, all you heard was, 'Where is the paper going to be published?' not 'What's in it?'. Scientists have got into this idea that where you publish determines the value of the work and that's crazy. What's important is what's in the paper."
Casadevall and Fang are aware that their spotlight on misconduct has the potential to show up scientists in a disproportionately bad light – as yet another public institution that cannot be trusted beyond its own self-interest. But they say staying quiet about the issue is not an option.
"Science has the potential to address some of the most important problems in society and for that to happen, scientists have to be trusted by society and they have to be able to trust each others' work," Fang says. "If we are seen as just another special interest group that are doing whatever it takes to advance our careers and that the work is not necessarily reliable, it's tremendously damaging for all of society because we need to be able to rely on science."
For Simonsohn, the biggest issue with outright fraud is not that the bad scientist gets caught but the corrupting effect the work can have on the scientific literature. To reduce the potential negative effects dramatically, Simonsohn suggests requiring scientists to post their data online. "That's very minimal cost and it has many benefits beyond reduction of fraud. It allows other people to learn things from your data which you were not able to learn about, it allows calibration of other models, it allows people to, three years later, reanalyse your data with new techniques."
Ivan Oransky, editor of the Retraction Watch blog that collects examples of retracted papers, argues: "The reason the public stops trusting institutions is when [its members] say things like, 'There's nothing to see here, let us handle it,' and then they find out about something bad that happened that nobody handled. That's when mistrust builds."
The big challenges that face humanity, says Casadevall, are scientific ones – climate change, a new pandemic, the fact that most of our calories are coming from a very few plants, which are susceptible to new pests. "These are the big problems and humanity's defence against them is science. We need to make the enterprise work better."
Malpractice and misconduct
The South Korean scientist Hwang Woo-suk rose to international acclaim in 2004 when he announced, in the journal Science, that he had extracted stem cells from cloned human embryos. The following year, Hwang published results showing he had made stem cell lines from the skin of patients – a technique that could help create personalised cures for people with degenerative diseases. By 2006, however, Hwang's career was in tatters when it emerged that he had fabricated material for his research papers. Seoul National University sacked him and, after an investigation in 2009, he was convicted of embezzling research funds.
Around the same time, a Norwegian researcher, Jon Sudbø, admitted to fabricating and falsifying data. Over many years of malpractice, he perpetrated one of the biggest scientific frauds ever carried out by a single researcher – the fabrication of an entire 900-patient study, which was published in the Lancet in 2005.
Marc Hauser, a psychologist at Harvard University whose research interests included the evolution of morality and cognition in non-human primates, resigned in August 2011 after a three-year investigation by his institution found he was responsible for eight counts of scientific misconduct. The alarm was raised by some of his students, who disagreed with Hauser's interpretations of experiments that involved the somewhat subjective procedure of working out a monkey's thoughts based on its response to some sight or sound.
Hauser last week admitted to making "mistakes" that led to the findings of research misconduct. "I let important details get away from my control, and as head of the lab, I take responsibility for all errors made within the lab, whether or not I was directly involved," says Hauser in a statement sent to Nature. The doubts over Hauser's work affect a whole field of scientific work that uses the same research technique.
This article was amended on 14 September 2012. The original referred to Liz Wager of the Committee on Public Ethics rather than Publication Ethics. This has been corrected.