November 17, 2012

Plagiarism and Essay Mills

Sometimes as I decide what kind of papers to assign to my students, I can’t help but think about their potential to use essay mills.

Essay mills are companies whose sole purpose is to generate essays for high school and college students (in exchange for a fee, of course).  Sure, essay mills claim that the papers are meant just to help the students write their own original papers, but with names such as echeat.com, it’s pretty clear what their real purpose is.

Professors in general are very worried about essay mills and their impact on learning, but because we don’t know exactly what essay mills are or how good their output is, it is hard to know how worried we should be. So together with Aline Grüneisen, I decided to check it out.  We ordered a typical college term paper from four different essay mills, and as the topic of the paper we chose…  (surprise!) Cheating.

Here is the description of the task that we gave the four essay mills:

“When and why do people cheat? Consider the social circumstances involved in dishonesty, and provide a thoughtful response to the topic of cheating. Address various forms of cheating (personal, at work, etc.) and how each of these can be rationalized by a social culture of cheating.”

We requested a term paper for a university-level social psychology class: 12 pages long, using 15 sources (cited and referenced in a bibliography), in APA style, to be completed within 2 weeks, which we felt was a pretty basic and conventional request. The essay mills charged us in advance, between $150 and $216 per paper.

November 8, 2012

Higher education: Call for a European integrity standard - NATURE


The global market for diplomas and academic rankings has had the unintended consequence of stimulating misconduct, from data manipulation and plagiarism to sheer fraud. If incentives for integrity prove too hard to create, then at least some of the reasons for cheating must be obliterated through an acknowledgement of the problem in Europe-wide policy initiatives.
At the Second World Conference on the Right to Education this week in Brussels, we shall propose that the next ministerial communiqué of the Bologna Process in 2015 includes a clear reference to integrity as a principle. The Bologna Process is an agreement between European countries that ensures comparability in the standards and quality of higher-education qualifications.
Furthermore, the revised version of the European Standards and Guidelines for Quality Assurance, to be adopted by the 47 Bologna Process ministers in 2015, should include a standard that is linked to academic integrity (with substantive indicators), which could be added to all national and institutional quality-assurance systems.
We believe that an organization such as the Council of Europe has enforcement capabilities that can create momentum for peer pressure and encourage integrity. A standard-setting text, such as a recommendation by the Council of Ministers, or even a convention on this topic, would be timely given the deepening lack of public trust in higher-education credentials.
We do not expect that a few new international rules alone can change much. But we aim to create ways for institutions to become entrepreneurs of integrity in their own countries, as some models already exist (A. Mungiu-Pippidi and A. E. Dusu Int. J. Educ. Dev. 31, 532–546; 2011).

November 2, 2012

Scientific fraud is rife: it's time to stand up for good science - The Guardian

The way we fund and publish science encourages fraud. A forum about academic misconduct aims to find practical solutions
Science is broken. Psychology was rocked recently by stories of academics making up data, sometimes overshadowing whole careers. And it isn't the only discipline with problems - the current record for fraudulent papers is held by anaesthesiologist Yoshitaka Fujii, with 172 faked articles.
These scandals highlight deeper cultural problems in academia. Pressure to turn out lots of high-quality publications not only promotes extreme behaviours, it normalises the little things, like the selective publication of positive novel findings – which leads to "non-significant" but possibly true findings sitting unpublished on shelves, and a lack of much needed replication studies.
Why does this matter? Science is about furthering our collective knowledge, and it happens in increments. Successive generations of scientists build upon theoretical foundations set by their predecessors. If those foundations are made of sand, though, then time and money will be wasted in the pursuit of ideas that simply aren't right.
A recent paper in the journal Proceedings of the National Academy of Sciences shows that since 1973, nearly a thousand biomedical papers have been retracted because someone cheated the system. That's a massive 67% of all biomedical retractions. And the situation is getting worse - last year, Nature reported that the rise in retraction rates has overtaken the rise in the number of papers being published.
This is happening because the entire way that we go about funding, researching and publishing science is flawed. As Chris Chambers and Petroc Sumner point out, the reasons are numerous and interconnecting:
• Pressure to publish in "high impact" journals, at all research career levels;
• Universities treat successful grant applications as outputs, upon which continued careers depend;
• Statistical analyses are hard, and sometimes researchers get it wrong;
• Journals favour positive results over null findings, even though null findings from a well conducted study are just as informative;
• The way journal articles are assessed is inconsistent and secretive, and allows statistical errors to creep through.
Problems occur at all levels in the system, and we need to stop stubbornly arguing that "it's not that bad" or that talking about it somehow damages science. The damage has already been done – now we need to start fixing it.
Chambers and Sumner argue that replication is critical to keeping science honest, and they are right. Replication is a great way to verify the results of a given study, and its widespread adoption would, in time, act as a deterrent for dodgy practices. The nature of statistics means that sometimes positive findings arise by chance, and if replications aren't published, we can't be sure that a finding wasn't simply a statistical anomaly.
But replication isn't enough: we need to enact practical changes at all levels in the system. The scientific process must be as open to scrutiny as possible – that means enforcing study pre-registration to deter inappropriate post-hoc statistical testing, archiving and sharing data online for others to scrutinise, and incentivising these practices (such as guaranteeing publications, regardless of findings).
The peer-review process needs to be overhauled. Currently, it happens behind closed doors, with anonymous reviews only seen by journal editors and manuscript authors. This means we have no real idea how effective peer review is – though we know it can easily be gamed. Extreme examples of fake reviewers, fake journal articles, and even fake journals have been uncovered.
More often, shoddy science and dodgy statistics are accepted for publication by reviewers with inadequate levels of expertise. Peer review must become more transparent. Journals like Frontiers already use an interactive reviewing format, with reviewers and authors discussing a paper in a real-time, forum-like setting.
A simple next step would be to make this system open and viewable by everyone, while maintaining the anonymity of the reviewers themselves. This would allow young researchers to be critical of a senior academic's paper without fear of career suicide.
On 12 November, we are hosting a session on academic misconduct at SpotOn London, Nature's conference about all things science online.
The aim of the session is to find practical solutions to these problems that science faces. It will involve scientific researchers, journalists and journal editors. We've made some suggestions here, but we want more from you. What would you like to see discussed? Do you have any ideas, opinions or solutions?
We'll take the best points and air them at the session, so speak up now! Let's stop burying our heads in the sand and stand up for good science.
Pete Etchells is a biological psychologist and Suzi Gage is a translational epidemiology PhD student. Both are at the University of Bristol
