Sometimes the results of scientific research projects are just wrong. There was an accident, the data were flawed or insufficient, the methods were unsound, or the conclusions were improperly drawn. That happens, and subsequent studies will generally recognize that something went wrong and correct it. It’s not malicious; it’s accidental, and that’s just the way science works.
But what if the errors ARE malicious? One of the bigger science stories of the last few weeks was how a large number of scientific papers were withdrawn from The Literature over a peer-review ring. Peer review is the process by which a scientific paper is read by a scientist (picked by one of the journal’s editors) who wasn’t part of the research team, and who impartially judges it on its merits, its clarity, and the quality of the work and arguments that back it up. That’s the stamp of approval that says the research is worthwhile. In this case, the papers were being sent to ONE OF THE TEAM MEMBERS, who then quickly gave them that stamp of approval.
I’ll admit, my first reaction was “how?” because most of the news sources didn’t mention exactly how it was done (to discourage copycats?). Turns out, they exploited an assumption of honesty on the part of the scientific journal: many journals allow you to suggest who you’d like to review the paper*. That’s a good thing if the field is so large that the editor has no idea who would be qualified to judge a paper’s merits, but obviously, it allows for abuse. Suggest a fake person at an email address you have access to, and bingo, you’re reviewing your own paper. There are some other recent incidents where the review process is blind (as it ought to be) but the editors pick from a database of names… which the authors have stuffed full of fake IDs, so the fakes are likely to be picked.
This is just one of the ways that papers can go wrong and have to be withdrawn. Based on a quick search through Retraction Watch, it looks like the most common type of fraud is plagiarism, both of the typical stealing-someone-else’s-text-or-figures kind and of the murkier self-plagiarism kind, where you copy your own words from, say, the last paper you wrote on the subject. After that, there is everything from misrepresented procedures, to experiments that cannot be reproduced, to failing to give proper authorship to someone who did a lot of the work, to fake journals that will publish anything, to outright fabricated data. (And then there’s this.)
Obviously, malicious science is bad. Do I have a solution for this? No, not really… People smarter than I am are working on shoring up the flaws that have been exploited so far (it seems this fake-peer-review thing was a massive tiger whose tail the scientific community has been chasing for a few years): requiring multiple reviewers, making trusted shortlists of people to review papers, warning everyone about the predatory journals that will publish anything for money… But as for the root cause? Right now in academia you’re judged by your standing as a researcher, and that comes from your publications (more so if people don’t know you personally, which is increasingly likely as the field grows). As long as there’s a reward for publishing, SOMEONE is going to try to game the system.
I’m not sure if it’s getting worse (read Retraction Watch and you’ll think the entire modern scientific enterprise is full of frauds, but retractions are actually very few compared to the torrent of papers being published every day), but at least people are looking at the problem. And finding the frauds is good. Because beyond the retracted papers themselves, we now have to deal with all the subsequent well-meaning papers that may have built on the (possibly fake) conclusions. And entire political movements built up around falsified results.
*This is meant to protect against the other kind of peer-review malfeasance: a scientific rival torpedoing your work because they hate you, or have a vested interest in something else. Although that worry is not always borne out: in the 1960s, Peter van de Kamp announced the discovery of planets orbiting Barnard’s Star, the (then) second-closest star system to the Sun. Because it was an extraordinary discovery, others rushed to replicate it, and George Gatewood and Heinrich Eichhorn found that they couldn’t, and that the “planets” were actually an artifact in the data. Supposedly, van de Kamp was the peer reviewer on that paper, and he let it through even though it undermined his claim to fame. That’s how peer review is supposed to work. Of course, van de Kamp felt he could overcome their objections, but that’s not the point.