Monday, August 15, 2011

Retractions of Scientific Articles

Last Wednesday an article appeared on the front page of The Wall Street Journal, "Mistakes in Scientific Studies Surge," that got our attention. Retractions of articles from scientific journals have been climbing at a staggering rate. While the number of articles published has increased by 44% since 2001, the number of retractions has "leapt more than 15-fold." In the area of medicine, there were 87 retractions in 2001-2005. But in 2006-2010, there were 436 retractions, a five-fold increase. These retractions were not confined to third-rate, fly-by-night publications. They included some of the most prestigious journals, such as Nature, Science, and the New England Journal of Medicine. The WSJ article focuses on a blood pressure research paper that was published in The Lancet in 2003 and retracted six and a half years later. That's a pretty long lag time. But, as you will soon learn, it is by no means the longest lag time on record. Increasingly long lag times between publication and retraction are another trend identified by the article.



We wish there were an easily available resource, like what Westlaw has for cases, so that one could determine when a science article has been questioned or undermined. There would be lots and lots of red flags, apparently. Some online versions of journals try to render retracted, discredited articles nearly illegible by digitally stamping the word "retracted" over every page. But there is also the problem of subsequent research that in some way relied upon a paper later found to be erroneous. That taint is another reason why long lag times for retractions are problematic. There is a website called Retraction Watch that can be a useful tool, but even so, keeping up with the avalanche of retractions keeps getting tougher.



The reliability of scientific publications matters to us, because many of our cases involve battles over what the research does or does not say about issues such as efficacy or safety. Jurors in our cases, especially those cases lasting more than a few weeks, have practically earned a degree in epidemiology by the time they have sat through heated, back-and-forth testimony about selection bias and p-values. Scientific publications are used in product liability litigation to prove up matters such as causation, notice to the company, or notice to learned intermediaries. Different jurisdictions have different rules about the extent to which the content of articles can be paraded before the fact-finder, but -- let's face it -- experts usually can find a way to discuss articles in some detail to support their opinions.



Sometimes, in a sort of meta-scientific matter that takes up more trial time and ups the emotional ante, plaintiffs will argue that the company bought the science via grants, ghost-writing, and whatever other awful-sounding practices are hinted at in some stupid email or conjured up by an imaginative plaintiff wind-up expert. Just getting in the dollar amounts of the company grants can cause grief. The words of one particularly colorful plaintiff lawyer are still ringing in our ears: "X Corp. spent more money rigging the science than trying to learn the truth or fixing their product." Plaintiff lawyers and their experts might even try to use journal articles that argue that other journal articles suffer from pro-industry bias. And so on. The publication issue starts to look like an Escher print. It becomes a sideshow. It's unfair and it's hogwash because, frankly, companies should be investing in research concerning their products. That said, the plaintiff tactic can be pretty effective. Sometimes the answer is nothing more than this: the data is the data. Sometimes a prudent defense will forgo using perfectly fine research if the company funded it, just to keep the funding issue out of the case.



We've written in the past about a particularly high-profile retraction of an article that had been heavily used by plaintiffs in vaccine-autism litigation. In 1998, Andrew Wakefield and his co-authors published an article in The Lancet suggesting a possible link between MMR vaccines and autism. On February 2, 2010, the editors of The Lancet retracted the paper -- almost twelve years after publication. That is a long delay, to say the least. In the meantime, the Wakefield article was cited in more than 650 other published articles. We are not the only ones who believe that a lot of harm was done in the interim. Many parents decided not to have their children vaccinated. According to Scientific American, "outbreaks of measles following a drop in vaccination rates have occurred in the U.S. and U.K. in the past several years. In 2008 the number of measles cases increased by 36 percent in England and Wales, killing an estimated 10 individuals in those countries between 1990 and 2008." In sum, Wakefield's article wasn't just flawed (or, some would say, something that rhymes with flawed), it was deadly.



And it's not as if retraction halts the public health problems caused by the offending research. Accusations ring more loudly than retractions. As Paul Offit, chief of the Division of Infectious Diseases and director of the Vaccine Education Center at the Children's Hospital of Philadelphia, said, "It's very hard to un-scare people once you've scared them." It's almost like what former Labor Secretary Raymond Donovan said after he was acquitted in a criminal trial: "What office do I go to to get my reputation back?" Even after retraction of the Wakefield article, there are still scientists, parents, and plaintiff lawyers who cling to the vaccine-autism theory. Why let facts get in the way?



The WSJ article is interesting both for what it says and for what it does not say. What it says is that sometimes the data and/or the analyses in scientific journals are wrong. Why? Is it because there are more and more (and thus more crappy) journals? Is it because more research is taking place, or being published, in countries lacking the type of scientific infrastructure needed to detect misconduct? Are journals getting better at detecting errors? There are now some very sophisticated software programs that can identify plagiarism, for example. Is it because some scientists are getting dumber or more careless? Worst of all, is it because some scientists are getting more corrupt? Sadly, that might be the case. Retractions due to fraud increased more than seven-fold between 2004 and 2009, while overall retractions merely (merely?!) doubled.
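For the curious, the core idea behind those plagiarism-detection programs is simple enough to sketch. What follows is a toy illustration of our own devising, not the method of any actual vendor: break each document into overlapping word sequences ("shingles") and measure how much the two sets overlap. Real tools are far more sophisticated, but the concept looks something like this:

```python
# Toy sketch of the idea behind plagiarism detection: compare the sets of
# overlapping word n-grams ("shingles") in two documents. This is purely
# illustrative; commercial tools use much more elaborate techniques.

def shingles(text, n=5):
    """Return the set of overlapping n-word sequences in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(doc_a, doc_b, n=5):
    """Jaccard similarity of the two documents' shingle sets (0.0 to 1.0)."""
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical example: a high score flags passages worth a human's attention.
original = "the number of retractions has leapt more than fifteen fold since 2001"
suspect = "the number of retractions has leapt more than fifteen fold in a decade"
print(f"overlap: {overlap_score(original, suspect, n=4):.2f}")
```

A high score proves nothing by itself, of course; it merely tells a human editor where to look.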



What the WSJ article does not say is that alleged industry malfeasance has anything to do with the recent rise in bad research. Too often, that bad research ends up supporting the plaintiff theory du jour. (Truth be told, much plaintiff junk science doesn't even have the support of bad research; it has NO research behind it.) If scientific fraud is on the rise, it doesn't seem to have anything to do with the plaintiff view of the world that companies buy science. Rather, the increasing rate of academic fraud is a result of academic pressure. Henry Kissinger once said that battles among academics are so nasty because the stakes are so low. But to professors competing for tenure, the stakes don't seem so low. In an environment of publish or perish, it's no surprise that some people would rather publish twaddle than perish. Some academics are determined to make a splash, and there is a bias in favor of publishing dramatic results. So we get research that is eye-catching, impressive ... and wrong.



The stakes in the cases we litigate aren't low. Depending on the importance of the scientific literature in the case, the frailty or vigor of the research can be a key issue in expert discovery, trial presentations, or, for that matter, voir dire. The lawyers and the experts need to understand whether the relevant literature has been attacked or undermined, and the lawyers certainly need to understand whether bad science has infected the information environment.



* * * * * * * * *



Speaking of retractions, the original version of this post incorrectly attributed the "what office do I go to to get my reputation back" quotation. Apologies for that (our memory grows increasingly unreliable as we plunge into dotage) and thanks to a particularly perceptive reader. Someone else pointed out that the frantic doings in the movie Shampoo are not confined to one day, as we said in a recent post. That is technically true, but only barely. We won't change or retract on that one. Let it be a monument to our susceptibility to that phenomenon described by Stanley Cavell, in which movies and memory play with each other in strange ways.