Thursday, May 3, 2012

Predictive Coding Gets A Chance

Bexis attended the annual PLAC spring meeting last week.  PLAC meetings are almost always good for at least one blog post.  This is it.


 
In the high-tech morass that is ediscovery, parties have tried various ways to narrow the disparity between cost and benefit.  One approach is to use new technology to fix – or at least ameliorate – the problems created by the explosion in electronic information that existing technology produced.

 
One such proposed technological fix is called “predictive coding.”  Googling that phrase yields far more technical information than we could possibly provide (or maybe even understand), so in the nutshell of a very small nut:  predictive coding takes advantage of artificial-intelligence software that enables a computer to learn from its mistakes and adjust its processes accordingly.  Attorney review of documents before production is a major driver of excessive ediscovery cost.

 
Predictive coding can reduce that cost by using computers to extrapolate actual attorney review of a small subset (a “seed set”) of edocuments across the entire proposed production.  The attorneys review the seed set; the computer then codes a similar set of documents based upon the attorneys’ choices.  The attorneys review that set and correct the computer’s errors.  The computer codes another set, having incorporated the attorneys’ corrections.  The process repeats for however many rounds it takes, until everyone is satisfied that the error rate (both false positives and false negatives) is acceptable.  The vendors claim predictive coding ultimately makes fewer mistakes than review by actual human attorneys.  Take those financially interested claims with however many grains of salt you believe they deserve.
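
 
For the curious, here’s a rough sketch of that feedback loop in Python.  Everything about it is illustrative – we use an off-the-shelf classifier (scikit-learn) as a stand-in for the vendors’ proprietary software, and attorney_review is a hypothetical placeholder for the human coding step – so treat it as the shape of the process, not anyone’s actual implementation.

```python
# Illustrative only: scikit-learn stands in for a vendor's proprietary
# software, and attorney_review is a hypothetical human-coding step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def predictive_coding_loop(seed_docs, seed_labels, batches, attorney_review,
                           min_rounds=7, target_agreement=0.95):
    """Train on an attorney-coded seed set, then refine round by round."""
    vectorizer = TfidfVectorizer()
    model = LogisticRegression(max_iter=1000)
    docs, labels = list(seed_docs), list(seed_labels)  # 1 = responsive, 0 = not

    for round_no, batch in enumerate(batches, start=1):
        # Fit on everything coded so far; let the computer code the new batch.
        model.fit(vectorizer.fit_transform(docs), labels)
        predicted = model.predict(vectorizer.transform(batch))

        # Attorneys review the same batch and correct the computer's errors.
        corrected = attorney_review(batch)
        agreement = sum(int(p == c) for p, c in zip(predicted, corrected)) / len(batch)

        # Feed the corrections back in and go again.  Moore set seven
        # iterations, with more if the results had not yet stabilized.
        docs += list(batch)
        labels += list(corrected)
        if round_no >= min_rounds and agreement >= target_agreement:
            break
    return vectorizer, model
```

The stopping rule is where the courts come in:  the loop ends only when the error rate – both false positives and false negatives – is acceptable to the parties and, if they disagree, to the court.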

 
But until recently, no court anywhere had authorized the use of predictive coding in actual ediscovery.  Now that’s changed.  A presentation we heard at the PLAC spring meeting last week (by David Cohen of Reed Smith) mentioned four decisions in three cases where predictive coding had been judicially authorized as an ediscovery tool.

 
The oldest of them was decided less than three months ago.  In Moore v. Publicis Groupe, ___ F. Supp. 2d ___, 2012 WL 607412 (Mag. S.D.N.Y. Feb. 24, 2012), a magistrate judge declared “that computer-assisted review is an acceptable way to search for relevant ESI in appropriate cases.”  Id. at *1.  In Moore the parties had initially “agreed” to use predictive coding, but disputes then (predictably?) arose, requiring judicial resolution.  Perhaps not so coincidentally, the magistrate before whom that agreement was reached had personally written an article on the benefits of predictive coding, id. at *2, and lawyers sure pay attention when their judge starts quoting his/her own articles.

 
Be that as it may, the first Moore decision (yes, there’s Moore to follow) established some guidelines for the use of predictive coding:
  • “[P]roportionality requires consideration of results as well as costs. And if stopping at 40,000 [out of a universe of 3 million documents] is going to leave a tremendous number of likely highly responsive documents unproduced, [the defendant’s] proposed cutoff doesn't work.”  Id. at *3.
  • A seed set of 2,400 documents would be culled and reviewed until the predictive coding process reached a 95% confidence level that the documents generated by the program were responsive (see the sampling sketch following this list).  Id. at *5.
  • “[A]ll of the documents [in] the seed set, whether . . . ultimately coded relevant or irrelevant, aside from privilege, will be turned over to” plaintiffs.  Id.  Both sides could code documents in the seed set. Id.
  • The seed set coding itself involved two processes:  a keyword code and “judgmental sampling” – the latter to be performed by “senior attorneys” who would not otherwise be conducting a manual document review.  Id.
  • The number of training iterations for the computer was initially set at seven, with the possibility of more if the results had not stabilized.  Id. at *6.
  • Predictive coding was accurate enough to support a certification that a disclosure was “complete and correct” under Fed. R. Civ. P. 26(g)(1)(A).  Id. at *7.
  • Daubert requirements do not apply to a determination of the validity of an ediscovery method.  Id.
  • Accuracy concerns would be addressed “down the road” by reviewing documents from that seed set that the predictive coding system had judged irrelevant.  Id. at *8.  If the system was deeming “hot documents” to be “irrelevant,” then the software would have to be “retrained” or “some other search method employed.”  Id.
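
 
Where does a figure like “95% confidence” come from?  Sampling statistics.  Here’s a back-of-the-envelope sketch in Python – our own arithmetic, not the court’s – using the standard binomial sample-size formula:

```python
import math

def sample_size(z=1.96, margin=0.05, p=0.5):
    """Documents to sample for a given confidence level and margin of error.

    z = 1.96 corresponds to 95% confidence; p = 0.5 is the worst case
    (the responsiveness rate that maximizes the required sample).
    """
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size())             # 385 documents: 95% confidence, +/-5% margin
print(sample_size(margin=0.02))  # 2401 documents: 95% confidence, +/-2% margin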

 
The first Moore opinion stated that it was dealing with the “easy” case, where both sides agreed to the use of predictive coding.  In dictum, the magistrate addressed what a “harder” case – one where a party did not want predictive coding at all – would entail:

 
The question to ask in that situation is what methodology would the requesting party suggest instead?  Linear manual review is simply too expensive where, as here, there are over three million emails to review.  Moreover, while some lawyers still consider manual review to be the “gold standard,” that is a myth, as statistics clearly show that computerized searches are at least as accurate, if not more so, than manual review. . . .  [O]n every measure, the performance of [predictive coding] was at least as accurate (measured against the original review) as that of human re-review.

2012 WL 607412, at *9 (citation and quotation marks omitted).


 
The first Moore opinion closed with four “lessons for the future”:  (1) judicial approval of predictive coding is necessarily tentative in any given case, because how well such processes work can only be determined by their results; (2) predictive coding requires staged discovery to work; (3) counsel need to learn, ahead of time, whatever their clients know about the producing party’s records; and (4) ediscovery vendors should participate in hearings concerning predictive coding.  Id. at *12.

 
As the first Moore opinion was by a magistrate, it carried with it a right of appeal to the relevant federal district court.  Fed. R. Civ. P. 72(a).  Rather than go through with predictive coding, the plaintiffs in Moore took such an appeal – indeed, once away from the magistrate (and his published article) they appeared to revoke their consent to predictive coding altogether.  In Moore v. Publicis Groupe SA, 2012 WL 1446534 (S.D.N.Y. April 26, 2012), the district court affirmed – just in time for the PLAC spring meeting.

 
The district court affirmed the magistrate in all respects – adding that it didn’t matter whether the plaintiffs agreed to predictive coding or not, since the procedures adequately protected their rights:

 
[T]he confusion [over plaintiff’s consent] is immaterial because the ESI protocol contains standards for measuring the reliability of the process and the protocol builds in levels of participation by Plaintiffs.  It provides that the search methods will be carefully crafted and tested for quality assurance, with Plaintiffs participating in their implementation. . . .  If there is a concern with the relevance of the culled documents, the parties may raise the issue before [the magistrate] before the final production.  Further, upon the receipt of the production, if Plaintiffs determine that they are missing relevant documents, they may revisit the issue of whether the software is the best method.


 
Moore II, 2012 WL 1446534, at *2.  The reliability of predictive coding can only be determined by looking at its results.  Thus, it is “premature” and “speculative” to raise reliability concerns before the system is tested in practice.  If problems arise, then “the parties are allowed to reconsider their methods.”  Id.

 
In the end, the court in Moore II cautioned that perfection cannot be allowed to become the enemy of the good.  Proportionality has a role to play in determining how ediscovery is to be conducted:

 
There simply is no review tool that guarantees perfection. . . .  [T]here are risks inherent in any method of reviewing electronic documents.  Manual review with keyword searches is costly, though appropriate in certain situations.  However, even if all parties here were willing to entertain the notion of manually reviewing the documents, such review is prone to human error and marred with inconsistencies from the various attorneys’ determination of whether a document is responsive.


 
2012 WL 1446534, at *3.

 
The PLAC presentation also indicated that predictive coding had been approved in Kleen Products LLC et al. v. Packaging Corporation of America, No. 1:10-cv-05711 (N.D. Ill.), earlier that same week (that is, last week as of this writing).  We have a PACER account, and we’re not afraid to use it, so we looked up the docket for that case.  Unfortunately, we can neither confirm nor deny approval of predictive coding in Kleen, because no such order appears on PACER.  PACER does, however, indicate that a discovery hearing was held on April 20, 2012, so it’s likely that an oral ruling occurred and the parties are still working out the terms of the order.  There’s also a “transcript” entry in the docket, but it was not accessible through PACER, so all we can say at this point is that it exists.

 
Finally, a state court in Virginia has recently authorized predictive coding.  See Global Aerospace Inc. v. Landow Aviation, 2012 WL 1431215 (Va. Cir. Loudoun Co. April 23, 2012) (“it is hereby ordered Defendants shall be allowed to proceed with the use of predictive coding for purposes of the processing and production of electronically stored information, with processing to be completed within 60 days and production to follow as soon as practicable and in no more than 60 days”).  That’s it, however – no rationale is provided.

 
As far as we know (we ran a search just to be sure), these cases represent the sum total of all judicial opinions that have ever discussed predictive coding.  But then, several months ago there were exactly zero.  It’s a fast-moving field.  Stay tuned.

 
So why should we care?

 
Two reasons – from a defense perspective.  Number one, it promises to be a hell of a lot cheaper than manual review.  High costs = high nuisance value = higher settlements = more incentive for the other side to bring more meritless lawsuits.  Not to mention that we should, as a general principle, try to save our clients money whenever we can.  Number two, ediscovery is incredibly complicated, and things can go awry.  When things go awry, defendants (and their lawyers) tend to get blamed, because we’re almost always the producing party.  But if a court orders predictive coding and things go awry….  At that point it’s harder to blame (and/or sanction) us, because we’re doing exactly what the court ordered us to do.  Thus, plaintiffs are less able to litigate ediscovery and instead must contend with (see number one) the merits of their lawsuits.  The bottom line (we hope) is the same:  fewer meritless lawsuits.