
Forensic Fail? As Research Continues to Underscore the Fallibility of Forensic Science, the Judge’s Role as Gatekeeper is More Important than Ever

by Brandon L. Garrett

Vol. 102 No. 1 (2018)

This year marks the 25th anniversary of the Supreme Court’s decision in Daubert v. Merrell Dow Pharmaceuticals, Inc., which fundamentally reshaped how judges evaluate scientific and expert evidence.1 This volume of Judicature, with three wonderful contributions by Jay Koehler, Pate Skene, and an expert team led by William Thompson, comes at an ideal time to reconsider how successful the modern judicial approach to expert evidence has been. That approach is now reflected in Federal Rule of Evidence 702, revised in 2000 to comport with the Daubert ruling, and in the state rules of evidence and state judicial rulings that have followed suit in most states.2

The Supreme Court’s Daubert ruling coincided with a surge in scientific research relevant to criminal cases, including the development of modern DNA testing, which both exonerated hundreds of individuals and provided more accurate evidence of guilt. Since then, leading scientific commissions have pointed out real shortcomings in the use of forensic evidence in the courtroom. They have also noted that judges have largely abdicated their responsibility as gatekeepers.3 Moreover, we have learned that those same DNA exonerations are not a one-sided triumph of modern forensic science: over half of those innocent people were originally convicted based on flawed, overstated, and unreliable forensics.4 A flood of scandals has led to audits of thousands of state and federal cases, lab closures, and review commissions. In response to such concerns, the Judicial Conference Advisory Committee on Evidence Rules has solicited comments on potential revisions to Rule 702 addressing forensic expert testimony.5

In this volume, William Thompson and his coauthors describe a sea change underway in forensics, particularly in the pattern disciplines. This is a time of crisis but also a time of great promise in forensic science. The terminology used to express conclusions, error-rate statistics, and the fundamental conception of what experts are doing are all in flux. Traditional disciplines like latent fingerprinting, practiced in much the same way for over a hundred years, are on the cusp of a transformation. In that field, analysts would typically state without qualification that a print was a “match” and came from the defendant, and that there was zero probability of error. Now it is well understood that no human expertise is immune from error, that any subjective comparison is inherently probabilistic, and that expertise depends on the proficiency of the person doing the analysis.

Today, as Thompson and colleagues describe, forensic expert conclusions are becoming more appropriately cautious in many disciplines. The 2016 report of the White House President’s Council of Advisors on Science and Technology (PCAST) emphasized the need to validate forensic methods, including by studying error rates and informing jurors of those error rates.6 The 2017 American Association for the Advancement of Science (AAAS) report, which Thompson co-authored, recommended such changes as well and made more detailed recommendations concerning the language to be used in latent fingerprinting.7 Indeed, forensic conclusions may soon be quantitative; Thompson describes the move to incorporate statistics into forensics. Researchers are hard at work on methods to use algorithms to supplement or even supplant the subjective judgment of individual forensic analysts.

Next, Jay Koehler focuses on reliability: What are the error rates for forensics? As Koehler notes, many people assume that forensics are nearly infallible. If jurors think that forensic experts are infallible but judges know they are not, then what is the obligation of a judge to ensure that the jury is informed about the limitations of the science?

Rule 702 states that an expert may testify only if the testimony is the product of “reliable principles and methods” that are “reliably applied” to the facts.8 Or as the Advisory Committee states, judges shall “exclude unreliable expert testimony.”9 Koehler is right that now is the time to ask whether the “reliability rule” adopted in Daubert and in Rule 702 is being appropriately used by the judiciary.

Koehler also highlights the (sometimes quite aggressive) responses to the PCAST report. Some members of government agencies and professional organizations called the report unfounded and biased for suggesting that a range of forensic disciplines lack empirically validated reliability. They maintain that there is no problem with continuing to rely on an expert’s experience and subjective professional judgment.

In response, Koehler emphasizes that judges should not admit evidence just because an expert claims to have experience. Judges should not admit evidence just because other judges have done so for a long time. Judges should not admit evidence just because experts take (extremely unrealistic and easy) proficiency tests. Experts should have to show that their work is reliable and that they are truly proficient. That is, after all, what Rule 702 demands. The problem, Koehler concludes, is not with the text of the rule, but with its laissez-faire application by judges.

Finally, Pate Skene further explores the proper role of judges at a time when the empirical evidence supporting many forensic techniques is mixed or lacking. Skene describes how judges have themselves raised real questions about the reliability of commonly used forensic techniques. He focuses on the problem that, for many techniques, well-designed empirical studies have not yet been conducted to validate their reliability. The PCAST report emphasized as much.

Turning back to jurors, Skene highlights how important it is for judges not just to exercise their role as gatekeepers, but also to ensure that when forensic evidence is admitted, jurors hear about its limitations. Jurors are highly receptive to information about error rates in forensic techniques and about the proficiency of particular forensic analysts, as Greg Mitchell and I have shown in several studies.10 Skene suggests that such information may be conveyed through jury instructions or by additional experts who can explain error rates or reliability concerns to the jury. He also suggests that the need for judicial intervention to educate jurors will be greatest when less is known about the reliability of a forensic technique.

Twenty-five years after Daubert, the reliability revolution is still nascent. In an era of plea bargaining and the vanishing criminal trial, it is all the more important that judges safeguard reliability, since it will be the rare occasion when a fact-finder can scrutinize reliability in the courtroom. It is equally important that crime labs themselves incorporate blind proficiency and error management as part of routine quality control. In response to quite complex problems, these thought-provoking contributions from Koehler, Thompson, and Skene set out a clear agenda to bring reliability back into our criminal courtrooms.

Footnotes:
1. 509 U.S. 579 (1993).
2. Fed. R. Evid. 702; see Brandon L. Garrett & M. Chris Fabricant, The Myth of the Reliability Test, 86 Fordham L. Rev. 1559 (2018) (summarizing state expert evidence rules, listing states that adopted the revised federal Rule 702, and analyzing state court rulings).
3. See Nat’l Research Council, Strengthening Forensic Science in the United States: A Path Forward 95–97 (2009).
4. See generally Brandon L. Garrett & Peter J. Neufeld, Invalid Forensic Science Testimony and Wrongful Convictions, 95 Va. L. Rev. 1 (2009).
5. Daniel J. Capra, Foreword: Reed Symposium on Forensic Expert Testimony, Daubert, and Rule 702, 86 Fordham L. Rev. 1459 (2018).
6. President’s Council of Advisors on Sci. & Tech., Exec. Office of the President, Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods (2016).
7. Am. Ass’n for the Advancement of Sci., Forensic Science Assessments: A Quality and Gap Analysis: Latent Fingerprint Examination (2017).
8. Fed. R. Evid. 702(c)–(d).
9. Fed. R. Evid. 702 advisory committee’s notes to 2000 amendment.
10. Brandon L. Garrett & Gregory Mitchell, The Proficiency of Experts, 166 U. Pa. L. Rev. (forthcoming 2018); Gregory Mitchell & Brandon L. Garrett, The Impact of Proficiency Testing Information on the Weight Given to Fingerprint Evidence (draft on file with author); Brandon Garrett & Gregory Mitchell, How Jurors Evaluate Fingerprint Evidence: The Relative Importance of Match Language, Method Information and Error Acknowledgement, 10 J. Empirical Legal Stud. 484 (2013).