Forensic science is at a crossroads. In the last two decades, often-used forms of pattern evidence, such as fingerprint, tool mark, and bite mark identification, have faced significant criticism for lacking adequate scientific validation or proven reliability. Is this the beginning of a sea change, signaling the rise of a science-based, empirically grounded approach to these forms of evidence, both in the courtroom and in the crime laboratory? Or has the increased attention produced Band-Aids rather than meaningful and lasting cures? This essay argues that the current state of forensic science reform is both “half empty” and “half full.” Looking first at bite mark evidence, then at modifications in the language used by forensic scientists for their courtroom testimony, and, finally, at the creation and the elimination of the National Commission on Forensic Science, this essay argues that we have thus far seen modest and meaningful – but far from adequate or transformative – reform. Our best hope for the sustained, substantial changes necessary to improve forensic science evidence within our system of justice lies in the creation of another national commission or other institutional body, made up of both research scientists and other institutional stakeholders, and situated so as to prevent “capture” by either forensic practitioners or advocates within our adversarial system.
Forensic science evidence is at a crossroads. Over the last two decades, forensic science claims and methods have been subject to a growing chorus of academic and scientific criticism. Much of the criticism has focused on the deeply inadequate research foundations of many forms of regularly used pattern identification evidence, including latent fingerprints, tool marks, bullets, bite marks, documents, and signatures. Important reports by experts and from authoritative institutional bodies such as the National Academy of Sciences (nas) and the President's Council of Advisors on Science and Technology (pcast) have forcefully expressed concerns about the reliability and validity of these and other forensic science techniques like blood spatter evidence and arson determinations. This attention and criticism have expanded public awareness and spurred nontrivial reforms and meaningful institutional and research engagements. Still, as of yet, there has been little fundamental change in how forensic science is used in courtrooms around the country.
This present reality – a host of meaningful but mostly superficial changes alongside a still-faltering trickle of serious research – permits two radically different stories to be told about the likely future of the forensic sciences over the next decade or two. It would be possible (though just barely) to tell a “momentum” story, suggesting that we are on the cusp of an increasingly empirically based, science-driven approach to the validation and use of these influential kinds of evidence. But an equally possible – and, in my view, more realistic – story would characterize the changes made thus far as genuine, but limited and sputtering efforts at reform, unlikely to operate as gateways toward necessary substantial transformations, at least on the near horizon.
Looking at present-day forensic science is thus akin to peering at one of those well-known optical illusions like the figure on the following page, where, from one perspective, the viewer sees a young woman looking away from the observer, but then by squinting or shifting the visual vantage point, the viewer sees the image transformed into an elderly lady, eyes cast downward (see Figure 1).
In this essay, I explore these two linked but disparate ways of understanding the current state of forensic sciences and their use in the courtroom. My argument is, in essence, that neither the present nor the future of the forensic sciences can be adequately understood without taking into account both perspectives at once, somehow finding a way to see both the elegant young lady and the elderly woman at the same time. Furthermore, I suggest that unless we can create a legitimate, appropriately independent institutional body to engage meaningfully with ongoing important questions about the forensic sciences – an authoritative commission or task force institutionally located at least one step outside both the adversarial system and law enforcement-linked forensic science laboratories – the chance of sustained, thoughtful reform in the service of justice is distressingly low.
I start with some basic background information about the current state of the forensic sciences, especially forms of pattern evidence, and then look more closely at three examples that illustrate how the current state of the forensic sciences is a glass both half empty and half full. We have, on the one hand, seen genuine progress toward scientifically sounder forensic science claims; but on the other hand, that progress is dispiritingly limited. Specifically, I first look at the disparate approaches taken in two recent bite mark identification cases, a revealing contrast because bite mark comparisons are one of the least probative and most poorly validated kinds of forensic science evidence currently in regular use. I then turn to examine some modest court-imposed modifications on how forensic experts may testify; and finally, I look, very briefly, at the creation and the dismantling of the National Commission on Forensic Science (ncfs). Unfortunately, with the decision not to recommission the ncfs, we now lack any locus for a broadly conceived, authoritative panel of experts and stakeholders to convene regularly to assess the state of forensic science and recommend reforms. Unless we can reproduce some space and place for those engagements, the hope of ongoing momentum for thoughtful reform and engagement substantially dims. Although I appreciate that an ongoing effort spearheaded by the National Institute of Standards and Technology (nist) may bring some valuable, albeit limited, near-term improvements, unless we are able to create some broader site for sustained and fair-minded engagement, outside of the control of both forensic scientists themselves and stakeholders in the adversary system, changes are likely to be partial, faltering, incremental, and insufficient.
To situate these issues, I offer a somewhat whirlwind overview of key concerns in the forensic pattern identification fields, a broad-strokes overview of core issues and limitations that everyone who wishes to be educated in the controversies surrounding pattern evidence ought to know.
Many feature-comparison methods have been used in court (and for investigatory purposes) for decades, or in some instances, for more than a century. Expert handwriting identification evidence, for example, has a nineteenth-century origin; latent fingerprint identification was first used in the United States in 1911, in the case People v. Jennings; firearms and bullet comparison evidence received attention in the Sacco and Vanzetti trial in 1920 and began to be used with increasing frequency thereafter; and bite mark identification evidence appears in some Salem witchcraft trials, but its more modern origin story traces back to People v. Marx, a 1975 California murder case involving a bite mark on the murder victim's nose.1
Forensic science pattern identification methods grew up mostly outside of universities, undisciplined by norms of academic research. Rather, these methods were primarily developed to aid criminal investigations and facilitate crime-solving. From the late 1920s and 1930s onwards, they largely gestated and developed within crime laboratories, and these laboratories were usually structured as appendages to law enforcement.2
Until recently, most forensic scientists had law enforcement backgrounds that typically did not include substantial formal training in science. Traditionally, many forensic analysts and crime scene investigators were sworn officers, though this is much less true today. Many forensic laboratories do now require their analysts to have an undergraduate science or forensic science degree, but even now, few forensic practitioners have Ph.D.-level training in science.3
Given the educational backgrounds and experience of their personnel, forensic science laboratories have, unsurprisingly, generally lacked a “research culture.”4 Until recently, to the extent that there was any university-based research in support of the forensic sciences, professorial investigators were few; those who did exist tended to be closely involved with law enforcement communities, and their efforts were primarily directed toward justifying or increasing the reach of the forensic sciences rather than putting their knowledge claims to hard tests or validating the specific methods used.
Indeed, until approximately the last decade, there was remarkably little serious, methodologically sophisticated research conducted on forensic science error rates or methods. We continue to have many gaps in our knowledge about the accuracy of examiners and the real-world error rates for most pattern identification sciences.5 Given their widespread use in the courtroom, it is rather astonishing how little peer-reviewed, high-quality evidence establishes the scientific validity of often-used methods like tool mark assessments and firearms identification. More research, but still not a great deal, exists for areas like fingerprint and document identification.
The risks posed by inadequately validated forensic science are not merely theoretical or abstract. In wrongful convictions established through dna-based exonerations, faulty forensic science appears in roughly half of the known cases, making it the second most frequently found contributing factor (second only to erroneous eyewitness identification).6
For what purpose is pattern evidence used in court, and how is it presented? Forensic science fields answer questions like: Was the defendant the source of the fingerprint lifted from the crime scene? Did a bullet or cartridge casing match the defendant's gun? It is important to recognize, however, that in these and other areas, the words used to describe a forensic scientist's findings, like “match” and “identification,” are fuzzy and not self-explicating. Furthermore, they can have either a strong or a weak meaning. Some fields traditionally claim a strong conclusion that individualizes pattern evidence to a specific person or source, like linking a fingerprint uniquely to the defendant's index finger. In other fields, accepted conclusions take weaker forms. When microscopic hair analysis was regularly used, for example, the field's standards prohibited the conclusion that any specific individual was the definite source of a given hair. An expert was supposed to testify only that the hairs examined shared a set of class characteristics that made it possible that they shared a common source, meaning that hairs found at a crime scene might have come from the defendant (while the scientist was also expected to acknowledge that other individuals' hair could also match the hairs taken from the crime scene). Nevertheless, a recent audit, jointly conducted by the fbi and the Innocence Project, found that microscopic hair identification experts regularly overstated their conclusions when testifying, frequently going well beyond what the field ostensibly permitted.7 Thus, one concern about these kinds of evidence relates to fields making too-strong, scientifically unvalidated claims (like “individualization”); another relates to examiners who go beyond the field norms and testify to stronger conclusions than can be supported by the field.
Pattern identification determinations by experts are fundamentally – and are acknowledged by practitioners to be – subjective. They lack formal, validated criteria for determining a match. Rather, all the forensic feature pattern fields involve a trained examiner looking closely at the questioned item and determining, based on training, experience, and judgment, whether sufficient similarity exists to claim a match.8 As one fingerprint expert stated nearly twenty years ago (as true now as when he wrote it), determining a match requires a certain “leap of faith” through which the expert becomes “subjectively certain.”9
Apart from dna identification, none of the widely used forms of pattern identification evidence currently rest upon an established statistical foundation.10 At present, pattern identification experts across the whole range of other forensic identification fields, including firearms, tool mark, microscopic hair, latent fingerprint, and bite mark identification, as well as questioned document examination, cannot assert a quantitative, validated probability that two items of evidence match or share a common source. No fully validated models provide specified statistical probability akin to the “random match probability” used with dna evidence. (There are, however, some reasonably promising preliminary efforts under development.)
However, at the same time, at a conceptual level, pattern identification claims are and must be probabilistic; we just do not yet have fully specified, validated probabilistic models.11 Forensic examiners once commonly resisted the claim that they were engaged in a probabilistic enterprise, but there is now a growing acceptance among forensic thought leaders (if not, perhaps, the rank and file) that comparison judgments are indeed probabilistic at their core. Perhaps foreshadowing changes to come, the Department of Defense fingerprint laboratory recently became the first to require its analysts to use likelihood ratios (based on an internally developed model) in their testimony to express the strength of a match in quantitative terms, but their model has not yet been subjected to formal validation or broad peer review.
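To make the likelihood-ratio idea concrete for readers unfamiliar with it, the sketch below uses invented numbers purely for illustration; it does not reflect the Department of Defense laboratory's actual (internally developed and not yet validated) model.

```python
# Minimal illustration of a likelihood ratio (LR) for match evidence.
# All probabilities below are invented for illustration only.

def likelihood_ratio(p_evidence_if_same_source, p_evidence_if_different_source):
    """LR = P(evidence | same source) / P(evidence | different source)."""
    return p_evidence_if_same_source / p_evidence_if_different_source

# Suppose the observed degree of similarity would occur 90 percent of the
# time if the prints share a source, but only 0.1 percent of the time if
# they come from different people (both figures hypothetical).
lr = likelihood_ratio(0.90, 0.001)
print(lr)  # ~900: the evidence is roughly 900 times more probable under
           # the same-source hypothesis than the different-source one.
```

Note that a likelihood ratio of 900 is not a 900-to-1 probability of guilt; it must still be combined with the prior odds, via Bayes' theorem, to yield posterior odds, which is one reason validated statistical models matter so much.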
Forensic practitioners typically do undergo proficiency tests that to some degree assess the accuracy of their judgments. Seldom, however, do these tests adequately mimic actual casework conditions. Not only are the samples included in these tests often easier to evaluate than many samples encountered in actual forensic work, but most often, these tests are not conducted “blind”; that is to say, analysts know they are being tested, which may induce them to take extra care.12 Although it is possible to insert realistic forensic samples within the stream of casework, blinding protocols for proficiency testing are rarely used. Best practices, following the model of randomized controlled trials in medicine, in which even the person providing the sample to the analyst does not know it is a test sample, are virtually unknown.13
Many forensic methods raise concerns about cognitive bias. Given that most forensic laboratories are affiliated with, or under the direct control of, law enforcement, analysts may view themselves as part of the law enforcement “team.” In addition, in many laboratories, forensic analysts may be privy to significant nonforensic case information and evidence, which risks inadvertently contaminating or biasing their judgments, no matter how ethical and well-intentioned an analyst may be. This problem is made more acute by the inherent subjectivity of the methods involved.14 A number of commentators have therefore argued for the development of protocols to prevent examiners from having access to inculpatory (or exculpatory) information unless or until it is needed for their forensic analysis.15 Some laboratories have taken steps in this direction, but it remains more the exception than the rule.
In addition to the danger of inadvertent cognitive bias, outright forensic fraud, in which bad-apple examiners knowingly lie about what they did or found, has occurred within numerous laboratories.16 There have been cases of intentional misstating of results; exaggerations so extreme that they cannot be attributed to mere carelessness; and even “dry-labbing,” where examiners report conclusions for tests they never conducted at all. The adversary system has largely failed to operate as a check on such fraud; the many known instances of fraud have only rarely been uncovered via the mechanisms of a trial, such as impeachment or cross-examination. When a single instance of fraud is somehow discovered, triggering a review of an examiner's past actions and reports, the review often reveals many other instances of dishonesty by that examiner.
Throughout the twentieth century, defense counsel rarely challenged the admissibility or scientific reliability of pattern identification evidence. This has begun to change, albeit in a limited fashion. While still uncommon, enterprising defense counsel have now mounted a number of substantial admissibility challenges in several different fields. Though almost no effort has led to the exclusion of the forensic evidence at issue, some evidentiary challenges have resulted in modest judicially imposed restrictions or limitations on a forensic examiner's testimony, typically restricting the language the examiner is permitted to use in describing the strength and meaning of a conclusion.17 Here, it should be noted, I am talking only about trials. We have virtually no information on the frequency with which the prosecution has shared flawed forensic science testimony with defense counsel who then used it to persuade a client to accept a plea deal.
Until recently, forensic practitioners in many of these pattern fields regularly testified in the language of total certainty rather than probability. They also frequently testified to being “100 percent confident” of their conclusions and sometimes claimed (preposterously) that the error rate of their technique was “zero.”18 While some practitioners still testify using language of this sort, many now make somewhat less absolute claims; indeed, there is a significant movement within the forensic science community to establish norms to preclude claims of such blatantly excessive certainty.
In 2009, the National Research Council, the research arm of the National Academy of Sciences, released a blockbuster report on the forensic sciences, which confirmed many of the limitations described above and emphasized the lack of adequate scientific validation for many forensic science fields. The report asserted, for example,
The simple reality is that the interpretation of forensic evidence is not always based on scientific studies to determine its validity. This is a serious problem…. There is a notable dearth of peer-reviewed, published studies establishing the scientific bases and validity of many forensic methods.19
This report, also calling for a new federal agency to provide oversight and assessment to the forensic sciences, seemed at first like it would be hard for courts and practitioners to ignore. However, after the report was published, some prosecutors argued in briefs and in statements that the nas report should have no bearing on admissibility decisions or on the judicial assessment of the validity or reliability of these methods, a position strongly challenged in print by one of the cochairs of the committee responsible for the report (himself a distinguished judge).20 While many courts assessing the admissibility of pattern evidence made mention of the report, and a number of judges evinced surprise at the dearth of strong research to validate these techniques, very few saw these weaknesses as requiring any fundamental change to their reception of these forensic sciences as legitimate forms of evidence.
The report did matter: it brought significant additional attention to pattern evidence and its weaknesses, and it no doubt helped to educate defense attorneys, some judges and prosecutors, and numerous forensic analysts themselves. As important as the nas report has been, however, the effects of the report on trial and appellate court admissibility decisions have to date been extremely modest.21 But it certainly changed the conversation surrounding these techniques.
Due in significant part to this report and its reception, a trickle of research into the validity of forensic science methods has begun to emerge, with a variety of results depending on the study and the field. For example, in 2011 – exactly one century after it was first admitted in a U.S. court – a team of researchers released the first published study looking at fingerprint error rates.22 This study found a small (though nonzero) false positive error rate and a more significant though still single digit (7.5 percent) false negative error rate. By contrast, the small number of studies examining error rates in bite mark identification uniformly show distressingly high – double digits or higher – levels of error.23
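Even seemingly small error rates can matter greatly in practice because of base rates. The back-of-the-envelope sketch below combines the 7.5 percent false-negative rate reported in the fingerprint study with an assumed 0.1 percent false-positive rate and an assumed prior, both hypothetical; the point is the arithmetic, not the particular numbers.

```python
# Back-of-the-envelope: how often is a reported "match" actually correct?
# The 7.5% false-negative rate is from the 2011 fingerprint study discussed
# above; the false-positive rate and the prior are assumed for illustration.

false_positive_rate = 0.001   # assumed: 0.1% of non-matching pairs called a match
false_negative_rate = 0.075   # reported: 7.5% of true matches missed

# Suppose (hypothetically) that 1 in 50 questioned prints in a given
# stream of casework actually comes from the suspect.
prior_same_source = 1 / 50

p_match_given_same = 1 - false_negative_rate          # 0.925
p_reported_match = (prior_same_source * p_match_given_same
                    + (1 - prior_same_source) * false_positive_rate)

# Bayes' theorem: probability that a reported match is a true match.
p_true_given_match = prior_same_source * p_match_given_same / p_reported_match
print(round(p_true_given_match, 3))  # -> 0.95: about 1 in 20 reported
                                     # matches would be erroneous.
```

Under these illustrative assumptions, roughly one in twenty reported matches would be wrong, despite the very low per-comparison false-positive rate; with a lower prior, the proportion of erroneous matches rises further.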
The 2009 nas report's call for the creation of a new, full-fledged administrative agency focused on forensic science never achieved significant political traction, but in 2013, as a partnership of the Department of Justice (doj) and the nist, the ncfs was born.24 It included a significant number of academic research scientists, as well as stakeholders from the forensic science and criminal justice communities, and emerged as a respected and significant location for addressing forensic science reform. However, after three years of operation, the Department of Justice decided not to reauthorize the Commission.
In 2016, a second distinguished group of scientists – the President's Council of Advisors on Science and Technology – issued its own major report about pattern identification sciences.25 This report, like the nas report seven years earlier, found a general dearth of adequate scientific studies to establish the validity of many kinds of forensic science. pcast's report made a variety of recommendations, including some pointedly directed at judges and their admissibility determinations. pcast asserted the fundamental need for scientific validity as a prerequisite for using scientific evidence (both as a matter of good science and good law). It further refined the concept of validity as it applies to the forensic sciences by identifying two key parts: foundational validity for any given field, and the validity of an analysis as applied in particular laboratories to particular casework.
The fundamental tenet of the pcast report was that foundational scientific validity requires appropriate empirical studies to establish accuracy and error rates. Such studies are a sine qua non: there simply can be no substitute.26 In the report's words:
Without appropriate estimates of accuracy [and error rates], an examiner's statement that two samples are similar – or even indistinguishable – is scientifically meaningless: it has no probative value, and considerable potential for prejudicial impact. Nothing – not training, personal experience, nor professional practices – can substitute for adequate empirical demonstration of accuracy.
pcast asserted that many forms of forensic pattern-matching evidence presently lack foundational validity. pcast found that simple dna analysis and latent fingerprint identification do have enough appropriate testing to establish foundational validity, but that numerous other fields, including firearms and tool mark identification, bite mark identification, and more complex dna mixture analysis, do not. The pcast report strongly intimated that these long-used forms of evidence ought not to be admitted at trials unless or until foundational validity (and validity as applied) can be established; and that some kinds of evidence, like bite mark evidence, were likely never to be proven valid.
Given these strong conclusions, it is perhaps unsurprising that prosecutors gave the report a strikingly chilly reception.27 Nor, unfortunately, do most courts seem inclined to take seriously pcast's framework and admonitions, despite pcast's high status and distinction, and notwithstanding the obligation of judges in the federal courts and in many states to play an explicit gatekeeping role in which they assess expert and scientific evidence for validity.
This broad-strokes overview of the past and present of forensic science illustrates the deep cultural divide between forensic science practice and research science. The pattern identification disciplines grew up within law enforcement, not universities; their methods are subjective and experience-based rather than objective or statistical; and forensic practitioners even today typically lack doctoral-level science training. Judgment honed by experience is the primary coin of the realm, not formal empirical study or statistical modeling. In many fields, we still lack substantial, validated information about how likely analysts are to offer inaccurate conclusions. Consider this point carefully. These forms of evidence are being used to establish guilt for serious criminal offenses, yet we lack substantial knowledge of how often the evidence presented is mistaken, overstated, imprecise, or wrong.
Furthermore, for most of their history, the fundamental epistemic legitimacy of these experience-based disciplines was almost never challenged in court. At best, the biases or competency of a testifying examiner was questioned or an effort was made on cross-examination to get an examiner to hedge a bit about the certainty of a conclusion. When defense challenges to admissibility are mounted now, courts sometimes insist upon modest changes to the words used by the expert, but generally admit the evidence, notwithstanding the lack of substantial scientific testing and validation. Several distinguished interdisciplinary panels of scientists and other experts have weighed in with great concern for, but thus far only modest influence on, the lack of adequate scientific foundation and validation in these fields. Finally, we must note the real, human costs to using problematic forms of forensic evidence in court, including the danger of wrongful conviction.
This overview of key points about forensic science shows that pattern identification fields have faced modest, but not transformative, reform. Half full: the emergence of some credible research; the recognition by many practitioners that they are engaged in a probabilistic enterprise; substantial focus on these issues from two extremely prestigious, highly legitimate bodies of experts, nas and pcast; the creation of the ncfs and a nist-led effort that calls on the forensic science disciplines to identify standards and approaches in need of reform. Half empty: relevant research remains limited and many fields still lack adequate validation; though probabilistic in theory, trial testimony remains grounded on experience and subjective judgment; both judges and practitioners have largely resisted the conclusions of the nas and pcast reports; the ncfs no longer exists.
I turn now in somewhat more detail to three specific examples that show, simultaneously, the existence of some degree of meaningful change, alongside reasons for, at best, muted optimism about further transformation, at least in the near term.
First, bite marks. On June 8, 2017, Alfred Swinton was released from prison after serving eighteen years of a sixty-year sentence for murder. His release came about because the bite mark identification that had been a lynchpin of the prosecution's case at the original trial was no longer deemed persuasive or valid by anyone involved (including the prosecutor's office and the original bite mark expert himself). Faced with this consensus, the judge vacated the original conviction, ordered a new trial, and released Swinton to house arrest while awaiting further judicial proceedings.
Swinton, once suspected of being a serial killer in Connecticut though never charged with other crimes, had his conviction vacated not just because of the crumbling credibility of the bite mark evidence, but also because dna testing showed that biological material from the bite mark (on the victim), as well as on the victim's nails, did not match Swinton's dna. In addition, a bra in Swinton's possession had been said at trial to have belonged to the victim, but dna retrieved from the bra did not match that of the victim. Without the dna corroboration of Swinton's claim of innocence, perhaps the prosecutor would have stood by the state's original contention that the bite mark identification constituted substantial proof of guilt. Even with the dna evidence and the discrediting of the bite mark evidence, the state has not yet conceded Swinton's innocence.
But the bite mark expert's disavowal of his earlier testimony pulled no punches: “I no longer believe with reasonable medical certainty – or with any degree of certainty – that the marks on [the victim] were created by Mr. Swinton's teeth, because of the recent developments in the scientific understanding of bite-mark analysis,” odontologist Constantine Karazulas told the Hartford Courant.28 He even called his earlier testimony “junk science” and stated that he “no longer believes that Mr. Swinton's dentition was uniquely capable of producing the bite marks I observed…. Indeed, many thousands of individuals could have produced those injuries.”29 (Interestingly, Karazulas used a novel form of computer enhancement to examine Swinton's dentition in the original trial; the issues surrounding this use of computer-enhanced images produced a lengthy, detailed Connecticut Supreme Court opinion affirming the legitimacy of the computer enhancement techniques he used, while breezing over the question of the reliability of bite mark identification in a mere footnote.)30
Does this case indicate a potential sea change for bite mark evidence, one of the most problematic forms of pattern evidence in current use? At the time of the original trial, the expert had called his own techniques “the new gold standard for forensic odontology” and celebrated his care and confidence in his conclusion of a match.31 Now, instead, he offers a forceful recantation of his earlier claims. Given this about-face by the expert, coupled with dna evidence that generally failed to link Swinton to the murder, the prosecution neither defended the legitimacy of the bite mark evidence nor opposed the defense's motion to vacate. If the prosecution decides to retry the case, it appears ready to acknowledge that bite mark identification evidence will have no legitimate role in the next go-round.
By contrast, just a couple of months earlier, in a retrial of a murder case in Pennsylvania vacated for constitutional flaws, a state court trial judge ruled bite mark evidence admissible. In motions preceding the trial, the judge even denied Paul Aaron Ross, the defendant (who was well represented with substantial involvement from the Innocence Project), the opportunity for a Frye hearing, in which the defendant would have argued that bite mark evidence was no longer “generally accepted” by the relevant scientific community, the standard that scientific and expert evidence must meet in Pennsylvania in order to be admissible.32 The defendant was therefore denied the opportunity to present testimony or detailed evidence about bite mark testimony's known weaknesses. Judge Jolene Kopriva appears to have denied the hearing primarily because bite mark evidence was not a novel kind of proof: “The unique aspect of this case is that challenges are being brought to an existing scientific field, not a novel methodology,” Kopriva said.33 “Although the use of bite mark evidence is beginning to face challenges, it would be premature,” she said, “for this court to order that the methodology is no longer generally accepted in the relevant scientific community.” There is, however, a bit of a catch-22 here. If the judge will not allow a full evidentiary hearing about the validity of bite mark testimony because such evidence has long been admissible in Pennsylvania, how can she determine whether it is in fact premature to conclude that the evidence is no longer accepted in the relevant scientific community?
Of course, whether a form of evidence is deemed “generally accepted” depends in part on precisely how one defines the relevant scientific community: there is more acceptance of bite mark evidence among bite mark experts themselves than in a more broadly defined scientific community. But as we see from the Swinton case, even some bite mark experts no longer believe in the validity of the technique.34 And there is little doubt that a great many other scientists who have examined the field have significant doubts about its validity – or, perhaps more accurately, substantial confidence that validity is lacking.35
To be sure, Judge Kopriva did place modest limits on the form that bite mark testimony could take. In the first trial, the expert testified that the bite marks were “very highly consistent” with the defendant's dentition. This time he would be limited to the language permitted by the expert's professional society, the American Board of Forensic Odontology, which recommends that bite mark testimony offer one of three possible conclusions without further detail or elaboration: 1) that the person is included within the pool of possible sources; 2) that the result is inconclusive; or 3) that the person is excluded from the pool of possible biters. (At the time of writing, the defendant has requested an interlocutory appeal of the ruling disallowing a Frye hearing.)
More than two dozen dna exonerations to date involve cases in which bite mark evidence played an important role at trial.36 Additionally, unlike areas of forensic science in which the problem is a near-total lack of research (like tool mark identification), or areas in which the existing research is limited and methodologically flawed but weakly supports an inference of validity (like firearms identification), numerous bite mark studies affirmatively illustrate its significant weaknesses. (In one study, experts lacked substantial agreement on whether certain marks left on skin were bite marks at all, much less on whether a given person's dentition could have produced them.)37
Nonetheless, Judge Kopriva deemed it premature to take a stand, or even to permit a full admissibility hearing to make an evidence-based assessment. This invites the question: if it would be premature to exclude it now, on our current basis of knowledge, at what point, and on what research basis, could exclusion be warranted?
To be fair, if Judge Kopriva believes that a Frye hearing is allowable only for a novel technique, rather than a long-used one, she could feel stymied by legal doctrine and the judicial responsibility to protect settled questions from being reopened. Some cases do limit Frye's ambit to the novel; but if so, how should change come about when needed? Surely Frye cannot mean that any form of expert evidence of long-standing use must be admissible forevermore, even if new evidence of its problems, limitations, and lack of validity emerges? How ought the need for change to be balanced with the legal system's norms regarding closure and precedent? Judge Kopriva's treatment of bite mark evidence illustrates the potentially awkward fit between judicial practices and the need, at times, to unsettle the ways we use forensic science. The law regarding the admissibility of scientific evidence may not change at the pace of science, but surely it should change when science substantially undercuts the original justifications for allowing the evidence.
We thus see a pair of encounters with bite mark evidence, in two courthouses in two different states, merely months apart, each taking a profoundly different approach to the assessment of bite mark evidence (albeit at different stages within the criminal process). In one case, we see a clear recognition of fundamental weaknesses in an especially dubious form of identification evidence. In the other case, we see the mighty power of precedent even in the face of a deeply troubling kind of proof. The Swinton case suggests change is on its way, at least with respect to bite mark evidence (perhaps in some ways too easy a target, as one of the most obviously problematic of the forensic sciences). At the same time, the Ross case suggests that even bite mark identification evidence may not be going anywhere too soon. Lest Ross be written off as a peculiar outlier, it is worth noting that not a single trial-level judge in any court has yet excluded bite mark evidence based on its lack of reliability.
One could view the dramatic difference between these two cases as a simple side effect of our federal system, which allows wide disparities in state evidence rules, and perhaps also as a consequence of the cases' procedurally different postures. Nonetheless, these two cases suggest rather different pathways for the future of bite mark evidence – and by extension, perhaps, for the forensic sciences more generally.
In the Swinton case, we can admire the collaborative efforts to face up to the weaknesses in the bite mark testimony, including the odontologist's willingness to disavow the accuracy of his earlier testimony, as well as the prosecutor's privileging of evidentiary integrity over closure and maintaining the conviction. Building on the Swinton case, one could plausibly suggest that bite mark evidence is in its death throes. Other evidence supports this possibility too, such as the fact that the Texas Forensic Science Commission – Texas, a state not generally seen as soft on crime – declared a moratorium on bite mark evidence until or unless a stronger research foundation could be established.38 The glass half full, it would seem; the double image seen from its more captivating angle.
Still, we cannot escape the contrast with the judge in Ross, several months earlier, refusing even to permit the evidentiary hearing that would have given the defense's concerns a full airing. The fact that she restricted the form of allowable testimony and could cite precedent for her decision is small consolation, for a person's freedom is at stake, and if bite mark evidence is fundamentally unsound, the judge's small step is akin to permitting a Ouija board's messages into evidence so long as the expert through whom the evidence is offered suggests the board is a probable rather than a certain source of truth.
In deciding as she did, Judge Kopriva is not unique. Far from it: no judge to date has rejected a prosecution offer of bite mark evidence, notwithstanding two dozen dna exonerations in cases in which bite marks were erroneously associated with the defendant, and a growing research literature challenging the validity of bite mark identification claims. My point, however, is less to excoriate Judge Kopriva than to highlight the deep power of precedential thinking that, when rooted in outdated science or pseudoscience, generates a deep – and systemically intended – legal bias in favor of maintaining the status quo, perpetuating the ongoing acceptance of questionable forensic science pattern evidence.39 Now the glass looks half empty: if judges refuse to exclude bite mark evidence, it is difficult to imagine they will insist on stronger scientific foundations as a precondition for the admissibility of evidence involving less egregiously unreliable, but nonetheless inadequately studied, techniques. If strong, affirmative evidence that bite mark evidence is unreliable is not enough, what hope is there that courts will take the mere absence of adequate evidence seriously? So the double image switches back in the blink of an eye, the viewer as downcast and dispirited as the world-weary woman.
This pair of bite mark decisions, grounding two dramatically different narratives about what lies on the near horizon, vividly illustrates how, at present, depending on what we are looking for and from what angle we choose to see, we can find both reason for hope in and reason for hopelessness about forensic sciences' future. To understand both the present and the range of possible futures for forensic science, we must recognize that both the optimistic and the pessimistic narratives are plausible; indeed, in some meaningful sense, both are true.
Judges today are tremendously reluctant to exclude from trials long familiar forms of forensic science evidence even when, as with bite mark evidence, the scientific foundation is weak and the evidence has played an established role in numerous proven wrongful convictions. But there is a growing move – insisted upon by some judges as a precondition to admissibility, and also called for by some leaders in the forensic science community – to scale back exaggerated and overconfident assertions of knowledge and authority by forensic scientists. One author refers to this felicitously as a shift from the “dogma expert,” who asserts her findings in absolute terms, to the “transparent expert,” who is more honest about the limits of her knowledge and the existence of some uncertainty in her conclusions.40 Although this shift has begun, it remains incomplete, and the limits called for by judges and forensic leaders are often not fully enforced. Still worse, these well-intentioned efforts to rein in exaggerated forensic science claims may, in actual practice, have little meaningful effect on how forensic science evidence is actually assessed or understood by juries.
Not long ago, experts in many of the forensic science subspecialties routinely testified in the language of absolute certainty. They would, in many fields, make identifications that were both absolute and particularized: identifications made to the “exclusion of all other” fingers, or firearm cartridges, or tools, and they often claimed to be “absolutely certain” or “100 percent confident,” or that their technique boasted a “zero error rate.” Although such dogmatic testimonial over-claiming still occurs, it has certainly become less frequent. We now routinely see experts offering (and courts insisting upon) somewhat more epistemically modest, less hubristic claims about the established strength of the evidence. Instead of saying a fingerprint could not possibly have come from anyone in the whole world but the suspect, the fingerprint expert might now say that based on her experience, she believes that the chances of two prints sharing this much correspondence and not deriving from the same source are remote, or very unlikely, rather than absolutely impossible.41 Moreover, this testimonial shift has spurred some examiners and labs to grow enthusiastic about a potential statistical turn, through which they could someday provide validated quantified assessments, or testify about likelihood ratios, rather than experience-based conclusions, even though not so long ago, most examiners roundly disclaimed any legitimate role for such probabilistic thinking.42 However, as the essay by Joseph Kadane and Jonathan Koehler in this issue indicates, it is unclear whether a modest, as opposed to a major, scaling back in testimonial certainty is likely to have any effect on how probative the factfinder perceives the evidence to be.43
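For readers unfamiliar with the likelihood-ratio framing mentioned above, it can be stated compactly. This is the standard formulation from the forensic-statistics literature, not language drawn from this essay or from any expert's actual testimony:

```latex
\[
\mathrm{LR} \;=\; \frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)}
\]
% E:   the observed correspondence between the crime-scene mark and the
%      suspect's print (or cartridge, or other trace)
% H_p: the proposition that the two share a common source
% H_d: the proposition that they come from different sources
```

An LR of, say, 1,000 would mean the observed correspondence is a thousand times more probable if the two items share a source than if they do not – a statement about the strength of the evidence, not a direct statement about guilt or about the probability that the suspect is the source.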
The positive, half-full narrative about this development would emphasize three points. First, by reining in unjustified overstatements that were previously commonplace, courts are not simply improving the quality of the testimony heard by the factfinder, but also increasing their own focus upon the substantive value of the offered evidence. Regulating the experts' language is modest gatekeeping perhaps, but it is a form of gatekeeping nonetheless, and therefore possibly a stepping stone to more thorough scrutiny of whether an adequate foundation undergirds an expert's claims. Second, this change in the form of testimony has spurred additional interest from the forensic science community in efforts to develop validated probabilities to ground the weight of testimony. As mentioned above, the Department of Defense announced in March 2017 that its experts would henceforth testify using quantified likelihood ratios based on an internally created statistical model.44 Whether that model will hold up to scrutiny will be interesting to see, but the increased interest in developing such models is itself a positive turn. In addition, if forensic examiners embrace this shift to more modest language and conclusions, this focus potentially invites more careful thought about the relationship between evidentiary support and testimonial conclusion, which might in turn contribute to a broader shift in perspective, encouraging experts to see themselves less as partners to law enforcement and more as scientists.45
The more pessimistic, still-half-empty story would assert that these modest changes in the language used for testimony are the forensic equivalent to rearranging the deck chairs on the Titanic. Will a factfinder – especially a lay juror primed by popular culture and shows like csi to believe in the power of a forensic “match” – hear or weigh the evidence differently because of modestly changed language? How different is it, really, for a juror to hear that an expert believes the chances of this print coming from anyone other than the defendant are “extremely low,” rather than “zero,” especially if that conclusion is accompanied by an assertion like, “based on my many years of experience, I truly would not expect to see this much similarity unless the two prints came from the same person”? Or, still worse, that it would be a “practical impossibility” for them to come from different sources? The space between impossible and unlikely is real, but in this context, it may also be razor thin, especially when coupled with assertive body language and an authoritative tone of voice. In reality, the specific words used to convey the meaning of the match to a factfinder may be far less important than evidence professors or scientists might hope or think: “impossible” and “very unlikely” may, in practice, be near-fungible within the trial setting. (To be sure, this point operates in both directions: it may be that factfinders interpreted the earlier claims of “impossible” as, in fact, meaning something more akin to “very unlikely,” notwithstanding the expert's stronger claim. Even so, the key point remains: modest linguistic changes in an expert's phrasing of her conclusions may have little real effect on jurors' assessment of probative value.)
The pessimistic story would also emphasize that many of the forensic science disciplines still lack adequate empirical grounds even for their weaker claims: without a valid statistical model, how does an examiner truly know that a coincidental match is as unlikely as she asserts? Furthermore, by insisting upon minor modifications to the language of testimony, judges confronting Daubert and Frye challenges may deem themselves to have taken adequate and appropriate action in response to the concerns about the validity of forensic science. Rather than acting as a spur to further engagement, modest reform in testimonial language may instead lead to judicial quiescence and complacency. In the Ross bite mark case, we see how the judge did partly constrain the testimony, but simultaneously refused the Frye hearing on admissibility. The optimistic story sees these minor reforms to testimony as a sign of reflective engagement by the judiciary and practitioners, and a potential stepping stone to bigger change; the pessimistic story sees them as a superficial salve that may permit the avoidance of deeper and more important cures.
One final example of the dynamics of change and its limits can be seen in the establishment of and, then, roughly three years later, the failure to renew the National Commission on Forensic Science. The Commission was a joint creation of the Department of Justice and nist. Commissioners came from a variety of fields, including research scientists, law professors and judges, forensic science providers, law enforcement officials, prosecutors, and defense attorneys. The ncfs's mandate was, in short, “to enhance the practice and improve the reliability of forensic science.”46 When the ncfs began, many – frankly, myself included – were dubious that a forensic science improvement effort partly centered in the Justice Department was likely to bring about significant change or meaningful improvement. Moreover, the ncfs, during its short lifespan, was not without controversy. In January 2015, Federal District Court Judge Jed Rakoff, the sole federal judge on the Commission, noisily resigned over the Justice Department's unilateral decision that issues concerning pretrial discovery procedures for forensic evidence were beyond the Commission's legitimate scope, a decision he saw as putting “strategic advantage [for prosecutors] ahead of the truth.”47 The controversy drew media attention; the doj backtracked, and Judge Rakoff returned to the Commission. That controversy and the media coverage it received likely enhanced the ncfs's independence from the doj.
Over its three years of operation, the ncfs made a series of advisory recommendations specifically directed at the Justice Department, and offered a number of additional “views”: documents that captured the Commission's collective consensus on important topics. Recommendations ranged from directing that forensic science service providers should all be appropriately accredited (which perhaps seems obvious, but accreditation has not been a uniform or consistent expectation for forensic laboratories); to stating that forensic labs should develop written policies about their documentation, reporting, and interpretation practices (this also perhaps seems obvious, but again, has not been standard practice); to an effort to enhance pretrial discovery (the subject over which Judge Rakoff nearly resigned); to a recommendation that forensic experts cease using in their testimony the phrase “to a reasonable degree of scientific [or field-specific] certainty,” because the phrase, though often used in court, has no accepted scientific meaning.48
The ncfs recommendations were, for the most part, not transformative or field-changing. But they were thoughtful, meaningful steps in a positive direction. Moreover – and perhaps most important – the Commission had become a place where a set of thoughtful interlocutors, including academic research scientists and stakeholders in the adversarial process, could jointly consider what was needed to make forensic science fields more trustworthy, fairer, and better grounded in valid science. The significant inclusion of research scientists alongside the more typical “stakeholder” participants gave the ncfs institutional legitimacy and also meant that its proposals generally incorporated insights from scientific, legal, and forensic perspectives.
Unfortunately, the Commission proved short-lived. In April 2017, Attorney General Jeff Sessions announced that he would not renew the ncfs. I recently asked one dozen academics and scientists with an interest in forensic science to share with me their brief thoughts on the most positive and the most negative occurrence relating to forensic science in the last decade. Fully half of those I asked named the demise of the ncfs as the single worst development that had occurred in that time period.
The half-full story, then, is that the Commission existed at all, and that it managed, in a quite short amount of time, to develop both some valuable proposals and some institutional legitimacy. The half-empty story is, obviously, that it no longer exists.
As the Commissioners wrote in their final report, summarizing their activities: “The National Commission on Forensic Science has provided an essential forum … to improve the forensic sciences…. But there is still work to be done.”49
And indeed, there is. Will that work take place? In another decade or two, will we be able to tell a story of ongoing, meaningful incremental change? Will a fair-minded observer be able to conclude that the forensic techniques we use in court have an appropriate degree of scientific validation to support their use and their conclusions? It seems reasonable to hope that the most blatantly problematic forms of forensic science, like bite mark evidence, will no longer be used. But will other kinds of forensic pattern evidence be on a surer foundation? Will testimony be presented in epistemically justified ways, and experts' claims limited to their legitimate evidentiary basis? Or will experts still testify to near certainty without empirical validation? Will we still lack validated statistical models and robust proficiency tests?
The future of forensic science remains uncertain, but our best chance for substantial ongoing improvements rests on the creation, or re-creation, of an entity akin to the ncfs. Simply put, we need some institutional structure, some body, separate from the courts, from adversarial advocates, and from practitioners themselves, a body that includes representatives from all these arenas along with accomplished research scientists. We are simply not likely to see continued forward motion unless there is some institutional body to prompt reform, a commission or working group with both convening power and a claim to legitimacy, in which academic researchers and forensic science stakeholders can jointly assess the state of forensic science and continue to push for, and argue about, improvements.
While it is deeply unfortunate that the ncfs was not recommissioned, perhaps this institutional failure also generates an opportunity. Perhaps, just perhaps, it creates an opening for the building of a next-generation commission, one positioned wholly outside the prosecutorial and law enforcement apparatus, and yet with sufficient institutional legitimacy not to be ignored. The path to the creation of such an alternative to the ncfs is neither obvious nor easy. But if it can happen, the future of forensic science will almost certainly be far brighter, and the substance of what is used in investigations and offered in courtrooms throughout our nation will be more reliable, more trustworthy, and more scientifically valid. That vision for the future of forensic science is most certainly not assured, but it may yet be possible. There is little reason to have confidence that either the courts or the forensic science community, much less the Justice Department, will have the capacity or the will to make significant positive improvements on their own. But if we can somehow create an institutional space where scientists, lawyers, judges, and forensic leaders all work together, a collaborative space that values reason-giving, empirical research, and thoughtful engagement with evidence and its assessment, then perhaps that half-full glass may yet fill to the brim.
Many thanks to Shari Diamond and Rick Lempert for their extremely thoughtful comments and suggestions, and to ucla reference librarian Elyse Meyers for her excellent help. I am also grateful to the participants in the American Academy of Arts and Sciences meeting for this volume for valuable questions and ideas. I also want to honor the memory of Stephen Fienberg (1942–2016), University Professor of Statistics and Social Science at Carnegie Mellon University. Originally, he and I had planned to coauthor an essay for this volume. I am so sorry not to have had that opportunity, but I am also enormously grateful for the ways he used his wide-ranging curiosity, collaborative talents, and brilliance to bring insights from statistics into so many areas of public policy import, including, of course, forensic science.
Sources on the history of specific pattern identification techniques include, for example, Michael J. Saks, “Merlin and Solomon: Lessons from the Law's Formative Encounters with Forensic Identification Science,” Hastings Law Journal 49 (4) (April 1998): 1069–1141; David L. Faigman, Edward K. Cheng, Jennifer Mnookin, et al., Modern Scientific Evidence: The Law & Science of Expert Testimony (Eagan, Minn.: Thomson Reuters, 2016); Simon A. Cole, Suspect Identities: A History of Fingerprinting and Criminal Identification (Cambridge, Mass.: Harvard University Press, 2002); Jennifer L. Mnookin, “Scripting Expertise: The History of Handwriting Identification Evidence and the Judicial Construction of Reliability,” Virginia Law Review 87 (8) (December 2001): 1723–1845; and People v. Marx, 126 Cal. Rptr. 350 (1975).
J. Edgar Hoover, “The Scientific Crime Detection Laboratory,” The University of Chicago Law Review 10 (3) (April 1943): 335–338.
Exceptions include forensic odontology, in which experts have advanced degrees in dentistry, and dna profiling, in which analysts often have graduate-level science training.
Jennifer L. Mnookin, Simon A. Cole, Itiel E. Dror, et al., “The Need for a Research Culture in the Forensic Sciences,” UCLA Law Review 58 (3) (February 2011): 725–779.
The best black-box study we currently have of any forensic science discipline is Bradford T. Ulery, R. Austin Hicklin, JoAnn Buscaglia, et al., “Accuracy and Reliability of Forensic Latent Fingerprint Decisions,” Proceedings of the National Academy of Sciences 108 (19) (May 10, 2011): 7733–7738.
Brandon L. Garrett and Peter J. Neufeld, “Invalid Forensic Science Testimony and Wrongful Convictions,” Virginia Law Review 95 (1) (March 2009): 1–97; and Innocence Project, “Wrongful Convictions Involving Unvalidated or Improper Forensic Science that Were Later Overturned through dna Testing” (New York: Innocence Project, 2016), https://www.innocenceproject.org/wp-content/uploads/2016/02/DNA_Exonerations_Forensic_Science.pdf.
Spencer Hsu, “fbi Admits Flaws in Hair Analysis over Decades,” The Washington Post, April 18, 2015, https://www.washingtonpost.com/local/crime/fbi-overstated-forensic-hair-matches-in-nearly-all-criminal-trials-for-decades/2015/04/18/39c8d8c6-e515-11e4-b510-962fcfabc310_story.html?utm_term=.100652ec7671; and Federal Bureau of Investigation, “fbi/doj Microscopic Hair Comparison Analysis Review,” https://www.fbi.gov/services/laboratory/scientific-analysis/fbidoj-microscopic-hair-comparison-analysis-review.
Simon Cole, “More than Zero: Accounting for Error in Latent Fingerprint Examination,” Journal of Criminal Law and Criminology 95 (3) (2005): 985–1078; Melissa K. Taylor, David H. Kaye, Thomas Busey, et al., “Latent Print Examination and Human Factors: Improving the Practice through a Systems Approach” (Gaithersburg, Md.: National Institute of Standards and Technology, 2012), https://www.nist.gov/sites/default/files/documents/oles/latent.pdf; and Jennifer L. Mnookin, “The Validity of Latent Fingerprint Identification: Confessions of a Fingerprinting Moderate,” Law, Probability and Risk 7 (2008): 127–141.
David A. Stoney, “What Made Us Ever Think We Could Individualize Using Statistics?” Journal of the Forensic Science Society 31 (2) (April 1991): 197–199. In some ways, this is parallel to the subjective certainty required by a juror to make the judgment to convict, but we do not call this latter judgment scientific.
We have a strong statistical basis for determining the “random match probability” for dna – that is, how likely it is that a person selected at random would match the sample in question – in cases involving either a single source for the biological material, or when there is a mixture of two people's biological material, but one of them is known (as in a rape case). Assessing probabilities becomes significantly more challenging for dna samples involving multiple possible unknown contributors.
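For a single-source sample, the random match probability described in this note is conventionally computed with the product rule: the genotype frequency at each tested locus (2pq for a heterozygote with alleles of frequency p and q; p² for a homozygote) is multiplied across loci. A minimal sketch follows; the locus names are real STR markers, but the allele frequencies are invented for illustration and are not population data:

```python
# Product-rule random match probability for a single-source DNA profile.
# Allele frequencies below are hypothetical, for illustration only.
loci = [
    # (locus name, freq p, freq q, heterozygous?)
    ("D3S1358", 0.15, 0.25, True),   # heterozygote: genotype freq = 2pq
    ("vWA",     0.10, 0.10, False),  # homozygote:   genotype freq = p^2
    ("FGA",     0.20, 0.05, True),
]

rmp = 1.0
for name, p, q, hetero in loci:
    # Multiply per-locus genotype frequencies (the product rule assumes
    # independence across loci).
    rmp *= (2 * p * q) if hetero else p * p

print(f"Random match probability: {rmp:.2e}")  # prints: Random match probability: 1.50e-05
```

With only three loci and generous frequencies, the probability is already on the order of one in 67,000; real casework profiles use many more loci, which is why reported random match probabilities can be astronomically small.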
Christophe Champod and Ian W. Evett, “Probabilistic Approach to Fingerprint Evidence,” Journal of Forensic Identification 51 (2) (March – April 2001): 101–122; Cédric Neumann, Christophe Champod, Roberto Puch-Solis, et al., “Computation of Likelihood Ratios in Fingerprint Identification for Configurations of Any Number of Minutiae,” Journal of Forensic Sciences 52 (1) (January 2007): 54–64; United States Department of the Army, Defense Forensic Science Center, “Information Paper” (Forest Park, La.: United States Department of the Army, 2015) [report on using “identification” in latent print technical reports]; and Heidi Eldridge, “The Shifting Landscape of Latent Print Testimony: An American Perspective,” Journal of Forensic Science and Medicine 3 (2) (2017): 72–81.
See United States v. Llera-Plaza, 179 F. Supp. 2d 492 (E.D. Pa. 2002) [Llera-Plaza I]; United States v. Llera-Plaza, 188 F. Supp. 2d 549 (E.D. Pa. 2002) [Llera-Plaza II]; Jonathan J. Koehler, “Proficiency Tests to Estimate Error Rates in the Forensic Sciences,” Law, Probability and Risk 12 (2) (2013): 89–98; and Jonathan J. Koehler, “Fingerprint Error Rates and Proficiency Tests: What They Are and Why They Matter,” Hastings Law Journal 59 (5) (2008): 1077–1100.
Koehler, “Proficiency Tests to Estimate Error Rates” [see note 12]; and Koehler, “Fingerprint Error Rates” [see note 12].
For thoughtful explorations of this problem, see D. Michael Risinger, Michael J. Saks, William C. Thompson, et al., “The Daubert/Kumho Implications of Observer Effects in Forensic Science: Hidden Problems of Expectation and Suggestion,” California Law Review 90 (1) (January 2002): 1–56.
See, for example, Dan E. Krane, Simon Ford, Jason R. Gilder, et al., “Sequential Unmasking: A Means of Minimizing Observer Effects in Forensic dna Interpretation,” Journal of Forensic Sciences 53 (4) (July 2008): 1006–1007.
Mark Hansen, “Crimes in the Lab,” ABA Journal 99 (2013): 44–51; Paul C. Giannelli, “The Abuse of Scientific Evidence in Criminal Cases: The Need for Independent Crime Laboratories,” Virginia Journal of Social Policy and the Law 4 (1996): 439–478; and Brent J. Turvey, Forensic Fraud (Oxford: Academic Press, 2013).
It is perhaps worth noting more broadly that the courts have shown themselves to be generally ineffective regulators of flawed forensic science. A number of kinds of forensic evidence that have now been substantially discredited (such as certain alleged indicia of arson, microscopic hair identification, and bullet-lead comparison) were regularly admitted by courts until external forces made their weaknesses impossible to ignore.
See, for example, United States v. Mitchell, Crim. No. 960407–1 (E.D. Pa., judgment entered Feb. 2000); United States v. Havvard, 117 F. Supp. 2d 848 (S.D. Ind. 2000); United States v. Hicks, 389 F.3d 514 (5th Cir. 2004); and United States v. Green, 405 F. Supp. 2d 104 (D. Mass. 2005).
National Research Council, Strengthening Forensic Science in the United States: A Path Forward (Washington, D.C.: The National Academies Press, 2009).
Ibid.; and Harry T. Edwards, “The National Academy of Sciences Report on Forensic Sciences: What it Means for the Bench and Bar,” Jurimetrics 51 (1) (Fall 2010): 1–15.
For a general literature review looking at the nas report's influence, see Paul C. Giannelli, “The 2009 nas Forensic Science Report: A Literature Review,” Criminal Law Bulletin 48 (2) (2012): 378–393; and “§ 29:10 Cases Citing nrc Report,” “§ 32:20 Legal Responses After the nrc/nas Report on Strengthening Forensic Sciences in the United States,” and “§ 34:5 Current Developments in Caselaw” in Faigman et al., eds., Modern Scientific Evidence [see note 1].
Ulery et al., “Accuracy and Reliability” [see note 5]. This is, in my view, a significant, thoughtful study and an important step forward for the field. However, it is important to recognize that this study was neither conducted blind nor in ecologically valid conditions: study participants were well aware that they were participating in a critically important study of the reliability of their field. Thus, it is not clear that their degree of effort and care would mirror that of actual casework; it might be lower because no one's liberty was at stake, but it might be higher because of the critical import of this study to the field. Moreover, there was no risk of cognitive contamination because of other case-related knowledge.
Michael J. Saks, Thomas Albright, Thomas L. Bohan, et al., “Forensic Bitemark Identification: Weak Foundations, Exaggerated Claims,” Journal of Law and the Biosciences 3 (3) (December 2016): 538–575; and Mary A. Bush, Howard I. Cooper, and Robert B. J. Dorion, “Inquiry into the Scientific Basis for Bitemark Profiling and Arbitrary Distortion Compensation,” Journal of Forensic Sciences 55 (4) (July 2010): 976–983.
National Commission on Forensic Science, https://www.justice.gov/archives/ncfs.
President's Council of Advisors on Science and Technology, “Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods” (Washington, D.C.: President's Council of Advisors on Science and Technology, 2016), https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/PCAST/pcast_forensic_science_report_final.pdf.
Michael A. Ramos, National District Attorneys Association to President Barack Obama (November 16, 2016), http://www.ciclt.net/ul/ndaajustice/PCAST/NDAA%20PCAST%20Response%20FINAL.pdf; Federal Bureau of Investigation, “Comments On: President's Council of Advisors on Science and Technology, Report to the President: Forensic Science in Criminal Courts: Ensuring Scientific Validity of Pattern Comparison Methods” (September 20, 2016), www.fbi.gov/file-repository/fbi-pcast-response.pdf; and Gary Fields, “White House Advisory Council Report is Critical of Forensics Used in Criminal Trials: U.S. Attorney General says Justice Department Won't Adopt Recommendations,” The Wall Street Journal, September 20, 2016, https://www.wsj.com/articles/white-house-advisory-council-releases-report-critical-of-forensics-used-in-criminal-trials-1474394743.
Dave Altimari and David Owens, “Convicted Killer, Suspect in Slayings of 4 Other Hartford Women, Could Soon Walk Free,” Hartford Courant, March 9, 2017, http://www.courant.com/news/connecticut/hc-hartford-swinton-murder-exonerate-20170309-story.html.
State v. Swinton, 847 A.2d 921, n.14 (Conn. 2004).
Gus Karazulas, “New Forensic Odontology Tools” (March 28, 2001), https://www.meyerinst.com/html/lucis/new-forensic-odontology-tools.pdf.
Frye v. United States, 293 F. 1013 (D.C. Cir. 1923).
Kay Stephens, “Judge Permits Bite Mark Evidence for Ross Retrial: District Attorney Can Use Testimony About Mark During Ross Murder Retrial,” Altoona Mirror, March 9, 2017, http://www.altoonamirror.com/news/local-news/2017/03/judge-permits-bite-mark-evidence-for-ross-retrial/.
Nor is Karazulas alone; Michael Bowers, a forensic dentist, has also come out strongly against bite mark evidence. See Saks et al., “Forensic Bitemark Identification” [see note 23].
See, for example, those joining Saks in ibid.
“§ 35:7 Erroneous Identification and Conviction,” in Faigman et al., eds., Modern Scientific Evidence [see note 1]; and President's Council of Advisors on Science and Technology, “Forensic Science in Criminal Courts” [see note 25].
See Saks et al., “Forensic Bitemark Identification” [see note 23].
Joe Palazzolo, “Texas Commission Recommends Ban on Bite-Mark Evidence,” The Wall Street Journal, February 12, 2016.
To be sure, the doctrinal argument for exclusion is weaker in Frye states (like Pennsylvania), where the legal standard focuses on whether a novel form of evidence is “generally accepted” by the relevant scientific community. But given that Frye is designed to get at evidentiary integrity by assessing the views of the scientific community itself, a substantial change in views by the community ought to spur reconsideration of a previously accepted method.
Eldridge, “The Shifting Landscape” [see note 11].
Simon A. Cole, “Individualization is Dead, Long Live Individualization! Reforms of Reporting Practices for Fingerprint Analysis in the United States,” Law, Probability and Risk 13 (2) (2014): 117–150. This of course begs the question of how a fingerprint expert can truly know this based on experience alone; fingerprint experts' typical tasks do not consist of searching for the prints most similar to one another yet deriving from different sources.
For one example of such efforts, see Center for Statistics and Applications in Forensic Evidence, “Pattern Evidence,” http://forensicstats.org/our-research/pattern-evidence/. See also Neumann et al., “Computation of Likelihood Ratios in Fingerprint Identification for Configurations of Any Number of Minutiae” [see note 11].
Joseph B. Kadane and Jonathan J. Koehler, “Certainty and Uncertainty in Reporting Fingerprint Evidence,” Dædalus 147 (4) (Fall 2018): 119–134.
Eldridge, “The Shifting Landscape” [see note 11].
Christophe Champod, “Fingerprint Identification: Advances since the 2009 National Research Council Report,” Philosophical Transactions of the Royal Society B: Biological Sciences 370 (1674) (2015).
The United States Department of Justice Archives, “National Commission on Forensic Science,” https://www.justice.gov/archives/ncfs.
Spencer S. Hsu, “Judge Rakoff Returns to Forensic Panel after Justice Department Backs Off Decision,” The Washington Post, January 30, 2015. For Judge Rakoff's resignation letter, see “Full Text: Judge's Protest Resignation Letter,” The Washington Post, January 29, 2015, https://www.washingtonpost.com/local/full-text-judges-protest-resignation-letter/2015/01/29/41659da6-a7e1-11e4-a2b2-776095f393b2_story.html?utm_term=.2e7c6951ff06.
For a complete list of recommendations and views offered by the Commission, see National Commission on Forensic Science, Reflecting Back – Looking toward the Future (Washington, D.C.: National Institute of Standards and Technology, 2017), Appendix C, https://www.justice.gov/archives/ncfs/page/file/959356/download.