32 ELR 10980 | Environmental Law Reporter | copyright © 2002 | All rights reserved
"Two Strikes and You're Out!": How to Prevail in Daubert ChallengesNed I. Miltenberg and Anthony Z. RoismanNed Miltenberg is the Senior Counsel and Associate Director of Legal Affairs for Constitutional Litigation at the Association of Trial Lawyers of America. Anthony Roisman is Of Counsel to Hershenson, Carter, Scott & McGee in Norwich, Vermont, where he specializes in toxic tort and environmental litigation.
[32 ELR 10980]
In 1992, Drs. Arnold Schecter and Daniel Teitelbaum, two highly qualified scientists, testified that polychlorinated biphenyls (PCBs), together with certain dioxins and furans that were PCB derivatives, could have accelerated cancer in a 37-year-old electrician who, as part of his job, had bathed daily for many years in a "PCB-dielectric soup." Although Drs. Schecter and Teitelbaum had carefully described their data, methodologies, and scientific reasoning, in 1997 the U.S. Supreme Court, in General Electric Co. v. Joiner,1 ruled that their detailed explanations of how they had reached their conclusions were not detailed enough and were therefore inadmissible. Strangely, an explanation that would easily satisfy the editors of a peer-reviewed journal or the organizers of an academic conference as scientifically valid may not be legally "reliable" and "admissible."
The decision in Joiner, like the more famous decision four years earlier in Daubert v. Merrell Dow Pharmaceuticals, Inc.,2 has led to increasingly frequent and successful challenges to the testimony of expert witnesses in "toxic tort" cases.3 Although the campaign against "junk science" is ostensibly aimed at "rogue," "out-of-the-mainstream," "eccentric," and otherwise disreputable phony scientists, corporate lawyers are using the standards of legal admissibility developed in Daubert and Joiner not only to exclude the testimony of inexperienced or unqualified experts but also to target anyone who dares testify against corporate practices that endanger human health and life.
This is not surprising. What is surprising—and ominous—is not only that highly credentialed and highly regarded scientists such as Drs. Schecter and Teitelbaum, Dr. Richard Lemen (the former Acting Director of the National Institute for Occupational Safety and Health (NIOSH) and the current President of the Society for Occupational and Environmental Health), and Dr. David Ozonoff (the Chair of the Department of Environmental Health at Boston University's School of Public Health) are being targeted for exclusion, but that those efforts have been so successful.
Indeed, so-called Daubert motions (to exclude expert testimony on the grounds that it is scientifically invalid and legally unreliable) are becoming the norm in nearly every case involving an expert witness, regardless of the field of expertise. Plaintiffs—and the experts they have retained—lose 70% of the time.4 Once again, federal and state courts are not merely excluding the testimony of self-anointed experts (poseurs whom many in the scientific community would deride as charlatans) but also that of distinguished scientists whose scientific qualifications, methodologies, and conclusions ought to be beyond dispute. In 1998, for example, an Arizona trial judge condemned the testimony of Drs. Lemen and Ozonoff on the ground that neither scientist used sufficiently scientific methods in concluding that trichloroethylene (TCE) was dangerous to human health.5
Lawyers who fail to anticipate these attacks can lose their cases, and their clients will lose compensation for their injuries. Experts who are not prepared to defend themselves can also lose: unscrupulous corporate lawyers may use one judge's negative ruling to brand them as "notorious junk scientists," seriously damaging their reputations and careers. Inevitably, lawyers use one disqualification to justify a second. What is particularly worrisome is that a second disqualification can ruin a reputation, because few lawyers will hire a snakebit witness. Unfortunately, many plaintiffs' lawyers know less about science than scientists know about the law, and they often fail to anticipate legal "traps" for their experts. Defense lawyers seeking to destroy an expert's credibility are taking advantage of the new Daubert-Joiner standards to exaggerate (or invent) defects in an expert's background or testimony.
In Daubert, the Court identified five nonexclusive "factors" that it said trial courts should use in gauging the scientific validity, and hence the legal reliability and admissibility, of the testimony of an otherwise qualified scientist:
(a) whether the method the scientist used consists of a testable hypothesis;
[32 ELR 10981]
(b) whether the method has been subject to peer review;
(c) the method's known or potential rate of error;
(d) the existence and maintenance of standards controlling the method's/technique's operation; and
(e) whether the method is generally accepted by other scientists in the relevant field.
These five factors have become famous nationwide. But the Court has allowed—indeed, invited—lower courts to supplement these five factors with other tests, standards, and criteria. And lower courts have taken this invitation to heart, devising dozens of new and additional factors to screen expert testimony. These additional factors include:
(a) Not only whether the "principles and methodology," i.e., the explanatory theories used by the expert, are "scientifically valid" and therefore "reliable" as evidence, as Daubert requires, but also whether the expert's methodology actually produced a "correct, accurate, truthful, or valid conclusion."6
(b) Whether the expert has reasonably extrapolated from an accepted premise and reliable facts to a sensible conclusion.7
(c) Whether the field of expertise claimed by the expert is known to reach reliable results for the type of opinion the expert would give.8
(d) Whether the methodologies the expert "employs in the courtroom [reflect] the same level of intellectual rigor that characterizes the practice of an expert in the relevant field."9
(e) Whether the expert has adequately accounted for obvious (and even nonobvious) alternative explanations.10 Tellingly, although failure to undertake a differential diagnosis of all possible causes may be grounds for excluding testimony, the fact that an expert has performed such a diagnosis does not, by itself, constitute grounds for admitting the testimony. Indeed, a number of courts view differential diagnosis as an illegitimate nonscientific tool.11
(f) Whether the expert intends "to testify about matters growing naturally and directly out of research they have conducted independent of the litigation, or whether they have developed their opinions expressly for purposes of testifying."12
(g) Whether an expert in a product liability action has suggested a safer alternative design, and, in fact, whether such a design had been built, tested, subjected to peer review, and generally accepted.13
(h) Whether the expert used a "weight of the evidence" methodology, which some courts view as illegitimate, and even if the methodology is acceptable in principle, whether the expert properly applied that methodology, e.g., whether the expert weighed all factors, how the expert weighed each factor, whether the expert ranked each factor, how the expert weighed incommensurable factors, etc.14
(i) Whether the expert properly performed the methodology in dispute.15
(j) Whether the methodology/technique is subject to abuse.16
(k) Whether the methodology is analogous to scientific techniques and results previously held to be admissible.
(l) Whether "fail-safe" mechanisms were available and were used.
(m) Whether the expert's hypothesis or conclusions are illogical or "self-contradictory . . . . It is evident that a hypothesis that contradicts itself is logically ill-formed and cannot be tested."17
(n) Whether the expert's explanatory theory is generally consistent with generally accepted theories.18
(o) Whether the expert's methodologies and conclusions are relatively precise.19
(p) Whether the expert formed his opinion and then looked for reasons to support it, rather than doing research that led him to his conclusion.20
(q) Whether the expert's field is informed by any "specialized literature."21
(r) How much "the technique relies on the subjective interpretation of the expert."22
(s) Whether there are any "safeguards in the characteristics of the technique."23
(t) Whether the methodology or technique is "analogous to other scientific techniques whose results are admissible."24
(u) What are the "nature and breadth of the inference adduced?"25
(v) What are the relative "clarity and simplicity with which the technique can be described and its results explained?"26
(w) What is the "extent to which the basic data are verifiable by the court and jury?"27
(x) What is the "availability of other experts to test and evaluate the technique?"28
(y) What is the "probative significance of the evidence in the circumstances of the [particular] case?"29
(z) Whether the expert testimony "fits" the facts of the case. As Prof. Edward J. Imwinkelried has observed, many courts have interpreted the second ("relevance" or "helpfulness") prong of Daubert as a matter of sufficiency, not admissibility.30
(aa) Whether the expert made improper extrapolations, i.e., drew inappropriate conclusions from accepted premises.31
(bb) Whether the expert largely relied on anecdotal evidence, e.g., personal experience with patients or on only a few case studies.32
(cc) Whether the expert excessively relied on "temporal proximity"—concluding that substance X caused an injury Y solely because the injury appeared sometime after exposure.33
(dd) Whether there is a scant logical relationship between an expert's testimony and the facts of the case.34
To make matters worse, federal and state courts are taking it upon themselves not only to evaluate whether the scientist used valid methodologies and techniques in a valid way, but also to decide whether particular scientific methodologies, techniques, and standards are "good enough" for the law—even if the methodologies are "good enough" for, and regularly used in, the scientific or regulatory community. Thus, courts have repeatedly ruled that only epidemiologists can testify about causation of disease in humans and that even epidemiologists will have their testimony deemed inadmissible and insufficient if, for example, they relied on epidemiological studies that do not show a "relative risk" of 2.0 or more (doubling),35 or if they relied on studies that are not "statistically significant,"36 or if they relied on "animal studies" to establish causation in humans,37 or if they failed to satisfy every one of the supposedly rigid "Bradford-Hill" criteria for causation (named for Sir Austin Bradford Hill).38 Scientists who are engaged as expert witnesses must be familiar with the Daubert factors and standards and the dozens of new hurdles that have been erected by trial judges around the country. These experts must be prepared to explain why, e.g., the Bradford-Hill criteria are not the be-all and end-all of good science, or why industry and government scientists routinely rely on animal studies, or why the "weight-of-the-evidence" methodology is neither novel nor disreputable.
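The arithmetic behind the "relative risk of 2.0" benchmark is worth setting out, because experts will be asked to engage it on its own terms. As a sketch of the standard reasoning: if exposure multiplies the background disease rate by a factor of RR, then the fraction of cases among the exposed that is attributable to the exposure is

\[
\text{attributable fraction} \;=\; \frac{RR - 1}{RR},
\qquad
\frac{RR - 1}{RR} > \frac{1}{2} \;\Longleftrightarrow\; RR > 2,
\]

which is why courts equate "doubling" with "more likely than not." The calculation, however, silently assumes (among other things) that the study population resembles the plaintiff and that the exposure creates new cases rather than accelerating cases that would have occurred anyway. Both assumptions are ones a well-prepared expert can legitimately contest.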
Scientists should expect the worst from opposing lawyers and experts and even from the judge. (For example, professors of medicine can expect that they will be condemned as not "real" physicians unless they practice clinical medicine, while professors who teach epidemiology can expect that they will be castigated as not "real" epidemiologists unless they teach that discipline from a perch in an epidemiology department.)
Because there is so much at stake for the lawyer, the client, and the expert, it is no longer prudent for an expert merely to prepare a competent and honest report and stand ready to defend it at deposition. The expert must, in addition, be prepared to meet the new challenges that Daubert and its progeny have created. The expert must be familiar with the ever-changing legal standards and be prepared to explain how his work completely satisfies those standards. If those newly invented legal criteria are contrary to the governing scientific standards used every day in the ordinary course of professional work outside the courtroom, the expert must also explain why the legal standards are inappropriate and irrelevant.
Thus, scientists should be prepared to explain and defend—in excruciating detail—every option they considered, every choice they made (and every one they rejected), and everything they did (or chose not to do). Scientists need to show how their mental gears operate.39 Of course, all of this may convince a good scientist that no sane person would submit to the litigation process. There is much to be said for that conclusion. But many of the most important issues that concern scientists today end up in the courts, with or without the best scientific experts. When a court in a tort case approves or rejects a hypothesis, its impact is not limited to the particular litigation or even to similar cases. A rejection may become "common wisdom" that is used to attack the same hypothesis when it is offered in support of government regulation or statutory change. The simple fact is that the growing exposure of workers and the general population to new and more exotic toxins is creating pressure for more statutes, regulations, and tort suits to address the resulting problems. So long as the present trend in the courts continues, the ability of good and competent scientists to express their opinions—outside the purely scientific arena of peer-reviewed journals and professional conferences—is endangered.
The disqualification of an expert may damage a lawyer's case and may deny the client the compensation to which he or she is entitled. It can also prove devastating to the expert's career. While lawyers are inclined to see the solution solely in terms of what can be done in the courts, much of the solution must come from scientists themselves. Many of the standards now laid down by the courts for the admissibility of scientific testimony are wrong-headed; they are the inevitable consequence of lay judges trying to do more than they are qualified to do, with too little guidance and often well-financed misdirection from defendants. One antidote is for scientists to become more proactive: to examine the decisions of courts that address issues within their expertise and to begin writing about the scientific errors those courts are making.
For example, it would greatly assist the legal process to have available (a) an analysis of the scientifically acceptable methodologies that can be used to determine whether exposure to a particular toxic substance is more likely than not capable of causing a particular adverse health outcome, (b) an examination of the scientifically acceptable methodologies that can be used to determine whether exposure to such a substance more likely than not caused or contributed to a particular person's adverse health outcome, and (c) a review of the use of statistics in reaching these judgments, with particular emphasis on when to accept or reject epidemiologic studies based on such popular "benchmarks" of reliability as a "relative risk of 2.0" and a ".05 significance level." These and hundreds of similar issues are confusing courts and producing decisions that incorrectly castigate good experts for not doing what the court mistakenly concludes is "good" and "reliable" science.
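One illustration of why such benchmarks deserve scrutiny follows. This is a minimal sketch, written in Python with purely hypothetical case counts (the function name and the numbers are illustrative assumptions, not data from any actual study or case), of how a relative risk and its conventional 95% confidence interval are computed from a simple cohort table:

```python
import math

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Relative risk and an approximate 95% confidence interval for a
    simple cohort 2x2 table, using the standard log-normal (Katz)
    approximation. Illustrative only: real epidemiologic analyses must
    also address confounding, bias, and study design."""
    rr = (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)
    # Standard error of ln(RR) under the Katz approximation.
    se = math.sqrt(1 / exposed_cases - 1 / exposed_total
                   + 1 / unexposed_cases - 1 / unexposed_total)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lower, upper)

# Hypothetical cohort: 30 cases among 1,000 exposed workers versus
# 12 cases among 1,000 unexposed workers.
rr, (lower, upper) = relative_risk(30, 1000, 12, 1000)
print(f"RR = {rr:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")  # RR = 2.50, CI (1.29, 4.86)
```

On these hypothetical numbers the point estimate clears the 2.0 "doubling" benchmark, yet the confidence interval reaches well below it, so a court applying the two popular benchmarks mechanically could draw opposite conclusions from the same study. That is precisely the kind of confusion that scientific commentary could help dispel.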
1. 522 U.S. 136, 28 ELR 20227 (1997).
2. 509 U.S. 579, 23 ELR 20979 (1993).
3. The federal rule under which these decisions arose, Rule 702 of the Federal Rules of Evidence, was amended at the end of 2000. The amendments were merely intended to codify existing law, and did not add new standards.
4. See D. Michael Risinger, Navigating Expert Reliability, 64 ALB. L. REV. 99, 104 (2000) (detailing that post-Daubert challenges to expert testimony are being filed at an annual rate 36 times the pre-Daubert rate, defendants in civil cases file 90% of such challenges, and they win two-thirds of the time).
5. Lofgren v. Motorola, No. CV 93-05521, 1998 WL 299925 (Ariz. Super. Ct. June 1, 1998).
6. Michael H. Graham, The Expert Witness Predicament: Determining "Reliable" Under the Gatekeeping Test of Daubert, Kumho, and Proposed Amended Rule 702 of the Federal Rules of Evidence, 54 U. MIAMI L. REV. 317, 336 (2000) (citing Joiner, 522 U.S. at 146, 28 ELR at 20229 ("conclusions and methodology are not entirely distinct from one another"), and Kumho Tire Co. v. Carmichael, 526 U.S. 137, 154, 29 ELR 20638, 20641 (1999) (discussing the relationship between an explanatory theory and the conclusions reached from it)).
7. See Joiner, 522 U.S. at 146, 28 ELR at 20229 (noting that in some cases a trial court "may conclude that there is simply too great an analytical gap between the data and the opinion proffered"). See also Mid-State Fertilizer Co. v. Exchange Nat'l Bank, 877 F.2d 1333, 1339 (7th Cir. 1989) ("an expert who supplies nothing but a bottom line supplies nothing of value to the judicial process"); Rosen v. Ciba-Geigy Corp., 78 F.3d 316, 319 (7th Cir. 1996).
8. See Kumho, 526 U.S. at 151, 29 ELR at 20641 (Daubert's general acceptance factor does not "help show that an expert's testimony is reliable where the discipline itself lacks reliability, as, for example, do theories grounded in any so-called generally accepted principles of astrology or necromancy"). See also Moore v. Ashland Chem., Inc., 151 F.3d 269 (5th Cir. 1998) (en banc) (clinical doctor was properly precluded from testifying to the toxicological cause of the plaintiff's respiratory problem, where the opinion was not sufficiently grounded in scientific methodology); Sterling v. Velsicol Chem. Corp., 855 F.2d 1188, 19 ELR 20404 (6th Cir. 1988) (rejecting testimony based on "clinical ecology" as unfounded and unreliable).
9. Kumho, 526 U.S. at 151, 29 ELR at 20641.
10. Compare Claar v. Burlington N. R.R. Co., 29 F.3d 499 (9th Cir. 1994) (testimony excluded where the expert failed to consider other obvious causes for the plaintiff's condition) with Ambrosini v. Labarraque, 101 F.3d 129 (D.C. Cir. 1996) (the possibility of some uneliminated causes presents a question of weight, so long as the most obvious causes have been considered and reasonably ruled out by the expert).
11. See, e.g., Black v. Food Lion, Inc., 171 F.3d 308, 313-14 (5th Cir. 1999); Allen v. Pennsylvania Eng'g Corp., 102 F.3d 194, 195-96 (5th Cir. 1996) (expert evidence suggesting connection between exposure to ethylene oxide and brain cancer insufficient under Daubert); Sorensen v. Shaklee Corp., 31 F.3d 638, 649 (8th Cir. 1994).
12. Daubert v. Merrell Dow Pharmaceuticals, Inc. (Daubert II), 43 F.3d 1311, 1317, 25 ELR 20856, 20858-59 (9th Cir. 1995).
13. Compare Byrnes v. Honda Motor Co., Ltd., 887 F. Supp. 279, 282 (S.D. Fla. 1994) (insisting on production and testing of safer alternative design); Moisenko v. Volkswagen AG, 20 F. Supp. 2d 1129, 1132 (W.D. Mich. 1998) with Southland Sod Farms v. Stover Seed Co., 108 F.3d 1134, 1142 (9th Cir. 1997) (rejecting need for alternative design); Arnold v. Riddell, Inc., 882 F. Supp. 979, 991 (D. Kan. 1995) (same).
14. See, e.g., Allen, 102 F.3d at 198; Wright v. Willamette Indus., Inc., 91 F.3d 1105, 1107 (8th Cir. 1996).
15. See, e.g., United States v. Martinez, 3 F.3d 1191, 1198 (8th Cir. 1993) ("the reliability inquiry set forth in Daubert mandates that there be a preliminary showing that the expert properly performed a reliable methodology in arriving at his opinion"); cf. United States v. Davis, 40 F.3d 1069 (10th Cir. 1994).
16. Meyers v. Arcudi, 947 F. Supp. 581, 585 (D. Conn. 1996).
17. In re TMI Litig. Cases Consol. II, 911 F. Supp. 775, 787 (M.D. Pa. 1996).
18. Id. ("Scientific knowledge tends to be cumulative and progressive, and a hypothesis that is not consistent with accepted theories should be regarded with great caution, whether or not the hypothesis ultimately proves true.").
19. Id.
Broad generalizations are far more difficult to corroborate than precise statements and have little explanatory power . . . . If severe and varied tests are the best indicator of validity, it follows that broad generalizations that can account for any possible state of affairs, and thus cannot be empirically tested, are not as good.
20. See Claar v. Burlington N. R.R. Co., 29 F.3d 499, 502-03 (9th Cir. 1994) ("Coming to a firm conclusion first and then doing research to support it is the antithesis of [the scientific] method."). See also In re Paoli R.R. Yard PCB Litig., 35 F.3d 717, 742 n.8, 25 ELR 20989, 20997 n.8 (3d Cir. 1994), cert. denied, 513 U.S. 1190 (1995) (court should consider "the non-judicial uses to which the method has been put"). Cf. Braun v. Lorillard, Inc., 84 F.3d 230, 235 (7th Cir. 1996), cert. denied, 519 U.S. 992 (1996).
21. State v. Lyons, 924 P.2d 802, 811 (Or. 1996).
22. Id.
23. Id.
24. Id.
25. Id.
26. Id.
27. Id.
28. Id.
29. Id.
30. Edward J. Imwinkelried, Daubert Revisited: Disturbing Implications, 22 CHAMPION 18, 19 (1998).
31. Daniel J. Capra, Daubert Puzzle, 32 GA. L. REV. 699, 714-32 (1998).
32. Id.
33. Id.
34. Id.
35. See, e.g., Daubert v. Merrell Dow Pharmaceuticals, Inc. (Daubert II), 43 F.3d 1311, 1320-21, 25 ELR 20856, 20860-61 (9th Cir. 1995); Casey v. Ohio Med. Prods., Inc., 877 F. Supp. 1380, 1385 (N.D. Cal. 1995); In re Hanford Nuclear Reservation Litig., No. 91-3015-AAM, 1998 WL 775340, at *8 (E.D. Wash. Aug. 21, 1998).
36. See, e.g., Wade-Greaux v. Whitehall Labs., Inc., 874 F. Supp. 1441, 1452-53 (D.V.I. 1994).
37. See, e.g., Turpin v. Merrell Dow Pharmaceuticals, Inc., 959 F.2d 1349, 1359 (6th Cir. 1992); In re Agent Orange Prod. Liab. Litig., 611 F. Supp. 1223, 1241 (E.D.N.Y. 1985) ("The animal studies are not helpful in the instant case because they involve different biological species. They are of so little probative force and are so potentially misleading as to be inadmissible."); National Bank of Commerce v. Dow Chem. Co., 965 F. Supp. 1490, 1527 (E.D. Ark. 1996) ("Because of the difference in animal species, the methods and routes of administration of the suspect chemical agent, maternal metabolisms and other factors, animal studies, taken alone, are unreliable predictors of causation in humans."); In re Paoli R.R. Yard PCB Litig., 35 F.3d 717, 743, 25 ELR 20989, 20997 (3d Cir. 1994).
38. See, e.g., Merrell Dow Pharmaceuticals, Inc. v. Havner, 953 S.W.2d 706, 718-19 (Tex. 1997); Landrigan v. Celotex Corp., 605 A.2d 1079, 1085 (N.J. 1992); A. Bradford-Hill, The Environment and Disease: Association or Causation?, 58 PROC. OF THE ROYAL SOC'Y OF MED. 295 (1965). See also Raynor v. Merrell Dow Pharmaceuticals, Inc., 104 F.3d 1371, 1376 (D.C. Cir. 1997); Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387, 1412-13 (D. Or. 1996); Jones v. United States, 933 F. Supp. 894, 900-01 (N.D. Cal. 1996), aff'd, 127 F.3d 1154 (9th Cir. 1997), cert. denied, 524 U.S. 946 (1998).
39. For example, scientists should be prepared to articulate:
(a) Which hypotheses did they consider (and which did they either not consider or reject after investigation)?
(b) How did they evaluate and test each hypothesis?
(c) Which tests, investigations, or experiments did they use to determine whether their hypotheses were valid? Why did they use those tests? Are there other tests that they did not use? Why not?
(d) Are these the sort of tests that they use in their non-litigation work?
(e) Are these the sort of tests that are used by others in their field? By a majority?
(f) What fundamental principles and assumptions underlie their hypotheses and testing?
(g) What are the established standards and protocols of investigation, analysis, and testing in their field? Did they use and follow them? If not, why not?
(h) What equipment/testing apparatus did they use? Why? Is this the sort of equipment that is standard in their field?
(i) Had this equipment been contemporaneously calibrated and checked for accuracy?
(j) Describe in detail the research and analytical methodologies they used.
(k) Are those methodologies generally accepted? By what percentage of practitioners?
(l) Do those methodologies differ from those used by defendants' experts? Why and how?
(m) Are those methodologies/techniques "testable/falsifiable"?
(n) What is the rate of error of those methodologies/techniques?
(o) Have those methodologies been published in a peer-reviewed journal or otherwise validated by their peers, such as at a conference, etc.? Describe.
(p) Regarding specific causation, explain in detail how they came to conclude that one hypothesis regarding causation fit the facts of the particular case better than other hypotheses.
(q) What data did they rely upon (or reject) and why (or why not)?
(r) Why were the methodologies relevant and useful in analyzing the ultimate question, e.g., causation? Why were those methodologies as sound as (or better than) other methodologies?