33 ELR 10306 | Environmental Law Reporter | copyright © 2003 | All rights reserved


The Reverse Science Charade

James W. Conrad Jr.

James W. Conrad Jr. is counsel for the American Chemistry Council. He received his B.A. from Haverford College in 1981 and his J.D. from George Washington University Law School in 1985. This Article is the outgrowth of a presentation by the author at the 2001 Annual Meeting of the Society for Risk Analysis. The views it expresses are his own. The author would like to thank Gail Charnley and Gary Marchant for helpful comments on an earlier draft.

[33 ELR 10306]

One of the most significant law review articles of the past decade in the area of environmental regulation is Wendy Wagner's "The Science Charade in Toxic Risk Reduction."1 The gist of the article is quite simple: "Agencies exaggerate the contributions made by science in setting toxic standards in order to avoid accountability for the underlying policy decisions."2 The article amply documents the existence of the phenomenon in compelling fashion. Besides being simple to grasp and fundamentally correct in many cases, the article's thesis seems not to have been articulated previously. Thus, the "science charade" concept has now achieved a currency in the field rivaled only by Don Elliott's coining of the phrase "ossification" to describe how the proliferation of procedural requirements and judicial review have combined to rigidify and slow the rulemaking process.3 (As we'll see, these two concepts are related.)4

While there should be no dispute that the science charade as Wagner describes it has been pervasive and problematic, the novelty and catchiness of the phrase may be contributing to another, equally troublesome phenomenon: the "reverse science charade." This problem consists of agencies (or others) exaggerating the limitations of science, and risk analysis, in order to justify regulation on the basis of policy choices—choices that are commonly embodied in default assumptions and safety factors.

After briefly recapping the original concept of the science charade, this Article describes several examples of the reverse science charade in the environmental literature and in U.S. Environmental Protection Agency (EPA) practice. The Article then explains why the reverse science charade is problematic for public policy generally and risk analysis in particular. I also argue that it undercuts itself. Finally, I discuss how Justice Stephen Breyer's concurrence in Whitman v. American Trucking Ass'ns5 may be a harbinger of bad news for proponents of the reverse science charade.

The Science Charade

Wagner's influential article describes in telling detail how, over the years, EPA and other agencies have claimed that particular decisions they made—or delayed making—were premised on scientific considerations, when in fact policy considerations seem to have been the true drivers. For example, she describes how EPA published "a [15-]page presentation of mind-numbing scientific justification" for its 1979 ozone standard, even though the science was insufficient to narrow the choice beyond a wide range; she also quotes then-Administrator Douglas Costle's subsequent concession that the Agency's choice "was a value judgment" that was undeniably influenced by economic and political concerns.6 In some cases, she explains, the evidence is pretty compelling that the relevant decision, e.g., whether to regulate formaldehyde under the Toxic Substances Control Act, was made beforehand, but then dressed up and characterized as a scientific one.7 Examples abound outside of Wagner's article, as well. Consider EPA's 1993 assessment of the risks of "secondhand" or environmental tobacco smoke,8 in which the Agency changed the level of statistical significance to present a "scientific" case to support a prior policy decision—that environmental tobacco smoke should be regarded as a known human carcinogen.9

Wagner also explains the chief motivation for adopting the charade: protection against judicial reversal. Frustrated by reversals of rules that were explicitly based on policy considerations, agencies—whether consciously or unconsciously—have tended to cast their decisions as based on expert analysis of scientific data, in the hope that reviewing courts will be more likely to defer to the agencies' special expertise in scientific matters.10

Wagner offers several solutions to this practice, the most "moderate" of which is quite elegant, at least on its face: [33 ELR 10307] agencies should clearly distinguish between the policy considerations and the science behind their decisions, disclosing how certain or uncertain the science is and how significant the effects of the policy choices are.11 Because this admonition is truly a neutral principle of the sort celebrated by scholars like John Hart Ely,12 the science charade and its solution have been trumpeted by writers at all points on the ideological spectrum—from Public Citizen to the Washington Legal Foundation.13

The more complicated challenge, of course, is identifying where the dividing line should be drawn between science and policy. And this is where the reverse science charade can take root.

Science, Transcience, and Policy

In a helpful side trip through the epistemology of science, Wagner's article begins by noting Thomas Kuhn's point that, at the most basic level, there is no higher standard of scientific truth than the assent of the relevant scientific community. She adds, though, Karl Popper's response that, while scientific propositions may not be "provable," they can be disproved, and on this basis science has proceeded by conducting experiments to see whether particular hypotheses can be falsified. If they cannot, then they come to be regarded as scientific fact.14

Some hypotheses cannot be directly tested, however, and so their validity has to be assessed by making assumptions about how related experiments might bear on their truth. Questions that can only be addressed this way Wagner characterizes as "transcientific," because answering them involves a combination of science and policy. Wagner recognizes that in between the poles of science and transcience lies something called scientific judgment. In her typology, however, whenever there are "significant splits" in the scientific community over such judgment calls, she regards those issues as transcientific, and hence ultimately to be resolved by policy considerations.15

The problem is that many of the most important questions that need to be addressed in environmental regulation are transcientific ones. The most notorious example is how to assess the carcinogenicity of a substance to which people are exposed at low doses, when the only ethical and practicable way to answer the question has been to expose small numbers of laboratory animals to high doses. As a result, we have, at least historically, been forced to make policy (in this case, precautionary) assumptions about extrapolating from animals to people and from high doses to low doses.16

Characterizing how toxic something may be is only one step in the risk assessment process, though. One next needs to determine how much people or other receptors are exposed to the substance. This exposure assessment process has been laced with even greater numbers of assumptions and other policy judgments. Cost-benefit analysis, which incorporates risk assessments to make risk management decisions, is even more dependent on policy-based assumptions that in many cases cannot efficiently be replaced by scientific data.

Science has something to say about transcientific questions, however, and is a crucial part of risk assessments and risk management decisions—decisions that regulators simply cannot avoid making. The important question raised by the two charades is how significant the role of science, or scientific judgment, should be in these hybrid, "science-policy" decisionmaking processes. If it is characterized too glibly as answering them, we have the science charade. If it is too glibly excluded from the answer, we have its reverse.

The Reverse Science Charade

As noted earlier, the science charade can be practiced regardless of one's place in the environmental policy universe. It is not limited to government agencies like EPA, for example; business interests and members of the U.S. Congress often wave the "good science" flag to support policy-based positions, and exaggerate the ability of risk analysis to answer policy-based questions. Environmental interests also trumpet the ability of science to answer political questions when it suits their purposes.17

Similarly, the reverse science charade can be engaged in by anyone interested in environmental regulation. Empirically, however, it appears to be most common among people concerned about the effects of industry influence and overly intrusive, conservative courts on regulatory agencies.18 This section of the Article first canvasses recent publications that evidence the reverse science charade. It then describes how EPA has employed it, demonstrating that the phenomenon is as much a problem as the original science charade. These discussions also draw out some of the shortcomings of the reverse science charade, which will then be addressed in the next section.

[33 ELR 10308]

Examples From Recent Publications

Wagner

We begin our survey at the source, Wagner's article. While it repeatedly catches EPA in the act of overstating the ability of science to answer regulatory questions, the article also consistently understates the role science can play in resolving such questions. For example, the article flatly declares that the choice between threshold and nonthreshold models for carcinogens "cannot be resolved by science and thus must be determined by policy factors."19 As discussed below, the growing body of empirically based knowledge about mechanisms of carcinogenesis is in fact allowing us to choose one model or the other with increasing confidence. It seems entirely appropriate, moreover, to say in such cases that the choice is more a matter of scientific judgment than it is of policy, even science policy.

As noted earlier, Wagner draws the line between science and scientific judgment, on the one hand, and transcience, on the other, as being demarcated by where "significant splits" exist among the scientific community—recognizing, implicitly, that the line could move as the splits narrow. More often, however, her article assumes the futility of "looking to science to resolve transcientific questions"20 —in essence, arguing that this is a sort of logical category mistake. In doing so, I believe she understates the capacity of scientific experimentation and data collection to reduce the policy component of transcientific or "science-policy" choices—that is, to make them wholly or mainly "scientific" ones.

Of prime importance in this connection is the field of physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) modeling, which involves using mathematical equations to describe the movement and persistence of chemicals and their metabolites in and through the different tissues of living organisms.21 PBPK/PD computer models employ anatomical and physiological constants and chemical-specific parameters to predict the delivered dose of a chemical to a target organism when the external concentration or administered dose is known. These models are capable of performing quantitative extrapolations of dose from animal to human, and from exposure route to exposure route, e.g., inhalation to ingestion, matters that historically have been regarded by EPA as "black boxes" requiring—in the former case, at least—the use of policy-based safety factors (typically a conservative order of magnitude). Importantly, because the constants and parameters employed by these models are empirically measurable, the models can be validated. Thus, questions that have previously been regarded as "transcientific" are gradually becoming susceptible to scientific resolution.
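For readers who want a concrete sense of what such a model computes, a deliberately simplified one-compartment sketch is shown below. All of the numerical values are hypothetical and chosen only for illustration; real PBPK/PD models chain many compartments (blood, liver, fat, and so on) together using measured physiological constants. The point is only that internal dose is something one can calculate and validate, rather than assume.

```python
# Toy one-compartment pharmacokinetic sketch (all values hypothetical).
# Real PBPK/PD models link many such compartments with empirically
# measured constants; this illustrates only the basic idea that an
# internal (delivered) dose can be computed from an external dose rate.

def internal_concentration(dose_rate_mg_per_h, hours,
                           volume_l=42.0,      # distribution volume
                           k_elim_per_h=0.1,   # first-order elimination rate
                           dt=0.01):
    """Euler integration of dC/dt = dose_rate/V - k_elim * C."""
    c = 0.0
    t = 0.0
    while t < hours:
        c += (dose_rate_mg_per_h / volume_l - k_elim_per_h * c) * dt
        t += dt
    return c

# Concentration rises toward the steady state dose_rate / (k_elim * V).
c_24h = internal_concentration(10.0, 24.0)
print(round(c_24h, 3))
```

Because every constant in such a model corresponds to something measurable, a skeptic can check each one—which is precisely what makes validation, and hence scientific (rather than purely policy-based) extrapolation, possible.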

Another area where science is becoming able to answer hitherto unanswerable transcientific questions is the field of toxicogenomics.22 Now that the human genome has been mapped, scientists have been able to develop gene chips, or microarrays, that can contain numerous deoxyribonucleic acid (DNA) sequences at specific locations on the chip. These chips can then be used to test indirectly which genes are expressed, i.e., turned on or off, by exposure to particular hazardous agents. Microarrays are more sensitive markers of toxicity than more conventional toxicological endpoints like tumors or lesions, which generally occur only at high doses. Also, the changes in gene expression indicated by microarrays tend to be chemical-specific, whereas the more obvious physical changes traditionally used by toxicology may be produced by several different agents. An illustrative example of how toxicogenomics is answering questions that previously could not be tested directly is the effect on humans of low doses of ionizing radiation. Using microarrays, the U.S. Department of Energy's Low-Dose Radiation effects program is exploring the actual shape of the low-dose response curve, which has historically been modeled by a linear, no-threshold model to be protective.23

It must be pointed out that many of the advances in these fields have occurred since Wagner's article went to print eight years ago, and it would be unfair to criticize the article for not addressing them. But the point remains—the universe of transcientific questions is not a steady-state one; it is clearly shrinking. It is at least risky, and in many cases may turn out to be wrong, to declare any particular scientific question to be a priori unanswerable by science or scientific judgment.

Applegate and Campbell-Mohn

A more recent example of the reverse science charade is John Applegate and Celia Campbell-Mohn's "Risk Assessment: Science, Law & Policy."24 This article trumpets "the fact that there is no level at which [carcinogens and certain other pollutants] can be deemed safe as a matter of strict scientific fact," arguing that this fact "ought to be regarded as a reason to permit regulatory agencies to act with particular vigor."25 They continue: "Precise estimates and purely scientific risk levels … undermine agencies' ability to protect public health from dangerous substances."26

By definition, whatever level of risk one regards as "safe" is a policy choice, not a matter of "strict scientific fact." On the other hand, given any particular choice of risk level, progress in fields like PBPK/PD modeling and toxicogenomics is making it increasingly possible to say, as a scientific matter, how likely a particular dose of a chemical is to present a chosen level of risk.

The authors make some more radical attacks on the proper role of science and risk assessment in environmental regulation. Assailing the call for presentation of central tendency estimates of risk, they charge that "averages can be extremely misleading, and they leave above- or below-average individuals unprotected—a result that seems inconsistent with the preventive goals of environmental legislation."27 Taking on the growing acceptance of probabilistic approaches to risk assessment, they assert that risk ranges, [33 ELR 10309] rather than point estimates of risk, "significantly reduce the utility and manageability of risk assessment results in risk management and as components of cost-benefit analysis and risk computation. Moreover, they increase the ability to manipulate results by choosing values that justify a particular result…. Ranges may be more descriptively accurate, but they are less useful."28

While it is true that regulatory levels based on central tendency estimates of risk may leave substantial numbers of people unprotected, it is difficult to understand how average or other central tendency measures of a phenomenon—presented along with other percentile measures—can be anything but helpful to a risk manager, or anyone interested in the issue, for that matter. Certainly a central tendency estimate is more informative—and less misleading—than an upper bound estimate. EPA's 1995 Policy for Risk Characterization calls for "information on the range of exposures derived from exposure scenarios and the use of multiple risk descriptors (e.g., central tendency, high end of individual risk, population risk, important subgroups, if known) …."29 Similarly, when it reauthorized the Safe Drinking Water Act (SDWA) in 1996, Congress called on EPA, in setting national primary drinking water regulations, to publish information on "the expected risk or central estimate of risk [and] each appropriate upper-bound or lower bound estimate of risk."30 As a matter of risk management, one may—and generally should—set a regulatory limit so that it protects a large percentile of the exposed population. But it can only be illuminating, as part of the risk assessment process, to know what the average or median values for various parameters are.

Indeed, ideally one wants to know the full distribution of all the relevant values in a risk assessment, in order to have the most complete picture possible. Using distributions instead of single point estimates is more computationally demanding, but with current software and hardware this difference can be trivial. Nor do distributions increase the ability to manipulate results. To the contrary, specifying the distributions for relevant variables increases the transparency of the process, because it enables others to see how the result is affected by various choices among the distributions. As Administrator Carol Browner's cover memorandum for EPA's Policy for Risk Characterization states, "we must adopt as values transparency in our decisionmaking process …. This means we must fully, openly, and clearly characterize risks."31
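The manageability objection can be tested concretely. The toy Monte Carlo sketch below (every input value is hypothetical, not drawn from any actual EPA assessment) compares a conservative point estimate—in which upper-bound inputs compound—with a distributional estimate that reports percentiles:

```python
# Point-estimate vs. distributional risk characterization (toy example).
# All inputs are hypothetical, chosen only to illustrate the comparison.
import random

random.seed(0)

SLOPE_FACTOR = 0.01  # (mg/kg-day)^-1, hypothetical cancer slope factor

def dose(conc, intake, weight):
    """Daily dose in mg/kg-day from water concentration and intake."""
    return conc * intake / weight

# Conservative point estimate: upper-bound inputs stacked together.
point_risk = dose(conc=0.05, intake=3.0, weight=60.0) * SLOPE_FACTOR

# Distributional estimate: sample each input, then report percentiles.
risks = sorted(
    dose(conc=random.lognormvariate(-3.7, 0.5),    # mg/L, median ~0.025
         intake=random.uniform(1.0, 3.0),          # L/day
         weight=random.normalvariate(70.0, 10.0))  # kg
    * SLOPE_FACTOR
    for _ in range(10_000)
)
median = risks[len(risks) // 2]
p95 = risks[int(0.95 * len(risks))]
print(f"point estimate: {point_risk:.2e}")
print(f"median: {median:.2e}  95th percentile: {p95:.2e}")
```

Because the sampled distributions are stated explicitly, anyone can rerun the calculation with different assumptions and see exactly how the result changes—the transparency point made in the text. Far from inviting manipulation, the distributional presentation exposes it.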

In fact, the prominent administrative law scholar Richard Pierce has argued that the use of single point estimates or "narrow ranges" that Applegate and Campbell-Mohn endorse shows the "'perils of precise quantification'—a recurring symptom of the science charade."32 Pierce himself may have fallen victim to the reverse science charade, however. In the same article, he concludes a long discussion about how cost-benefit analysis is laden with policy choices by declaring that "cost-benefit analysis is too indeterminate and value-laden to be useful to a court."33 Pierce is probably right in the point he is trying to make: cost-benefit calculations can produce such wildly different results, depending on the range of defensible inputs to them, that it may be unwise to set up cost-effectiveness as the dispositive test for upholding or invalidating a regulation. I am concerned, however, that analyses and conclusions like Pierce's will be cited more broadly by proponents of the reverse science charade to argue that cost-benefit analysis is not useful for any purpose, or that courts should never scrutinize a cost-benefit evaluation. As Pierce himself argues, the stakes in environmental regulation are simply too high for us to turn a blind eye to them.34 When done with sufficient rigor and transparency, cost-benefit analysis can be very illuminating, forcing us to confront the trade-offs that are inherent in the business of the regulatory state. And courts should be able to review these analyses, where a statute or regulation makes them relevant to an agency's decision, at least to assure that the underlying science and assumptions are minimally reliable and reasonable, respectively.35

Public Citizen

The reverse science charade was prominently deployed during U.S. Senate consideration of John Graham's nomination as Administrator of the Office of Management and Budget's Office of Information and Regulatory Affairs. In an exceptionally long and detailed treatise—all the more remarkable for how quickly it appeared on the scene—Public Citizen and several other nongovernmental organizations consistently minimize the contribution that science can make to risk analysis or cost-benefit analysis. The document first mischaracterizes the views of Graham and his fellow travelers as believing that risk analysis and cost-benefit analysis are purely "scientific." The authors then attack this straw man by arguing instead that both processes are really nothing more than crass manipulations designed to lend a veneer of authority to purely political decisions:

Graham's field of "risk management" uses statistical and other data and modeling methods, including the results of risk assessments, to examine our choices about assessing risks to public health and safety…. Because any evaluation of the end result depends upon knowing the precise policy decisions and information criteria that were used in the beginning—conclusions in risk management are based on policy values and categories that have little to do with science.36

The resulting limitations of risk analysis are, in their view, drastic and qualitative:

[33 ELR 10310]

The lack of consensus on risk management principles is an insurmountable stumbling block for its broader application…. The complexity of the issues quickly outgrows the ability of risk management to provide useful information.37

Figure 1 is a chart from the Public Citizen report that shows visually the vanishingly small role its authors accept for science in risk analysis.38 Not surprisingly, they reach a similar conclusion about cost-benefit analysis: "At its broadest application, [economic analysis] is actually politics, masquerading as science."39

Keystone Center/Center for Science, Policy, and Outcomes

Most troubling, for those who believe science has a significant role in environmental decisionmaking, is a report by the staff of the Keystone Center and the Center for Science, Policy, and Outcomes entitled New Roles for Science in Environmental Decision Making.40 The authors recount several examples in which the scientific questions underlying important policy decisions became "battles of the experts," with neither side accepting the other's conclusions. While such battles are admittedly all too commonplace, the lesson the authors draw is much more alarming:

Complex environmental problems rarely allow science to achieve … definitive, authoritative answers that can provide a predictive foundation for action…. Once an issue becomes highly contentious it may be beneficial to explicitly minimize the role of science in the political process until a clear problem definition emerges and an adaptive approach to addressing the problem is accepted…. Adaptive approaches do not require scientific certainty prior to taking action—in fact, they assume that such certainty cannot be achieved. Rather, [they] define a central role for science in monitoring progress toward predefined goals …. It may often be preferable to designate a quiet time for science until after the problem is well-defined and after desired goals are identified through political means.41

Such an approach seems to be a declaration of surrender: science is so limited in its ability to inform decisions that it should be asked to leave the room until after a decision is made based on purely political grounds. Then it may be allowed back in, but only to serve in a monitoring role, to help us evaluate the consequences of the decision. The authors fail to address, however, what we should do if science ultimately tells us that the political decision made earlier was the wrong one.42

Examples From EPA Practice

The reverse science charade would not be terribly troubling if it were only a sort of parlor game played by academics and interest groups. Unfortunately, EPA plays the reverse science charade about as often as it plays the original science charade. For every instance in which EPA has concealed the policy grounds for a decision by dressing it up in the garb of scientific rationality, one can find another instance in which EPA has declined to acknowledge, or at least be persuaded by, relevant scientific evidence, instead insisting on basing its decision—quite plainly—on its own, long-standing policy choices and the default assumptions embodying them.

Nonthreshold Approach to Low-Dose Extrapolation

The most inertial of EPA's policy choices is its nonthreshold, linear approach to extrapolating from high to low doses in assessing carcinogenicity. As long ago as 1986, EPA's Guidelines for Carcinogen Risk Assessment stated:

No single mathematical procedure is recognized as the most appropriate for low-dose extrapolation in carcinogenesis. When relevant biological evidence on mechanisms of action exists (e.g., pharmacokinetics, target organ dose), the models or procedure employed should be consistent with the evidence…. The Agency will review each assessment as to the evidence on carcinogenesis mechanisms and other biological or statistical evidence that indicates the suitability of a particular extrapolation model.43

Contrary to this open-minded, flexible statement, EPA's behavior has often been to cling to the linear approach no matter how well-documented and plausible a nonlinear model is.
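The practical stakes of the model choice can be made concrete with a toy calculation. The numbers below are hypothetical and grossly simplified—real dose-response modeling fits multi-parameter models to full data sets—but they show why a linear no-threshold default and a threshold (nonlinear) model, fit through the very same high-dose animal observation, diverge exactly where regulation matters: at low environmental doses.

```python
# Toy comparison of low-dose extrapolation models (hypothetical values).
# Both models are anchored to the same high-dose animal study result;
# they differ only in what they assume happens below the tested range.

HIGH_DOSE = 100.0  # mg/kg-day, dose administered in the animal study
HIGH_RISK = 0.1    # excess tumor incidence observed at that dose
THRESHOLD = 10.0   # mg/kg-day, hypothetical cytotoxicity threshold

def linear_no_threshold(dose):
    """Risk proportional to dose all the way down to zero."""
    return HIGH_RISK / HIGH_DOSE * dose

def threshold_model(dose):
    """No excess risk below the threshold; linear above it."""
    if dose <= THRESHOLD:
        return 0.0
    return HIGH_RISK * (dose - THRESHOLD) / (HIGH_DOSE - THRESHOLD)

low_dose = 0.5  # mg/kg-day, an environmentally relevant exposure
print(linear_no_threshold(low_dose))  # small but nonzero
print(threshold_model(low_dose))      # zero
```

The two models agree at the tested high dose, so no feasible animal study can distinguish them directly; that is what makes the question "transcientific." Mode-of-action evidence of the kind discussed below is precisely what lets scientific judgment, rather than policy default, select between them.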

• Chloroform. The most notorious recent example of this phenomenon is the national primary drinking water standard for chloroform. Under the SDWA, EPA is required to set maximum contaminant level goals (MCLGs) for drinking water contaminants that represent "the level at which no known or anticipated adverse effects on the health of persons occur and which allows an adequate margin of safety."44 EPA had proposed a zero MCLG for chloroform in 1994, based on a linear extrapolation model.45

In 1996, Congress enacted revisions to the SDWA that imposed a deadline for the chloroform rulemaking, among others, and declared that, "to the degree that an Agency action is based on science, the Administrator shall use … the best available, peer-reviewed science and supporting studies conducted in accordance with sound and objective scientific [33 ELR 10311] practices."46 EPA commissioned an advisory committee to review the numerous toxicological studies that had been published since the proposal. It also published two notices of data availability in 1997 and 1998, the latter of which discussed the results of another expert panel whose work was independently peer reviewed. Discussing these efforts, EPA's 1998 notice concluded that although the precise mechanism of chloroform's carcinogenicity had not been established, its mode of action was sufficiently well established to be cytotoxicity followed by regenerative cell proliferation, rather than genotoxicity. As a result, it endorsed a nonlinear approach and solicited comment on a 0.3 milligrams per liter (mg/L) MCLG.47

Nonetheless, later that year EPA finalized a zero MCLG for chloroform. In a rather remarkable statement, it said that, although it "believes that the underlying science of using a nonlinear extrapolation approach … is well-founded," further deliberations with EPA's Science Advisory Board (SAB) were required "prior to departing from a long-held EPA policy."48 While this could be characterized as a conventional science charade (EPA delays action for yet another scientific consultation), I think it is better described as a reverse science charade: enough research data had been collected to make the choice of extrapolation model one of scientific judgment, rather than transcience or policy. Nonetheless, EPA refused to accept that result and held fast to its long-standing policy choice. On the day the matter was set for oral argument in the U.S. Court of Appeals for the D.C. Circuit, EPA issued a draft SAB report, which also concluded that chloroform has a cytotoxic mode of action, and that a nonlinear model was "scientifically reasonable." On that basis, EPA abandoned its earlier decision. The court nonetheless took the trouble to overturn the rule, holding that EPA had violated the SDWA's requirement that it use "the best available science."49 Since then, EPA has released a draft notice proposing an MCLG of 0.07 mg/L. Quoting the 1986 guidelines excerpted above, EPA states that it has "fully evaluated the science on chloroform" and "believes that the chloroform dose-response is nonlinear."50

• The Integrated Risk Information System (IRIS) Database. Absent such a clear statutory directive and judicial enforcement, however, EPA has shown little willingness to implement the flexibility it announced almost 20 years ago. In 2000, EPA retained an expert panel to review the level of data uncertainty and variability in the IRIS database, EPA's central collection of Agency consensus values for the cancer and noncancer effects of chemicals.51 At the outset, it is interesting to note that the reviewers caught EPA engaging in the conventional science charade:

The practical effect of [the fact that EPA does not appear to have a solid set of established consensus guidelines on handling uncertainty] is that EPA has, at times, responded to these challenges by resorting to policy decisions, rather than handling uncertainty in a quantitative manner…. There have been occasions when EPA's decisions on how to handle data uncertainties and variability have been advertised as science-based when, in reality, it would have been more appropriate to describe them as grounded in policy decisions.52

The report goes on, however, to document that EPA is more commonly conducting a reverse science charade, failing to recognize the very advances in science that it says it cares about, and instead hewing to old policy shibboleths: "Discussion of such topics as the human relevance of the tumorigenic effect, the pharmacokinetics and dynamics of the compound in biological systems and species differences, and the mode of action is limited."53 Indeed, they noted:

All of the [sixteen] cancer assessments reviewed in this study employed a no-threshold model to derive cancer toxicity endpoints. Several of the reviewers objected vehemently to use of such a default no-threshold approach and offered other modeling options for quantifying cancer toxicity endpoints. In particular, reviewers objected to application of non-threshold models to the derivation of cancer risk values for apparent non-genotoxic or promoting chemicals. It is to be hoped that increasing use of the EPA's 1996 proposed Guidelines for Carcinogen Risk Assessment should result in application of other models and methods and update of older IRIS assessments.54

• Pesticide Inerts. In spite of such broad and presumably influential advice, EPA persists in using policy choices to answer questions about carcinogenesis, rather than considering the weight of the evidence. Last year, EPA launched an effort to reclassify eight pesticide inerts from its "List 2" (potentially toxic inerts/high priority for testing) to "List 1" (inerts of toxicological concern).55 Its rationale was not any [33 ELR 10312] new studies, or a weight-of-the-evidence analysis of existing studies, but rather the belated application of a 15-year-old policy that an inert should be placed on List 1 if it has been found to cause cancer in one sex of one animal species in a National Toxicology Program (NTP) study.56 This initiative is particularly puzzling because it represents the triumph of an old policy over not only the weight of the scientific evidence but even more recent EPA policies—like EPA's 1999 Cancer Risk Assessment Guidelines—that call for weight-of-the-evidence decisionmaking.57

Of the eight substances, the case of EGBE58 is most remarkable. EPA's current proposal is based on an NTP study from 1998 that found "some" evidence of carcinogenicity in mice, but not rats.59 That study was included, however, in a comprehensive review that EPA conducted for the IRIS database regarding the carcinogenicity of EGBE. The IRIS review, which included human data as well as other animal studies, specifically stated that the NTP study was of "uncertain relevance" to humans. The IRIS review—which officially represents EPA's "consensus" views on toxicological issues—concluded that the carcinogenic potential of EGBE "cannot be determined at this time."60 Other analyses that have considered the NTP study, such as one prepared by the U.S. Food and Drug Administration's Cosmetic Ingredient Review Expert Panel, also have concluded that EGBE should not be regarded as a human carcinogen.61 Rather than implementing EPA's own consensus view or the conclusion of other weight-of-the-evidence panels, the Agency has proposed changing labeling requirements for EGBE based on rigid application of its 1987 "one animal" policy.

Other Dose-Response Policy Choices

In General. Outside the area of low-dose extrapolation for carcinogens, EPA's approach, while less well-documented, is nonetheless consistent. EPA recognizes, in theory, the progress that is being made in the field of toxicology, and advertises a willingness to depart from hoary policy-based assumptions. In practice, however, EPA is exceedingly slow to take such action. Instead, it either ignores new developments, chooses not to accept them, or concludes that they do not sufficiently resolve the uncertainty they are intended to address. Because of the rigorous, comprehensive, and representative nature of its inquiry, the words of the IRIS expert panel referenced above are worth quoting at length:

EPA decisionmakers … have made only limited progress in replacing ad hoc procedures based on a few simple but sweeping assumptions with procedures based on the range of risk values consistent with data-derived information about biologic mechanisms of carcinogenic or other toxic effects, chemical disposition in the body, actual human exposures, and other factors influencing the range of biologically relevant risk values.62

In particular, it continued:

Still generally lacking is discussion explaining why: (i) humans are considered to be more sensitive than rodents, when (in some cases) data may exist to indicate that, for the selected critical effect, this may not be true; (ii) adjustment for less-than-lifetime to lifetime duration is needed when (for some chemicals) pharmacokinetic and physiochemical data may exist to indicate that bioaccumulation and tissue retention are unlikely to occur; and (iii) a particular animal health effect is being used to estimate human risk when human data demonstrate that the critical human health concerns are entirely different.63

Similarly, the authors of a recent article on PBPK/PD modeling note that a prime reason EPA has been so slow to actually base health effects values on such models is "regulatory staff … reluctance to accept apparently less conservative toxicity criteria when there still remains some uncertainty in using PBPK/PD models (albeit less than using [the current EPA approach]) to extrapolate from animals to humans and from high to low doses."64 In other words, even though PBPK/PD models reduce the uncertainty involved in these extrapolations, EPA staff use the residual uncertainty as a basis for refusing to depart from their long-standing policy choices.

Food Quality Protection Act (FQPA). An excellent example of the reverse science charade in EPA noncancer risk assessment is EPA's reassessment of the risks posed by various pesticides under the FQPA of 1996.65 The FQPA charges EPA with reevaluating pesticide tolerances, particularly to ensure the protection of children. Unfortunately, these reassessments are often based less on the review of new data than on the application of new, precautionary policy choices. While at least one of these policy choices was expressly directed by Congress,66 in other cases EPA seems to use the FQPA reassessment process as a way of proliferating new policy positions to offset the advances in science that would otherwise allow tolerances, in theory at least, to be relaxed.

A prime example is chlorpyrifos, a pesticide that has been studied for potential neurotoxicity.67 Three different physiological endpoints can be measured in an attempt to assess this possibility, all involving the inhibition of cholinesterase enzymes: (1) plasma cholinesterase (BuChE); (2) acetylcholinesterase (AChE) of red blood cells (RBC); and (3) AChE within neuronal tissues. The first two of these are only biomarkers of exposure; neither has been associated with actual toxic effects.68 The third endpoint has been associated [33 ELR 10313] with impaired cognitive function, although it can only be feasibly measured in animals, not humans.

EPA had adequate rat data for all three endpoints. Yet, instead of relying on the one that actually has some association with toxicity (#3 above), EPA chose to continue to rely on plasma BuChE, presumably because it is observed at the lowest doses.69 This decision runs contrary to those of the World Health Organization (WHO) and the state of California, both of which rely on AChE of RBC and AChE within (animal) neuronal tissues. WHO declined to rely on plasma BuChE since "there is no evidence that [it] has any adverse effect."70

Refusal to Use Third-Party Human Test Data. The chlorpyrifos reassessment also raises one of EPA's most blatant reverse science charades: its recent categorical refusal, when doing risk assessments, to consider human test data produced by privately funded research. Data from human subjects research, conducted under strict ethical standards, can be very useful in risk assessment. For example, such data may show that humans are more than 10 times more sensitive than lab animals, or that a substance produces an effect in humans that it does not produce in animals.71 Human tests may also enable us to understand pharmacokinetic factors such as absorption, distribution, metabolism, and elimination, so that the 10x animal/human uncertainty factor used for extrapolating from animal data may be replaced by an actual value that may be higher or lower.72 For some tests, such as odor detection studies, only humans can serve as useful test subjects.

For almost 30 years, EPA followed other authoritative bodies by using human test data collected under the ethical standards applicable at the time. In 1998, however, EPA announced that it had not used human data in any FQPA assessments,73 and began effectively disregarding human test data from studies EPA had not conducted or funded. This policy was eventually implemented Agencywide in a December 14, 2001, press release that referred the issue to the National Academy of Sciences for an in-depth review.74 The Agency's stated purpose for this moratorium was concern about whether these studies have been conducted under adequate ethical safeguards.75 Clearly, human subjects research, whether conducted by EPA or private parties, should meet agreed-upon ethical standards. Where studies have been performed under ethical standards applicable at the time, however, it is arbitrary and capricious for EPA not to consider the data they produce. Why perpetuate policy-based uncertainty factors when, in fact, that uncertainty does not exist?

The result of EPA's moratorium, in the case of chlorpyrifos, was that after 14 years of basing the chlorpyrifos reference dose on human data, EPA reinstated a 10x uncertainty factor for extrapolating from rats to people.76 Again, both the WHO and California (and Australia) have taken a different tack, the former stating that "if the relevant endpoints have been assessed [human] studies are the most appropriate for setting the acute [reference dose]."77
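The stakes of such uncertainty-factor disputes are easy to see in the underlying arithmetic. A reference dose is conventionally derived by dividing a no-observed-adverse-effect level (NOAEL) by the product of the applicable uncertainty factors, so reinstating a default 10x interspecies factor lowers the resulting value tenfold. The sketch below illustrates this with purely hypothetical numbers; the NOAEL and the data-derived 3x factor are invented for illustration and are not taken from the chlorpyrifos record:

```python
def reference_dose(noael_mg_kg_day, uf_interspecies=10.0,
                   uf_intraspecies=10.0, uf_extra=1.0):
    """Reference dose (RfD) = NOAEL / (product of uncertainty factors)."""
    return noael_mg_kg_day / (uf_interspecies * uf_intraspecies * uf_extra)

# Hypothetical NOAEL of 1.0 mg/kg/day from an animal study (illustrative only).
default_rfd = reference_dose(1.0)  # default 10x animal-to-human and 10x human-variability factors
data_rfd = reference_dose(1.0, uf_interspecies=3.0)  # if pharmacokinetic data supported a 3x factor

print(default_rfd)         # 0.01
print(round(data_rfd, 4))  # 0.0333
```

Because the factors multiply, each policy-based default compounds the others; replacing even one default with a data-derived value can shift the final standard severalfold, which is why the choice between defaults and data matters so much in practice.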

Ironically, EPA's exclusion of human data in many cases may lead to standards that are less protective, not more so. An important paper by Michael Dourson and others surveyed the IRIS database and found that of the values calculated using human data, 36% were more stringent, i.e., lower, than they would have been if derived only from animal data, and 23% could not have been based on animal data at all because human studies identified a completely different endpoint of toxicity or because the available animal data were insufficient or inappropriate.78 Another analysis of 150 pharmaceutical compounds found that 30% of the human toxicities observed bore no relationship to those seen in animal tests.79 It is thus unreliable to assume that human data will result in less protective standards and animal data in more protective ones. Moreover, a ban on privately funded human testing will ensure that no such tests uncover future examples where humans are more or differently sensitive than animals.

Problems With the Reverse Science Charade

The narrow conception of science employed in Wagner's article leads to the most fundamental problem with the reverse science charade. The article expresses concern that waiting for science to answer questions that it cannot "leav[es] significant gaps in the regulation of toxics."80 Similarly, it blames the science charade for the "ossification" of rulemaking mentioned at the outset of this Article, arguing that the charade "bears some responsibility for the agencies' slow pace in setting toxic standards."81 Such statements raise a profound question: if the models we use to estimate the toxicity of substances and the risks they pose are all premised on multiple layers or concatenations of policy assumptions, then how do we know that gaps really do exist in the regulation of "toxics," or that more "toxic standards" are needed? Certainly such statements cannot be made with scientific certainty under the article's thesis, because they, too, are based on transcientific or policy choices. If "science" can tell you little more than "the effects of high doses of formaldehyde on the total number of nasal tumors in laboratory mice,"82 and all the remaining steps of the risk assessment process are "transcience" that "ultimately … must be based on policy factors,"83 then how do we have any idea what we are doing in the field of toxic chemical regulation? [33 ELR 10314] How do we know what substances really are toxic, and how toxic they are? Such a limited definition of "science" leads Wagner's article onto logically thin ice, and possibly to saw a circle around itself.

This result is not just an academic problem, moreover. If science is truly unable to answer or even inform policy choices, then how should we make them? In such a world, Public Citizen's assault on risk analysis—that "conclusions in risk management are based on policy values and categories that have little to do with science"—is indeed plausible, though not for the reasons its authors suppose. If science provides very little guidance for risk analysis, then under federal environmental statutes, the policy choices filling the gaps left by science's limitations will be guided by one of two things: hazard data or a simplistic use of precaution. As I explain below, either choice leads to bad public policy.

Problems With the Hazard Approach

Proponents of the reverse science charade contend, essentially, that all "science" can tell us about the risks posed to humans by most substances is their intrinsic hazard—that is, their absolute propensity to cause adverse consequences, without regard to whether or how much people are even exposed to them. Indeed, in the absence of epidemiological data, this view maintains that science only tells us about the intrinsic hazard these substances pose to animals at high doses.

The main problem with this approach is that it is on balance overprotective, almost certainly by a very wide margin. Bruce Ames, Lois Gold, and their colleagues have noted that roughly one-half of all synthetic chemicals tested on rats or mice have proven carcinogenic—but so have roughly one-half of all naturally occurring chemicals.84 Ames and Gold have also observed that a cup of coffee contains more than 1,000 chemicals. Of the 26 that have been tested, 19 are rodent carcinogens.85 It simply strains credulity to believe that all—or even many—of these substances are in fact carcinogenic to humans at actual exposure levels. A hazard-based approach to regulation would also prove extremely costly, particularly if it requires testing—and then regulating—the other 974-odd chemicals contained in coffee.

These same statistics illustrate a second problem with a purely hazard-based approach: its arbitrariness. Man-made pesticides are intensively tested and tightly regulated. Yet roughly 50% of all natural plant pesticides, which are completely unregulated, are also rodent carcinogens, and are normally ingested by people in quantities many orders of magnitude greater than their intake of synthetic pesticides.86 On what basis do we justify such a thorough examination of synthetic pesticides and such complete disregard of natural pesticides? If we believe the rodent bioassay results for the former are meaningful, why are those for the latter of no concern?

In her article, Wagner argues that one consequence of the science charade is a regulatory system that emphasizes chemicals with "more scientifically established health effects … over less-studied substances."87 To the contrary, I believe that this result is more a consequence of the reverse science charade, with its emphasis on absolute hazard data, multiplied inexorably by default safety factors. A system that was more willing to recognize the progress that has occurred in the field of toxicology, and to take more account of exposure data, might be more able to concede that some well-studied chemicals are not as risky as once appeared, and that some others might warrant closer study.

By contrast, the National Research Council has written:

When scientific knowledge is unavailable or overlooked, regulations and policies may fail to address serious environmental problems or unnecessarily seek to overprotect every person or ecosystem against hazards that are minor and that few will actually experience. This can carry serious implications for public health and the environment or impose a heavy burden on society and the economy without providing appreciably better protection for most people or ecosystems.88

Problems With the Precautionary Approach

Practitioners of the reverse science charade are content for science to play a minor role in a process dominated by policy choices since the choices that have long held sway effectuate "the preventive goals of environmental legislation."89 Since they see these laws as embodying a precautionary philosophy, it is acceptable, indeed desirable, for decisions to be made on the basis of conservative policy assumptions except in those rare (or perhaps nonexistent) cases where the science really does resolve the relevant questions decisively.90

At the outset, it should be obvious that regulating based on precaution raises all the same problems of overinclusiveness and arbitrariness that were just discussed in connection with reliance on hazard data. In fact, those problems are aggravated, since the strong form of the precautionary approach begins by ruling out new activities or substances that cannot be shown, before use, to meet some standard of safety.91 The reverse science charade's resistance to methods that reduce or even quantify uncertainty makes it only more difficult to meet the level of certainty required to establish safety under the precautionary paradigm.

The precautionary approach is also vulnerable to the more radical critique I suggested at the beginning of this section: if science is unable to inform the risk assessment process in any significant way, how do we even know that particular substances are "toxic," or that more (rather than less) regulation is needed? A precautionary approach essentially results in risk-averse policy choices bootstrapping themselves into an ever-more protective cycle, increasingly divorced from any means of validating or measuring the risk being avoided.

[33 ELR 10315]

On the other hand, proponents of the precautionary approach may yet find that the "preventive goals" of federal environmental laws do not require the degree of precaution that they seek. This last prospect is explored in the final section of this Article.

American Trucking Associations and Margins of Safety

Without question, most federal environmental statutes embody a "prevention" orientation. As recently as 1990, Congress retained the Clean Air Act's (CAA's) "ample margin of safety" standard for hazardous air pollutants, at least as a "hammer" if Congress proved unable—as it indeed has—to devise another standard within eight years of EPA's promulgation of technology-based standards.92 What is less clear, however, is exactly what these sorts of standards mean in actual application. In this connection, I believe that Justice Breyer's concurrence in American Trucking Associations,93 does not bode well for those who would look to federal laws to mandate the reverse science charade.

One of the most baffling conundrums of federal environmental regulation is how to apply the CAA's "adequate margin of safety" standard for national ambient air quality standards to pollutants that exhibit nonthreshold effects. Section 109(b)(1) of the Act requires EPA to set "ambient air quality standards the attainment and maintenance of which in the judgment of the Administrator … and allowing an adequate margin of safety, are requisite to protect the public health."94 If an air pollutant poses some finite amount of risk at any level of exposure, such that the only safe dose is zero, how does one then provide for an adequate margin of safety—set the standard below zero? This quandary is nicely described in a Pierce article on the D.C. Circuit's decision in American Trucking Ass'n v. EPA.95 With his characteristic frankness, Pierce describes Congress' mandate here as an "incoherent" standard.96

The primary issue before the U.S. Supreme Court in American Trucking was whether this statutory standard was so standardless that it violated the nondelegation doctrine. The D.C. Circuit had declared that it was, at least the way that EPA had interpreted it. The lower court concluded that EPA's interpretation lacked any "intelligible principle" by which one could say "how much is too much."97 While the D.C. Circuit's holding did not depend on the nonthreshold "below zero" conundrum, the issue was certainly posed for the Court to address. Before the Court, the respondents argued that the invalidity of the statute was most plainly demonstrated in the case of ozone, a nonthreshold pollutant, because it forced EPA to choose an arbitrary stopping point along what really is a smooth continuum of risk at any level greater than zero.98

The Court was unimpressed, however, asserting blandly that the word "requisite" sufficiently bounded EPA's discretion so that it would not set a level "lower or higher than is necessary [] to protect the public health with an adequate margin of safety."99 Maddeningly, the Court in two consecutive sentences noted that any level of ozone exposure posed a risk, but then glibly asserted that EPA could somehow set a level greater than zero that was yet able to protect the public health and add a margin of safety.100 It conveniently avoided the question it begged, which is how that could be done.

Justice Breyer's concurrence, however, does explain how, and is worth quoting at length:

These words [requisite to protect the public health with an adequate margin of safety] do not describe a world that is free of all risk—an impossible and undesirable objective. Nor are the words "requisite" and "public health" to be understood independent of context. We consider football equipment "safe" even if its use entails a level of risk that would make drinking water "unsafe" for consumption. And what counts as "requisite" to protecting the public health will similarly vary with background circumstances, such as the public's ordinary tolerance of the particular health risk in the particular context at issue. The Administrator can consider such background circumstances when deciding what risks are acceptable in the world in which we live.

The statute also permits the Administrator to take account of comparative health risks. That is to say, she may consider whether a proposed rule promotes public safety overall. A rule likely to cause more harm than it prevents is not a rule that is "requisite to protect the public health." For example … the Administrator has the authority to determine to what extent possible health risks stemming from reductions in tropospheric ozone (which, it is claimed, helps prevent cataracts and skin cancer) should be taken into account in setting the ambient air quality standard for ozone.

The statute's words, then, authorize the Administrator to consider the severity of a pollutant's potential adverse health effects, the number of those likely to be affected, the distribution of the adverse effects, and the uncertainties surrounding each estimate.

This discretion would seem sufficient to avoid the extreme results that some of the industry parties fear. After all, the EPA, in setting standards that "protect the public health" with "an adequate margin of safety," retains discretionary authority to avoid regulating risks that it reasonably concludes are trivial in context.101

Admittedly, Justice Breyer's explanation is his own, not the majority's. But it is difficult to see any other way for the Court—indeed, any court—to explain how one can have an adequate or ample margin of safety in regulatory situations where some residual amount of risk remains. The result of this logic, moreover, is profound. It means that laws based on "preventive goals" do not necessarily require that the most protective or preventive approach be adopted by regulatory agencies. It means that "safety" does not necessarily require the elimination of risks. Most fundamentally, it means that these laws do not mandate a strictly precautionary or hazard-based approach, but rather incorporate the [33 ELR 10316] concepts of risk assessment and comparative risk analysis. Justice Breyer's concurrence has exposed a narrow—but deep—crevasse between much federal environmental legislation and the purely preventive agenda that some have argued it embodies. If more broadly followed, it will undermine the principal legal justification for the reverse science charade.

Conclusion

The science charade arises when agencies extend science beyond its proper bounds in an attempt to conceal policy choices. It has been motivated in large part by judicial reversal of decisions based explicitly on policy choices. The reverse science charade arises when agencies minimize the role of science to enable greater use of policy choices. It has been justified in large part by recourse to the "preventive goals" of federal laws. Justice Breyer's concurrence explains how the American Trucking decision provides EPA with leeway to make policy choices explicitly, without requiring them to be based on absolute notions of prevention. By doing so, it frees the Agency from having to engage in either charade.

Figure 1

[SEE ILLUSTRATION IN ORIGINAL]

1. Wendy Wagner, The Science Charade in Toxic Risk Reduction, 95 COLUM. L. REV. 1613, 1617 (1995).

2. Id. at 1617.

3. E. Donald Elliott, Remarks at the Symposium on Assessing the Environmental Protection Agency After Twenty Years: Law, Politics, and Economics (Nov. 15, 1990), cited in Thomas McGarity, Some Thoughts on "Deossifying" the Rulemaking Process, 41 DUKE L.J. 1385, 1385-86 (1992).

4. See note 81 and accompanying text.

5. 531 U.S. 457, 31 ELR 20512 (2001).

6. See Wagner, supra note 1, at 1640-41.

7. See id. at 1644-49.

8. U.S. EPA, RESPIRATORY HEALTH EFFECTS OF PASSIVE SMOKING: LUNG CANCER AND OTHER DISORDERS (1993).

9. The author is no fan of tobacco smoke, but the fact remains that EPA's reassessment would not have found that second-hand smoke is a carcinogen had it stuck to the 95% confidence level for statistical significance. The post-hoc nature of EPA's science on this topic is described, albeit tendentiously, in Flue-Cured Tobacco Coop. Stabilization Corp. v. EPA, 4 F. Supp. 2d 435, 463-66, 28 ELR 21445, 21455-57 (M.D.N.C. 1998), rev'd on other grounds, 313 F.3d 852 (4th Cir. 2002).

10. See Wagner, supra note 1, at 1661-67. Noting the "disturbing … trend of reviewing courts to reverse agency policy decisions [when they are] set forth explicitly," Wagner warns that this trend could render "administrative policymaking an oxymoron." Id. at 1666 (quoting J. Mashaw & D. Harfst, THE STRUGGLE FOR AUTO SAFETY 227 (1990)).

11. See Wagner, supra note 1, at 1706-09.

12. John Hart Ely, DEMOCRACY AND DISTRUST (1980).

13. See, e.g., PUBLIC CITIZEN, SAFEGUARDS AT RISK: JOHN GRAHAM AND CORPORATE AMERICA'S BACK DOOR TO THE BUSH WHITE HOUSE 113 (2001), available at www.publicitizen.org/documents/grahamrpt.pdf (discussed infra); A. Raul, Judicial Oversight Can Restrain Regulators' Use of Junk Science, LEGAL BACKGROUNDER, Jan. 8, 1999 (EPA's use of 90%, rather than 95%, confidence interval in environmental tobacco smoke study is "junk science," "charade") (citing Wagner).

14. See Wagner, supra note 1, at 1619 n.21.

15. See id. at 1619-22.

16. Assumptions are also usually made about variability in susceptibility among humans.

17. Wagner cites as an example the Natural Resources Defense Council's (NRDC's) attack on Alar, which "cited the quantitative results of [its] statistics with misleading precision [and] failed to indicate the tremendous scientific uncertainty regarding its own risk assessment estimates." Wagner, supra note 1, at 1659.

18. Organizations with a substantial stake in maintaining and tightening health-related regulatory limits have begun a concerted effort not only to minimize the role of science in answering relevant questions but also to inveigh against industry-funded scientists and scientific work. See, e.g., Linda Greer & Rena I. Steinzor, Bad Science, ENVTL. F., Jan./Feb. 2002, at 28, 31 (arguing that "scientific evidence [can only be] called upon to resolve policy disputes where definitive answers are [] available" and that "EPA science is dominated by self-interested industry research and peer reviewed by self-interested industry experts"). NRDC's work in this connection has been supported by a grant from the Beldon Foundation "to implement [NRDC's] Public Interest Service Initiative, a campaign to remove industry-funded scientists from EPA advisory boards and to appoint scientists dedicated to protecting human health and the environment." BELDON FUND, 2000 GRANTS (2001), available at www.beldon.org/grants2000_07.html (last modified Feb. 26, 2001) (copy on file with author).

19. Wagner, supra note 1, at 1626.

20. Id. at 1632.

21. The discussion in this paragraph is drawn from Hays et al., Potential Uses of PBPK Modeling to Improve the Regulation of Exposure to Toxic Compounds, RISK POL'Y REP., July 8, 1998, at 37.

22. The discussion in this Article is drawn from Gary Marchant, The Genome Cometh, CHEMISTRY BUS., 2002, at 12. The many legal issues implicated by the field are elucidated in Lynn L. Bergeson et al., Toxicogenomics, ENVTL. F., Nov./Dec. 2002, at 28.

23. See http://lowdose.tricity.wsu.edu (last visited Jan. 28, 2003).

24. John Applegate & Celia Campbell-Mohn, Risk Assessment: Science, Law & Policy, 14 NAT. RESOURCES & ENV'T 219 (2000).

25. Id. at 272.

26. Id.

27. Id. at 222.

28. Id. at 221-22.

29. U.S. EPA, POLICY FOR RISK CHARACTERIZATION (1995).

30. 42 U.S.C. § 300g-1(b)(3)(B), ELR STAT. SDWA § 1412(b)(3)(B). This standard has been adopted or adapted by the Office of Management and Budget and federal agencies implementing the so-called Information Quality Act, Pub. L. No. 106-554, § 515. See 67 Fed. Reg. 8452, 8457-58 (Feb. 22, 2002).

31. U.S. EPA, MEMORANDUM ON RISK CHARACTERIZATION PROGRAM (1995).

32. Richard Pierce, The Inherent Limits on Judicial Control of Agency Discretion: The D.C. Circuit and the Nondelegation Doctrine, 52 ADMIN. L. REV. 63, 82, 86-87 (2000).

33. Id. at 84.

34. See id.

35. In other words, while I think it may be a bad idea for Congress to establish (or courts to infer) cost-effectiveness as a statutory criterion that a rule should have to meet to be valid, I do think that where an agency has discretion to consider cost-effectiveness in setting rules, its use of that discretion should be subject to at least some degree of judicial review under the arbitrary and capricious standard of the Administrative Procedure Act. For a highly nuanced defense of a substantive, qualified cost-benefit test for agency rulemaking, see CASS SUNSTEIN, THE COST-BENEFIT STATE (2002).

36. PUBLIC CITIZEN, supra note 13, at 112 (emphasis in original).

37. Id. at 113 (emphasis in original).

38. See id. at 115, fig. 1.

39. Id. at 114 (emphasis in original).

40. KEYSTONE CENTER & CENTER FOR SCIENCE, POLICY & OUTCOMES, NEW ROLES FOR SCIENCE IN ENVIRONMENTAL DECISION MAKING (2000).

41. Id. at 7.

42. A vastly preferable solution to the battle of the experts is described in Gail Charnley's paper Democratic Science: Enhancing the Role of Science in Stakeholder-Based Risk Management Decision Making (2000), at www.riskworld.com/Nreports/2000/Charnley/NR00GC00.htm. Based on a series of case studies where science was successfully integrated into public policy decisions, Charnley argues that these cases worked because stakeholders worked together, at the outset, to (1) articulate what questions had to be answered, (2) determine what factual information would count as an answer, and (3) identify which experts would gather the needed data. When the data were collected, the stakeholders were then able either to reframe and reorient the problem and goals, or to progress to a risk management decision. While not a complete solution, this process at least minimizes the likelihood that one or both sides will argue that the other dominated the process by choosing what questions to answer, what answers would qualify, or which experts did the work.

43. U.S. EPA, GUIDELINES FOR CARCINOGEN RISK ASSESSMENT 12-13 (1986).

44. 42 U.S.C. § 300g-1(b)(4)(A), ELR STAT. SDWA § 1412(b)(4)(A). Enforceable maximum contaminant levels are then established, taking practical considerations into account but remaining "as close to the [MCLG] as is feasible." Id. § 300g-1(b)(4)(B), ELR STAT. SDWA § 1412(b)(4)(B).

45. 59 Fed. Reg. 38668 (July 26, 1994).

46. Safe Drinking Water Act Amendments of 1996, Pub. L. No. 104-182, codified in relevant part at 42 U.S.C. § 300g-1(b)(3)(A), ELR STAT. SDWA § 1412(b)(3)(A). Those concerned about the science charade should be pleased by the introductory clause of this mandate—"to the degree that an Agency action is based on science"—thus clarifying implicitly that scientific issues are not relevant to the extent a decision is based on policy grounds.

47. 63 Fed. Reg. 15674, 15685 (Mar. 31, 1998).

48. Id. at 69389, 69401 (Dec. 16, 1998).

49. Chlorine Chemistry Council v. EPA, 206 F.3d 1286, 30 ELR 20473 (D.C. Cir. 2000). The court actually disagreed that employing a threshold model in this case would require EPA to "depart[] from a long-held policy":

This is a change in result, not in policy. The change in outcome occurs simply as a result of steadfast application of … EPA's Carcinogen Risk Assessment guidelines, stating that when "adequate data on mode of action show that linearity is not the most reasonable working judgment and provide sufficient evidence to support a nonlinear mode of action," the default assumption of linearity drops out.

Id. at 1290, 30 ELR at 20474.

50. National Primary Drinking Water Regulations: Stage 2 Disinfectants and Disinfection Byproducts Rule at 181-82 (draft proposed rule Oct. 17, 2001), available at www.epa.gov/safewater/mdbp/st2dis-preamble.pdf. Some critics of EPA continue to call for additional scientific study to learn more about the precise mechanism of chloroform's carcinogenicity before departing from EPA's nonthreshold model. See, e.g., Carolyn Raffensperger, How Much Chloroform Is Good for You?, ENVTL. F., May/June 2000, at 14. In the author's view, such critics begin to resemble their own caricature of industry, calling endlessly for more scientific study before any decisions can be made.

51. See www.epa.gov/iris.

52. VERSAR, INC., CHARACTERIZATION OF DATA UNCERTAINTY AND VARIABILITY IN IRIS ASSESSMENTS PRE-PILOT VS. PILOT/POST-PILOT 40 (2000), available at www.epa.gov/ncea [hereinafter VERSAR REPORT].

53. Id. at 41.

54. Id. The 16 assessments were randomly chosen from the IRIS database, 8 from before EPA initiated its 1995 "Pilot Project" and 8 from afterward.

55. 67 Fed. Reg. 10718 (Mar. 8, 2002).

56. Id. at 10719 (citing Inert Ingredients in Pesticide Products Policy Statement, 52 Fed. Reg. 13305 (1987)).

57. See www.epa.gov/ncea/raf/pdfs/cancer_gls.pdf at 2-34.

58. Ethylene glycol monobutyl ether, a.k.a. 2-Butoxyethanol.

59. The study found "some evidence" of different sorts of carcinogenic activity in male and female mice. 67 Fed. Reg. at 10720.

60. See www.epa.gov/iris/subst/0500.htm.

61. The Expert Panel's report is published at 21 INT'L J. OF TOXICOLOGY 9-62 (2002).

62. VERSAR REPORT, supra note 52, at 40.

63. Id. at 41.

64. Hays et al., supra note 21, at 39.

65. Pub. L. No. 104-170, tit. IV.

66. See 21 U.S.C. § 346a(b)(2)(C) (tenfold margin of safety is to be applied unless a lower margin "will be safe for infants and children [based on] reliable data").

67. This discussion is drawn from C.B. Cleveland et al., Risk Assessment Under FQPA: Case Study With Chlorpyrifos, 22 NEUROTOXICOLOGY 699 (2001).

68. A biomarker is a physical concomitant of exposure that may or may not indicate an adverse effect. For example, both perspiration and sunburn are biomarkers of exposure to sunlight; the former does not necessarily indicate an adverse effect, while the latter does.

69. See www.epa.gov/pesticides/op/chlorpyrifos-methyl/rev_toxicology.pdf.

70. UNITED NATIONS FOOD & AGRICULTURE ORGANIZATION/WHO, 1998 JOINT MEETING OF THE FAO PANEL OF EXPERTS ON PESTICIDE RESIDUES IN FOOD AND THE ENVIRONMENT/WHO CORE ASSESSMENT GROUP (1998) [hereinafter 1998 JOINT MEETING].

71. See Michael Dourson et al., Using Human Data to Protect the Public's Health, 33 REG. TOXICOLOGY & PHARMACOLOGY 234, 242-43, 250-52 (2001).

72. See id. at 237, 242; Cleveland, supra note 67, at 701.

73. See www.epa.gov/scipoly/sap/1998/december/epastmt.htm.

74. See http://yosemite.epa.gov/opa/admpress.nsf/blab9f485b098972852562e7004dc686/c232a45f5473717085256b2200740ad4?OpenDocument.

75. See id.

76. See www.epa.gov/oppsrrd1/op/chlorpyrifos/reevaluation.pdf; Cleveland, supra note 67, at 702.

77. 1998 JOINT MEETING, supra note 70, at 17.

78. See Dourson, supra note 71.

79. See H. Olson et al., Concordance of the Toxicity of Pharmaceuticals in Humans and Animals, 32 REG. TOXICOLOGY & PHARMACOLOGY 56 (2000).

80. Wagner, supra note 1, at 95.

81. Id. at 1677.

82. Id. at 1619.

83. Id. at 1624.

84. LOIS S. GOLD ET AL., HANDBOOK OF CARCINOGENIC POTENCY AND GENOTOXICITY DATABASES ch. 4 & tbl. 5 (1997) (excerpted in relevant part at http://potency.berkeley.edu/herp.html#excerpt). See generally NATIONAL RESEARCH COUNCIL, CARCINOGENS & ANTICARCINOGENS IN THE HUMAN DIET: A COMPARISON OF NATURALLY OCCURRING AND SYNTHETIC SUBSTANCES (1996).

85. See GOLD ET AL., supra note 84.

86. See id.

87. Wagner, supra note 1, at 1681-82.

88. NATIONAL RESEARCH COUNCIL, STRENGTHENING SCIENCE AT THE U.S. ENVIRONMENTAL PROTECTION AGENCY: RESEARCH MANAGEMENT AND PEER REVIEW PRACTICES 3 (2000).

89. See Applegate & Campbell-Mohn, supra note 24, at 222.

90. See, e.g., PUBLIC CITIZEN, supra note 13, at 25 ("The precautionary principle … is the underpinning of many of the regulatory statutes that Graham seeks to undermine ….").

91. See id.

92. Under the Clean Air Act Amendments of 1990, EPA was to report to Congress by November 1996 on the degree of "residual risk" remaining after imposition of technology-based controls on air toxics. If Congress did not act on that report, EPA was to proceed with regulations embodying an "ample margin of safety" eight years after promulgation of technology-based rules. See 42 U.S.C. § 7412(f)(1), (2)(A), ELR STAT. CAA § 112(f)(1), (2)(A).

93. 531 U.S. at 457, 31 ELR at 20512.

94. 42 U.S.C. § 7409(b)(1), ELR STAT. CAA § 109(b)(1).

95. 175 F.3d 1027, 29 ELR 21071 (D.C. Cir. 1999).

96. See Pierce, supra note 32, at 74.

97. 175 F.3d at 1034-38, 29 ELR at 21076.

98. See 531 U.S. at 475, 31 ELR at 20514-15.

99. Id. at 475-76, 31 ELR at 20515.

100. See id.

101. Id. at 494-95, 31 ELR at 20519 (citations omitted).

