33 ELR 10751 | Environmental Law Reporter | copyright © 2003 | All rights reserved


Legal Aspects of the Regulatory Use of Environmental Modeling

Thomas O. McGarity and Wendy E. Wagner

Thomas O. McGarity holds the W. James Kronzer Chair in Trial and Appellate Advocacy at the University of Texas School of Law. Wendy E. Wagner is a Professor of Law at the University of Texas School of Law. The authors are grateful for helpful comments from participants at a workshop on U.S. Environmental Protection Agency models convened by the Woodrow Wilson International Center for Scholars, and especially thank Pasky Pascual for valuable comments on an earlier draft.

[33 ELR 10752]

I. Introduction

At the request of the Woodrow Wilson International Center for Scholars, we have analyzed the past 30 years of judicial challenges to U.S. Environmental Protection Agency (EPA) rulemakings to identify the types of constraints the courts impose, primarily under the Administrative Procedure Act (APA),1 on EPA modeling exercises. After outlining the litigation, we distill several major lessons from the courts' review of EPA models. We also consider the extent to which the Data Quality Act (DQA)2 might alter the legal landscape and conclude that with respect to the judicial review of modeling exercises, the DQA is likely to have a limited effect, at most.

We begin with a brief description of models, judicial review, and the limits of our study before delving into the details of judicial review.

First, we assume in this analysis that most of the models used by EPA have similar key features that can serve as benchmarks for organizing judicial challenges to disparate modeling exercises.3 EPA's models generally comprise two main components: (1) points for the input of data; and (2) one or more assumed correlations or equations that link these data together (hereinafter "model assumptions," more accurately referred to as model algorithms by modelers) to yield a prediction or assessment.4 See Figure 1. Although ideally these assumed correlations would be firmly grounded in past data from well-conducted studies, because of the dearth of such data in many areas of public health and environmental science, the correlations in EPA's models are often based instead on untested or theoretical predictions and even policy judgments, e.g., conservative assumptions about linear dose-response for carcinogens or decisions about the appropriate percentage risk of exceedance that should be tolerated. Often models zigzag between these model assumptions, e.g., an equation that produces estimates of chlorophyll-a by combining data on river flow, nitrogen concentrations, temperature, and algal density, and the points for data input, e.g., nitrogen concentrations, river flow, temperature, and algal density. Both the data and the model algorithms involve a number of assumptions. For example, settling on one or more data sets will involve decisions about whether the data are representative of the larger system, whether the methods for data collection are reliable, whether the data set is large enough, and so on. When all of the assumptions and data are entered, the model produces a final prediction, i.e., a chlorophyll-a surrogate to predict fish kills; a second model could also be developed to better relate chlorophyll-a and fish kills. The modeler may select one of several statistical methods for analyzing the data, although some statistical techniques might not be appropriate for limited data sets.5 As new data are produced, the model can be recalibrated to refine the model algorithms, making the model a constant work-in-progress.6 The points at which affected parties might challenge models are provided in Figure 2. These form the basis for organizing the case law.

[SEE Figure 1: The Anatomy of a Model IN ORIGINAL]
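To make the anatomy sketched in Figure 1 concrete, the following toy illustration (in Python) assembles the two components described above—points for the input of data and an assumed algorithm linking them—followed by a crude recalibration step of the kind that makes a model a constant work-in-progress. All variable names, coefficients, and data are hypothetical; the sketch does not correspond to any actual EPA model.

```python
# Toy illustration of model anatomy: data inputs, assumed algorithm, prediction.
# All names, coefficients, and data are hypothetical, not any actual EPA model.
from dataclasses import dataclass

@dataclass
class SiteData:               # (1) points for the input of data
    river_flow: float         # e.g., cubic meters per second
    nitrogen: float           # e.g., mg/L
    temperature: float        # e.g., degrees Celsius
    algal_density: float      # e.g., cells per mL

def predict_chlorophyll_a(d: SiteData, coef=(0.8, 1.5, 0.3, 0.002)) -> float:
    """(2) An assumed correlation (model algorithm) linking the inputs.
    The linear form and default coefficients are placeholders; in a real model
    they would be fitted to past data or, where data are scarce, supplied by
    theory or policy judgment."""
    a, b, c, e = coef
    return a * d.nitrogen + b * d.algal_density / 1000 + c * d.temperature - e * d.river_flow

def recalibrate(observations, coef, rate=0.01):
    """Nudge the assumed coefficients toward newly observed data
    (a crude gradient step, for illustration only)."""
    a, b, c, e = coef
    for d, observed in observations:
        error = predict_chlorophyll_a(d, (a, b, c, e)) - observed
        a -= rate * error * d.nitrogen
        b -= rate * error * d.algal_density / 1000
        c -= rate * error * d.temperature
        e += rate * error * d.river_flow
    return (a, b, c, e)

sample = SiteData(river_flow=120.0, nitrogen=2.4, temperature=22.0, algal_density=850.0)
print(predict_chlorophyll_a(sample))  # the model's prediction (a chlorophyll-a surrogate)
```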

[33 ELR 10753]

Second, we assume that the primary basis for legal challenges will be the APA or statutory provisions that provide similar authority for challenging agency action in court. The courts' authority to review agency regulations stems from the APA or other authorizing legislation, unless the constitutionality of a regulation is being challenged. In the APA, the U.S. Congress designated the courts as its agents to ensure that the agency is following requirements set in the authorizing statute, e.g., deadlines; is not promulgating rules that violate or fall outside the bounds of the authority delegated to the agency by statute; is following prescribed processes for promulgating rules, such as notice and comment; and is not promulgating rules that are arbitrary in relationship to the facts. Since the courts are not directly accountable to the public the way that we assume the executive branch is, the courts must walk a fine line, ensuring that agencies develop policies faithful to the statute without intruding on the province of the executive branch to administer the laws as it sees fit or the province of the legislature to determine regulatory policy. Policy decisions, then, lie squarely within the province of the agency's discretion and may not be disturbed by the courts so long as those judgments are within the limits set by the authorizing statute and not otherwise arbitrary.7

Third, in order to avoid lumping too many disparate issues together, we limit the types of models we consider in this analysis to those that attempt predictions about the natural world. Challenges to cost-benefit analyses and other economic models are specifically excluded from consideration. We also, with few exceptions, consider only challenges to EPA's models and exclude cases involving challenges to environmental or public health models developed by other agencies. Because both the Consumer Product Safety Commission (CPSC) and the Occupational Safety and Health Administration (OSHA) have mandates that arguably require those agencies to shoulder a greater burden in supporting their models, we use these cases sparingly and signal their important differences.

II. Timing and Process Challenges

Before a reviewing court will reach the merits of a substantive challenge to an agency's modeling exercise, it will ordinarily dispose of timing and process challenges. The agency may at the outset take the position that judicial review is inappropriate because it has not yet engaged in final agency action or that the final action taken is not yet "ripe" for judicial review. This argument may be appropriate when a party challenges a modeling exercise that is not associated with any particular agency regulatory action. An affected party may take the position that an agency modeling exercise must be set aside because the agency did not follow procedures prescribed by statute for carrying out that modeling exercise. Such procedural challenges usually go to the transparency of the modeling exercise and the adequacy of notice and opportunity for public comment that the agency provided to outsiders.

A. Timing and Availability of Review

Unless a particular statute specifies otherwise, only "final" agency actions are subject to judicial review.8 During the course of judicial review of action that meets the test of finality, the court can consider allegations that the agency erred in some regard in earlier stages of the administrative proceeding.9 Ordinarily, an action is not "final" until it carries direct consequences for the person or entity attempting to challenge that action in court.10 If no consequences flow from the action until after the agency or some other agency takes additional action, the first action is not final. If, however, the agency action has "altered the legal regime" in a way that renders it highly likely that consequences will flow from that action, the action is final.11

The courts have further developed a "ripeness" doctrine that is designed to "prevent the courts, through avoidance of premature adjudication, from entangling themselves in abstract disagreements over administrative policies, and also to protect the agencies from judicial interference until an administrative decision has been formalized and its effects felt in a concrete way by the challenging parties."12 Under this doctrine, the court must examine "the fitness of the issues [raised by the challenge] for judicial decision and the hardship to the parties of withholding court consideration."13

In exploring the impact of the "finality" and "ripeness" requirements on judicial review of models, it may be useful to divide the way that EPA employs models into three categories: (1) EPA's use of a model in connection with an ongoing proceeding that has a defined endpoint, e.g., a rule, a pesticide tolerance, a state implementation plan (SIP) approval, a permit, or a pesticide cancellation; (2) EPA's use of a model for internal decision-making purposes, e.g., priority setting, resource allocation, or other internal guidance; and (3) EPA's use of a model for the purpose of disseminating information to the general public outside of an ongoing proceeding. Category (3) includes EPA's development or use of a model that might be employed in a future administrative proceeding with a defined endpoint (category (1)).

With respect to category (1), judicial review of the agency action in which it employs the model will generally be available [33 ELR 10754] once the action reaches the discrete endpoint. Thus, if EPA employs a model during a rulemaking exercise, judicial review of the use of the model will generally be available upon EPA's promulgation of the final rule. If the "arbitrary and capricious" or "substantial evidence" test defines the scope of review of the rule,14 then a party might ask a court to set aside the rule on one or more of the substantive grounds discussed below. The challenge might go to the model itself or to EPA's use of the model in the particular context of the issues addressed by the rulemaking.

Arguably, when judicial review of EPA's use of a model is available at the end of a relevant proceeding with a definite endpoint, it would not be appropriate prior to the completion of the proceeding. Thus, if EPA signaled in a notice of proposed rulemaking that it was planning to employ a particular model in a particular way in promulgating the rule, affected parties would be able to comment upon the appropriateness of EPA's use of the model, but would be obliged to wait until after EPA actually relied upon the modeling exercise in promulgating the final rule before seeking judicial review of the model or EPA's use of the model in that proceeding.15 Ripeness considerations would probably justify the Agency in denying a challenge to the model under the DQA until after completion of the rulemaking process. Indeed, the rulemaking proceeding itself would seem to meet the DQA requirement for an "administrative mechanism[] allowing affected persons to seek and obtain correction of the information."16

EPA's use of models in category (2) (internal decision-making purposes) has not traditionally been subject to judicial review.17 Until the Agency has used the model in a way that has "direct consequences" for a party with standing to challenge the action, there has been no "final agency action" to challenge. Purely internal use of a model is not likely to alter the legal regime in a way that renders it highly likely that adverse consequences will flow from that action without the Agency having to take additional action that will itself be subject to judicial review.18 Moreover, the especially high risk that judicial review of internal agency modeling exercises will result in the courts "entangling themselves in abstract disagreements over administrative policies" suggests that such disputes will very rarely be "ripe" for review.19

Category (3) presents a closer question. The dissemination of the results of a modeling exercise might well have "direct consequences" for parties, especially when it is associated with an official agency statement or "guidance document" that could be characterized by a court as a final agency action.20 On the other hand, merely making the results of a modeling exercise available to the public, e.g., on the agency's website, will often have no concrete impact on individuals or companies.

The leading case on ripeness in the category (3) context is Flue-Cured Tobacco Cooperative Stabilization Corp. v. U.S. Environmental Protection Agency.21 In that case, various representatives of the tobacco industry filed an action in federal district court challenging EPA's risk assessment for environmental tobacco smoke (ETS). Congress had required EPA to establish a research program to collect data on indoor air pollutants, but it had specifically declined to give EPA any authority to regulate indoor air quality. Despite the clear absence of any threat of direct regulation by EPA, the industry claimed that the risk assessment was final agency action that was ripe for review upon its publication in the Federal Register.22 The U.S. Court of Appeals for the Fourth Circuit disagreed.

The court first noted that since the plaintiffs' claims were based upon § 702 of the APA, they had to demonstrate that the risk assessment constituted "final" agency action.23 Employing the two-part test articulated by the U.S. Supreme [33 ELR 10755] Court in Bennett v. Spear,24 the Fourth Circuit held that the risk assessment marked the consummation of the Agency's decisionmaking process, but legal consequences did not directly flow from its publication. Although the risk assessment could certainly induce other federal agencies and state and local governments to limit smoking in public places, such regulations would be "the product of independent agency decisionmaking" and not "direct consequences" of EPA's action.25 Even if other governmental entities had relied upon the risk assessment in imposing smoking restrictions, those requirements were not "direct consequences" of the publication of the risk assessment, but were "the product of independent agency decisionmaking."26 In a statement of direct relevance to judicial review of modeling exercises, the court observed:

We do not think that Congress intended to create private rights of action to challenge the inevitable objectionable impressions created whenever controversial research by a federal agency is published. Such policy statements are properly challenged through the political process and not the courts.27

The same could probably be said for most EPA modeling efforts conducted outside the context of formal Agency proceedings.

The DQA is not likely to change the foregoing analysis of the availability of judicial review of modeling exercises in situations in categories (2) and (3). Under the statute, modeling exercises that the Agency plans to disseminate publicly are subject to the "administrative mechanism" that EPA has established to allow affected persons to seek and obtain correction of information contained in those exercises. Although the use of a model during the internal decisionmaking process would not ordinarily involve the "dissemination" of data, if the model or its results are "disseminated," an adversely affected party could argue that information that the Agency "maintained" to support the model or its application in the agency's internal resolution of an issue is subject to "correction" under the Agency's DQA procedures. In either case, the Agency will at some point either accept or reject the challenge. The rejection of a DQA challenge might be held to be a "final agency action" subject to review under the APA.

The Agency's rejection of a DQA challenge would mark the "consummation" of the Agency's decisionmaking process with respect to the challenge, but the entity seeking judicial review would still have to demonstrate that direct legal consequences would flow from the dissemination of the results of the modeling exercise. Although there may be cases in which direct legal consequences do flow from dissemination or maintenance of modeling exercises, they are likely to be rare.

B. Process Challenges

When EPA relies upon a model to support a particular rule, the APA requires the Agency to provide the public an opportunity to comment upon the assumptions and algorithms that are built into the model.28 In particular, the Agency must provide clear notice of the possibility that it will rely upon a particular model and provide sufficient information about that model to allow the public to comment upon its use of the model in the rulemaking proceeding.

In McLouth Steel Products Corp. v. Thomas,29 a steel company petitioned EPA to de-list a waste stream from its list of hazardous wastes. In rejecting the petition, EPA employed a vertical and horizontal spread (VHS) model to predict the "leachate" levels of the hazardous components of McLouth's waste. Noting that EPA had never subjected the model to notice and comment, the petitioner challenged the use of the model in this very limited rulemaking proceeding. The court agreed, rejecting EPA's contention that the model was just a policy statement and not a legislative rule. The court noted:

EPA has evidenced almost no readiness to re-examine the basic propositions that make up the VHS model, i.e., propositions about the numerical relationship between leachate concentrations and waste quantities on one hand and groundwater contamination on the other. It treated those issues as resolved.30

In response to EPA's argument that it had given the petitioner an opportunity to comment on the model during the rulemaking in which it had listed the petitioner's wastes, the court held that the Agency's very brief (almost hidden) reference to the model in both proceedings was not sufficient to put the petitioner on notice of its intention to rely on it. Furthermore, the Agency did not cure the problem by responding to the petitioner's comments during the rulemaking proceedings that accompanied the delisting petition, because the court believed that the Agency's response betrayed a "closed mind" with respect to the petitioner's critiques.

In Chemical Manufacturers Ass'n v. U.S. Environmental Protection Agency,31 the petitioners objected to EPA's decision to list methylene diphenyl diisocyanate (MDI) as an air toxic "for which high risks of adverse public health effects may be associated with exposure to small quantities." In selecting such pollutants, EPA relied upon its Integrated Risk Information System (IRIS) database to determine the inhalation reference concentration (RfC) for candidate pollutants and compared that concentration to the concentration resulting from the application of a generic air dispersion model to predict the ambient concentration, at a certain radius from the source, of a hypothetical, i.e., generic, air pollutant emitted from a typical industrial facility under average meteorologic conditions. In its Notice of Proposed Rulemaking, the Agency proposed, as a "reality check," an additional step in which it determined whether at least one [33 ELR 10756] facility emitting the toxic pollutant emitted at least 10 tons per year of that pollutant. The Agency originally thought that this would avoid designating as a high risk any pollutant actually emitted in a quantity too insignificant to pose a high risk to human health. EPA eliminated the reality check in the final rule because it only had actual emissions data from large facilities and some small facilities might emit more than 10 tons per year of some of the candidate pollutants.

The Chemical Manufacturers Association (CMA) argued that EPA's decision to remove the reality check of actual emissions had the effect of hinging the listing decision entirely upon the IRIS database and the generic air dispersion model, and the Agency had not put it on notice that it would give so much weight to the data and analysis that went into preparing that database. In addressing that claim, the court noted in dicta that EPA had adequately subjected the generic air dispersion model to notice-and-comment rulemaking:

The EPA set out the basis for the model in the notice of proposed rulemaking, stated its rationale for making various assumptions, requested comments on the assumptions it made, addressed significant comments in the background paper referred to in its final rule document, and revised some modeling parameters based upon the comments it received.32

The court's assessment provides a useful checklist for providing notice of modeling exercises in the rulemaking context.

In sum, an EPA modeling exercise conducted in the context of notice-and-comment rulemaking should not suffer reversal on notice grounds if the Agency is careful to describe the model in some detail; identify the assumptions upon which the model relies; explain why those assumptions are valid in the particular context in which it is applying the model; and specifically request comments on the validity of the assumptions and their use in the modeling exercise.

III. Substantive Review of Agency Models

A. Statutory Tests for Substantive Judicial Review

The APA specifies the scope of substantive judicial review of agency action, and reviewing courts are obliged to comply with the APA, absent a conflicting provision in the agency's own statute.33 Unfortunately, several environmental statutes do specify a different scope of judicial review, and this may have an impact upon the stringency with which the courts review EPA's modeling exercises. Under the APA model, the scope of judicial review of formal agency action (adjudication and formal rulemaking) is "substantial evidence on the record as a whole."34 For informal action (rulemaking, informal adjudication, agency guidance, and policy statements), the scope of review is "arbitrary and capricious, an abuse of discretion or not otherwise in accordance with law."35

Under the "substantial evidence" test, the court must consider the entire record including "facts that detract from the agency as well as those that support it."36 The court must determine "whether the record contains 'such relevant evidence as a reasonable mind might accept as adequate to support a conclusion.'"37 If it does, the agency has sustained its burden of adducing "substantial evidence on the record as a whole" and the rule must be affirmed. If not, the rule must be vacated.

Under the "arbitrary and capricious" test, the court must conduct a "searching and careful" review of the record and must set aside the agency action if

the agency has relied on factors which Congress has not intended it to consider, entirely failed to consider an important aspect of the problem, offered an explanation for its decision that runs counter to the evidence before the agency, or is so implausible that it could not be ascribed to a difference in view or the product of agency expertise.38

Because the two tests were meant to be applied in different procedural contexts, the courts applying the APA model did not have to answer the question whether one test was more stringent than the other.

Congress has, however, muddied the waters in several statutes, including the Toxic Substances Control Act, the Consumer Product Safety Act (CPSA), and the Occupational Safety and Health Act (OSH Act), by prescribing "substantial evidence" review for informal rulemaking. This has raised a "lively debate among scholars and judges" over the question whether Congress meant for courts applying such statutes to use a more stringent scope of review than in the statutes that allow for "arbitrary and capricious" review.39 The Court has suggested that the arbitrary and capricious test is "more lenient" than the substantial evidence test.40 The U.S. Court of Appeals for the Fifth Circuit has squarely held that "Congress put the substantial evidence test in the statute because it wanted the courts to scrutinize the Commission's actions more closely than an 'arbitrary and capricious' standard would allow."41 Thus, when an agency is administering a statute that specifies a "substantial evidence" scope of review, it would be well advised to take greater care to ensure that any factual premises underlying its modeling exercises are well supported in the administrative record.42

[33 ELR 10757]

B. Substantive Challenges

The general rule for the courts' substantive review of technical models under the informal rulemaking provisions of the APA is deference to the agency's technical and policy choices as long as the agency explains its choices, especially the controverted ones, in an accessible and complete way.

As long as an agency reveals the data and assumptions upon which a computer model is based, allows and considers public comment on the use or results of the model, and ensures that the ultimate decision rests with the agency, not the computer model, then the agency use of a computer model to assist in decision making is not arbitrary and capricious.43

Some courts, particularly in the earlier years of regulation, have not even insisted on a full explanation.44

Despite the high level of judicial deference, EPA's models are frequently subject to tedious, technical nitpicking, as Figure 2 demonstrates. Virtually every substantive challenge mounted against an EPA model involves multiple technical disagreements on virtually every facet of the model.45 In several cases, moreover, the disagreements appear to be manufactured challenges that enjoy little support from the record.46 Courts often consider these challenges in detail, but if after this analysis the courts discover that the disagreement concerns a battle of the experts, they typically defer to the Agency's judgment. "It is 'not for the judicial branch to undertake comparative evaluations of conflicting scientific evidence.' This Court must not undertake an independent review of EPA's scientific judgments; our inquiry focuses only on whether the Agency has met the statutory requirement for 'sufficient evidence.'"47

[33 ELR 10758]

[SEE Figure 2: A Flow Chart of Model Challenges IN ORIGINAL]

[33 ELR 10759]

The case law that emerges under this general standard of deferential review provides three major lessons. First, the success of the Agency's model typically depends on how well the Agency responds to relevant comments in the preamble to its final rule. Of course, if the Agency does not receive critical comments on its model during the notice-and-comment period, the model is likely to survive judicial review because opponents have not entered their objections at the appropriate time. If the Agency has received critical comments on its models during the notice-and-comment period, it must respond directly and in detail to the comments in order to ensure that its model survives review. Successful challenges to EPA's modeling exercises have almost uniformly involved unhelpful and nonspecific Agency responses to plausible and well-documented criticisms.48

Second, judicial review of Agency models is not always consistent. We identify several outlier cases on both tails of the bell curve—those providing unbridled deference to agencies in situations in which the Agency rationale was not especially compelling and those evincing very little deference to agencies in situations in which the court's rationale was not especially compelling. These cases, we conclude, are not necessarily helpful in gauging what most courts will do, and even the worst-case decisions, where agencies receive no deference, provide little constructive guidance to agencies in anticipating and avoiding damaging judicial review.

Third, there appear to be two general areas of vulnerability in Agency models where even the mainstream courts do not tend to defer to agencies. The first set of challenges alleges that an Agency model is not applicable to a particular subset of industries, activities, or locations. Once the petitioner has made a case that the model fails in a particular circumstance, i.e., an argument that the model is arbitrary with respect to a subset of activities, courts place the burden on the Agency to show that its model is nevertheless rational. The courts appear to give little deference to the Agency's responses to these discrete challenges. Second, a surprising number of courts second-guess an Agency's policy assumptions built into risk assessment models. These cases are also difficult for the Agency to defend, as discussed below in section V.

The substantive challenges to EPA's models, and, where relevant, the health or public risk models developed by other agencies, are discussed below according to the types of challenges. The categorization of disputes is imperfect, but provides a basic framework for organizing and analyzing the cases.

1. Challenges to the Assumptions Embedded in the Model

Petitioners routinely dedicate the bulk of their firepower to challenging fundamental features of the agency's model, such as the model's level of abstraction from reality; the model's specific application to a subset of activities, wastes, or locations; and working assumptions of the model that cannot be validated. If the petitioner does not provide specific evidence of how the model will fail, but instead suggests only that the model could be more accurate, the courts typically uphold the model.49 Most of the early litigation against EPA's air models claimed that the models were arbitrary because they involved a large source of error. The courts generally rejected these challenges because the petitioners failed to show that there was a substantially better model or a more accurate set of assumptions that EPA could have utilized, and because the courts recognized that models are inherently imperfect.50 Once the petitioner does prove that the model will fail in a subset of cases, however, the courts generally demand that the Agency provide evidence (not speculation) that the model is nevertheless rationally related to its initial purpose. "If the methodology [used in preparing a model] is challenged, [the Agency must] 'provide a "complete analytic defense."'"51 Courts will invalidate a model if "there is no rational relationship between the model chosen and the situation to which it is applied."52 Thus, unless the Agency can explain why the model should nevertheless apply or show that it retains substantial predictive power, the Agency will lose the challenge.

Three varieties of model challenges, all of which follow this same general pattern for judicial review, are discussed below.

a. The Agency's General Model Is Oversimplified

EPA models are often challenged for making "arbitrary" oversimplifications. Most of these challenges do not succeed because the courts recognize the need for abstraction and defer to the Agency's judgment provided the Agency explains why it chose the level of abstraction that it did. In one of the early cases involving EPA modeling exercises, the U.S. Court of Appeals for the District of Columbia (D.C.) Circuit explained:

Any model is an abstraction from and simplification of the real world. Nevertheless, administrative agencies have undoubted power to use predictive models. [Citations omitted.] We will … look for evidence that the agency is conscious of the limits of the model. [Citation omitted.] Ultimately, however, we must defer to the agency's decision on how to balance the cost and complexity of a more elaborate model against the oversimplification of a simpler model. We can reverse only if the model is so oversimplified that the agency's conclusions from it are unreasonable.53

[33 ELR 10760]

The court held that the record supported assumptions in a U.S. Department of Energy (DOE) economic model that EPA used to determine the feasibility of its phaseout of tetraethyl lead in gasoline for small refiners.

(1) The Agency Explains Its Oversimplifications

The courts have upheld models against challenges alleging that the models did not take into account important variables that affected their predictive power.54 In each of these cases, the Agency provided explanations for its refusal to make the requested adjustments, referring either to other features of the model that sufficiently accommodated the concern55 or to the lack of data upon which to base the requested adjustments.56 In affirming EPA's decisions, the courts generally base their deference on the Agency's response to the petitioners' allegations during the rulemaking57 and the courts' reluctance to second-guess the Agency's scientific decisionmaking.58

(2) The Agency Does Not Explain Its Oversimplifications

By contrast, in cases where the courts invalidate EPA's models because the Agency did not make adjustments to its generic assumptions, EPA, in the courts' view, did not provide an adequate explanation for failing to make the requested adjustments. For example, in Leather Industries of America, Inc. v. U.S. Environmental Protection Agency,59 the court set aside EPA's standards for "clean sludge" disposal on the ground that the Agency failed to explain the basis for its estimates for the rate and duration of sludge application. The court held that the Agency's failure to account for the established difference in the application rate for one type of sludge (heat-dried sludge) within the larger category of clean sludges was arbitrary. EPA had acknowledged a significant difference in the rate of application of heat-dried sludge relative to other types of sludge, but it had declined to take that difference into account in calculating generic application rates, which the court also found to be based on unsupported approximations.60

In Appalachian Power Co. v. U.S. Environmental Protection Agency (II),61 the petitioner successfully challenged EPA's model for predicting growth rates for electricity usage in setting emissions controls for a nonattainment area. While noting EPA's authority as a general matter to develop generic, abstracted models for such predictions, the court found EPA's application of the model in this instance to be arbitrary because the model predicted a decrease in usage that was directly contradicted by the available evidence. EPA's inability to explain why it relied on the model in light of this apparent error led the court to remand that portion of EPA's decision "so that the agency may … explain why results that appear arbitrary on their face are, in fact, reasonable determinations."62

In a decision that is probably an outlier, the district court in Flue-Cured Tobacco v. U.S. Environmental Protection Agency63 condemned EPA's risk assessment for ETS in part [33 ELR 10761] because the Agency failed to explain its basis for concluding that similarities between indirect exposures to ETS by non-smokers and direct exposures to tobacco smoke by smokers supported its conclusion, which was also based upon a meta-analysis of epidemiological data, that ETS was a Group A human carcinogen. In the court's view, EPA had not adequately explained why it relied upon data from direct smoking to support the conclusion that ETS caused lung cancer, but declined the industry's suggested model for using data from direct smoking in its attempt to quantify the lung cancer risks of ETS exposure.64

b. Adjustments to Generic Models When Applied to Particular Circumstances

Many challenges to EPA models involve allegations that the Agency acted arbitrarily in failing to make adjustments in applying a generic model to a particular chemical, location, or facility. Since individual cases often lie outside "average" assumptions, one might expect courts to have some sympathy for the challenges in these cases, and the cases bear this prediction out. Despite the generally recognized proposition that "it is no criticism of a model 'that [it] does not fit every application perfectly,'"65 most of the challenges in which the challenger presents convincing evidence that a model mispredicts reality when applied to the particular setting of the case are successful absent equally compelling counterevidence or other justifications. In these cases, the courts essentially conclude that the model is being applied outside of its "application niche," a term coined by modelers to refer to the set of conditions for which the model was designed to be useful.66 The courts "cannot excuse the EPA's reliance upon a methodology that generates apparently arbitrary results particularly where … the agency has failed to justify its choice."67

(1) Peculiarities of the Particular Location

One basis for challenging a model is the simple geographical fact that the model applies to broad and varied terrain, yet the application in a particular case is to an unusual type of terrain that is likely to deviate to a considerable extent from the mean.

(a) The Agency Adequately Justifies Its Generic Model for Application to Specific Settings

In several cases, petitioners failed to provide concrete support for their argument that the peculiarities of the location needed to be considered before applying a contested generic air model, and their challenges failed.68 In another case, the U.S. Forest Service (Forest Service) satisfied the court that its application of a generic model to a particular location did adequately account for unique features of the resource, in large part because the Forest Service supplemented its model with other predictive tools and studies.69

(b) The Agency Does Not Justify Application of a Generic Model to a Peculiar Location

One of the most infamous and arguably aberrational model cases, Ohio v. U.S. Environmental Protection Agency,70 involved two reviews by the U.S. Court of Appeals for the Sixth Circuit of a computerized atmospheric model, CRSTER, used to predict the dispersion of emissions from two power plants located on Lake Erie. The court observed that with respect to modeling, the legislative history of the Clean Air Act (CAA) instructed the courts to "conduct a 'searching review'" of the Agency's model.71 Accordingly, the court concluded that EPA had not adequately demonstrated that the CRSTER model took into account the "specific meteorological and geographic problems" of the plants, both of which were situated on the shores of Lake Erie. It was therefore arbitrary and capricious for EPA to allow a 400% increase in emissions "without evaluation, validation, or empirical testing of the model at the site." In a separate opinion, however, the court took pains to point out that "by no means does the court insist that all models be validated at all sites. We find only that the accuracy of the CRSTER at the site has not been sufficiently demonstrated to meet the arbitrary and capricious standard of review."72

(2) Peculiarities of the Particular Activity

Models often assume average features of regulated activities, such as disposal or emitting patterns. The Agency's aggregation of many industries under the umbrella of one model is a frequent and often successful basis for attack.

[33 ELR 10762]

(a) The Agency Provides a Justification for Aggregating Different Sized Industries Within a Single Model

In Appalachian Power Co. (II), the failure of the petitioners to provide any evidence that finer distinctions in the model would matter to the outcome caused the court to disregard the challenge.73 In Small Refiner Lead Phase-Down Task Force v. U.S. Environmental Protection Agency,74 a more substantive challenge to an EPA rule setting lead-content limits for leaded gasoline produced by small refineries, petitioners challenged EPA's decision to average all small refineries together regardless of whether they possessed octane-enhancing equipment. The court upheld this aggregation feature of EPA's model because the Agency explained the decision and accounted for it by documenting a smoothly functioning market in which the small refineries could buy the needed components or lead credits.

(b) The Agency Failed to Justify Its Application of a Model to an Activity That the Model Did Not Contemplate

The most compelling challenge to EPA's inflexible application of a generic model to a particular activity is Chemical Manufacturers Ass'n.75 In that case, industry challenged EPA's use of a generic air dispersion model to predict the concentrations of an air toxic, MDI, in the ambient air surrounding emitting facilities and presented uncontested evidence that MDI is a solid at 20 (and even 37) degrees Centigrade and thus would not be emitted in a gaseous form. Even in light of the "conservative" CAA mandate that directs the Agency to identify pollutants for "which exposure to small quantities 'may' be associated with significant adverse human health effects,"76 the court found that the petitioners had successfully proved that EPA's application of the model to MDI emissions was arbitrary. EPA's explanation provided no "rational relationship between the model and the known behavior of the hazardous air pollutant to which it is applied."77

In several other successful challenges, all involving EPA's Toxicity Characteristic Leaching Procedure (TCLP) under the Resource Conservation and Recovery Act (RCRA), petitioners have argued that EPA applied the TCLP test, which simulates "worst-case mismanagement" of disposal of hazardous wastes at a landfill, to wastes that would be disposed of in ways that departed from the TCLP test's worst-case assumptions. Petitioners thus essentially argue that the TCLP test (a simple predictive model as opposed to a complex computer model) is being used outside its "application niche" for disposal conditions not accounted for in that test's original assumptions.

One pair of cases reveals EPA's continuing struggle to support its assumed, worst-case landfill disposal assumptions for the disposal of mining waste. In Edison Electric Institute v. U.S. Environmental Protection Agency,78 the court held that the worst-case mismanagement assumptions in EPA's generic TCLP test were justified by RCRA,79 but agreed that the Agency had failed to justify application of TCLP to a particular set of mineral wastes in light of industry's showing that the worst-case assumption was inapplicable. The court explained that "EPA need not demonstrate that mineral wastes are typically or commonly deposited in municipal solid waste landfills, but the Agency must at least provide some factual support for its conclusion that such a mismanagement scenario is plausible."80 The modeling exercise would also pass muster "if there were evidence on the record that mineral wastes were exposed to conditions similar to those simulated by the TCLP."81 The Agency, however, made no effort to "justify a conclusion that mineral wastes ever come into contact with any form of acidic leaching medium."82

In Association of Battery Recyclers, Inc. v. U.S. Environmental Protection Agency,83 the court considered a similar challenge, but by that time EPA had disseminated a report in which it provided concrete evidence that at least some of the mining waste did in fact end up in municipal landfills precisely as anticipated in the TCLP test.84 Even though the evidence that the report relied on was limited and inconclusive, i.e., the Agency relied on eyewitnesses who could not document the characteristics or custody of the wastes, the court found that EPA's effort satisfied the "rational relationship" test.85 However, another challenge in the same case, to the model's application of this "worst-case assumption" to a subset of mineral wastes (manufactured gas plant wastes), was successful. Because this type of waste has not been generated for over 40 years, it would be disposed of prospectively only as remediation waste. EPA had no concrete evidence, however, that there was any active disposal of this waste into landfills. The court thus concluded that EPA's application of the TCLP to this subset of wastes was arbitrary and capricious because EPA had not proved that there was a rational relationship between the disposal of the waste and the worst-case landfill mismanagement scenario in the TCLP test.86

In Columbia Falls Aluminum Co. v. U.S. Environmental Protection Agency,87 the Agency's attempt to adopt the TCLP test as the standard for treatment of spent potliner waste for land disposal was similarly invalidated.88 Since the only disposal site for this waste was a monofill that was chemically different from the municipal landfill that the TCLP model simulated, the court invalidated EPA's application of the model to this particular waste stream. The failure of EPA to respond to comments and then justify its model in light of the significant differences between the [33 ELR 10763] worst-case assumptions in the TCLP test and the monofill where potliner wastes were disposed appeared to be the primary basis for the courts' conclusion that EPA's application of the TCLP-based treatment standard to the wastes was arbitrary.89 Quoting from previous cases, the court reasoned that "models need not fit every application perfectly, nor need an agency 'justify the model on an ad hoc basis for every chemical to which the model is applied.' If, however, 'the model is challenged, the agency must provide a full analytical defense.'"90

c. EPA's Working Assumptions in Models, Often Based on Policy, Are Inappropriate

Since models are placeholders for reality, they necessarily involve assumptions about the real world. Some of these assumptions are proxies for reality and are successfully rebutted by petitioners, particularly during the application of the model to particular circumstances. Other assumptions serve as more indefinite placeholders because of prevailing scientific uncertainties. Human health risk assessments, by necessity, generally contain more of these policy assumptions than other models because of the impediments to validating assessments of chronic toxic effects on humans.

Most environmental statutes demand, or at least strongly suggest, that agencies "err on the side of public health" in filling the factual gaps left by scientific uncertainties.91 Litigation under these mandates thus rarely involves challenges to whether the Agency is entitled to use unsupported assumptions, provided they are consistent with the statute's protective directive, in its risk assessments and other protective models. By contrast, when the statute suggests a different policy "lean" by placing a heavy burden on the Agency to justify regulation, the courts appear less willing to tolerate EPA's evocation of uncertainties to justify regulations and instead demand some evidence to support its assumptions.92

A good example of the latter situation is the much-criticized case of Gulf South Insulation v. Consumer Product Safety Commission,93 in which the court overturned the CPSC's ban on the use of urea-formaldehyde foam insulation (UFFI) in residences and schools. Following its prior precedent,94 the court placed the burden on the CPSC to demonstrate to the court's satisfaction that the benefits of the regulation outweighed the costs that it imposed on the housing industry. The court rejected the Agency's application of the "Global 79" risk assessment model to the results of an animal carcinogenicity test and data that the Agency had gathered on human exposure to formaldehyde in homes to predict that the increased risk of cancer to a person living in a UFFI home for nine years would range from 0 to 51 in one million. The court also cast doubt on the validity of the CPSC's reliance on animal studies to estimate human cancer risks.95 The court concluded: "To make precise estimates, precise data are required."96
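Although the court found the Agency's inputs too imprecise, the arithmetic structure of such estimates is simple: a lifetime excess cancer risk estimate is commonly the product of an upper-bound potency (slope) factor and an estimated lifetime average daily dose. The numbers below are a generic, hypothetical illustration of how a figure in the tens-per-million range can arise, not a reconstruction of the Global 79 calculations:

```latex
\text{Risk} \;\approx\; q^{*} \times \mathrm{LADD}, \qquad
q^{*} \text{ in } (\text{mg/kg-day})^{-1}, \quad \mathrm{LADD} \text{ in mg/kg-day}.
% Hypothetical numbers: q^{*} = 0.017\ (\text{mg/kg-day})^{-1} and
% \mathrm{LADD} = 0.003\ \text{mg/kg-day} give
% \text{Risk} \approx 5.1 \times 10^{-5}, i.e., roughly 51 in one million,
% while a near-zero exposure estimate drives the estimate toward 0,
% producing a range like the 0-to-51-in-one-million figure in Gulf South.
```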

Although the Agency's statute did not contain an explicit requirement that the CPSC make "precise" risk estimates, it did provide a cost-benefit decision criterion and it employed a "substantial evidence" scope of judicial review. Both statutory provisions were, in the court's view, signals that a reviewing court should view Agency decisionmaking less deferentially. The court cited prior precedent for the proposition that "Congress put the substantial evidence test in the statute because it wanted the courts to scrutinize the Commission's actions more closely than an 'arbitrary and capricious' standard would allow."97 Under the cost-benefit balancing approach that the statute mandated, it was not enough for the Agency to explain why it rejected the epidemiological data that the industry submitted to rebut the Agency's modeling exercise. The Agency had the affirmative obligation to justify its application of the Global 79 model to the "imprecise" animal data and human exposure estimates.98 The court was not satisfied with the justification that the Agency proffered.

Working policy-based assumptions in agency models are challenged in two overlapping ways. First and most generally, the agency's selection of one assumption over plausible alternative assumptions is challenged. Petitioners might argue, for example, that the conservative assumptions are too conservative or less plausible than alternative assumptions. Second, the challengers might offer evidence that effectively [33 ELR 10764] rebuts the agency's assumption. In these cases, the protective orientation of the statute would seem critical, but courts often abstract away from the statutory context. The agency "may resolve even substantial factual uncertainties in the exercise of its informed expert judgment; but it may not tolerate needless uncertainties in its central assumptions when the evidence fairly allows investigation and solution of those uncertainties."99

(1) Has the Agency Selected a Permissible Policy-Based Assumption Among Alternatives?

The courts vary considerably in their reaction to challenges that the agency did not select the best policy assumption among alternatives. In a challenge to water quality standards in American Iron & Steel Institute v. U.S. Environmental Protection Agency,100 the petitioners argued that the Agency adopted assumptions that were "too conservative" and by implication more conservative than necessary.101 In American Forest & Paper Ass'n v. U.S. Environmental Protection Agency,102 the petitioners challenged EPA's reliance on conservative assumptions regarding how to extrapolate from toxicity studies on animals to humans, assumptions that were pivotal to EPA's refusal to delist a hazardous substance under the CAA.103 The courts rejected both of these challenges, finding that the conservative assumptions were well supported and fully justified by EPA.104

In two other (possibly outlier) rulings, the courts were sympathetic to arguments that EPA's working assumptions, made necessary by prevailing uncertainty, were not justified or the best among alternatives. As discussed above, the Fifth Circuit in Gulf South held that the CPSC's conservative policy-based assumption that one large rat study could be extrapolated to humans in the wake of inconclusive epidemiological studies was insufficient.105 The court's opinion, which flew in the face of many cases upholding the frequently encountered Agency assumption that laboratory animal data are relevant in assessing human risks,106 did not point to any language in the Agency's statute that was inconsistent with the Agency's policy choice.

In Leather Industries, discussed above, the D.C. Circuit not only invalidated EPA's model for failing to make needed refinements to address specific activities, but it also struck down the Agency's working assumption regarding the phytotoxicity of selenium, an assumption made necessary by limitations in available evidence. The court agreed with petitioners that EPA's seat-of-the-pants determination of the level at which selenium would be toxic to plants was unsupported and potentially contradicted by one study.107 Even though EPA relied upon some preliminary research and the absence of evidence that the levels were in fact "safe" to justify the level it selected, the court held that more justification was required. "While the EPA 'may err' on the side of overprotection," it "may not engage in sheer guesswork."108 The court did not suggest, however, that the Agency had ignored relevant information, nor did it explain how EPA should go about gathering additional information.

(2) Has the Evidence Effectively Rebutted the Working Assumption in the Agency's Model?

The courts acknowledge that there will be evidence that tends to refute the agency's working assumptions, but they vary in determining when that evidence has developed to the point that it effectively rebuts the assumption and renders the agency's model arbitrary, at least in the absence of an adequate justification or explanation.

EPA was successful in Huls America Inc. v. Browner109 in defending its refusal to delist a toxic substance, isophorone diisocyanate (IPDI), from the list of extremely hazardous substances under the Emergency Planning and Community Right-To-Know Act (EPCRA). The petitioners argued and EPA conceded that the toxicity studies of the effects of IPDI aerosols on animals did not approximate realistic exposure scenarios except in the most unusual, catastrophic circumstances, like an explosion. The court nevertheless held that EPA's reliance on these studies to list IPDI was justified because the Agency acknowledged the limitations of the study and the need to rely on worst-case circumstances for IPDI for a variety of reasons.110

By contrast, in two (possibly outlier) decisions, the D.C. Circuit was unwilling to defer to the Agency's working assumptions in the face of contradictory evidence. In Chlorine Chemistry Council v. U.S. Environmental Protection Agency,111 the D.C. Circuit set aside EPA's assumption of a linear dose-response model for determining the carcinogenicity of chloroform because of a developing consensus by a scientific expert panel that chloroform was not harmful at low doses.112 The court held that the best available evidence established that EPA's zero standard, based on a working assumption of a linear dose-response curve, was arbitrary. The court was not persuaded by EPA's arguments that the expert panel's opinion had not been subject to full Science Advisory Board (SAB) review, that EPA had actually moved to vacate its decision to adjust the standard in light of the consensus, and that no "safe" level had been identified quantitatively.113
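The practical stakes of the contested assumption are easy to state: under a linear, no-threshold model, some excess risk is predicted at every nonzero dose, so only a zero level eliminates predicted risk; under a threshold model, doses at or below the threshold are predicted to be harmless, and a nonzero standard can be defended. Schematically (the symbols are generic placeholders, not EPA's actual chloroform parameters):

```latex
\text{Linear, no threshold:}\quad R(d) = \beta d,\ \beta > 0
  \;\Rightarrow\; R(d) > 0 \ \text{for every } d > 0.
\qquad
\text{Threshold:}\quad R(d) =
  \begin{cases} 0, & d \le d_0 \\ g(d - d_0), & d > d_0 \end{cases}
  \;\Rightarrow\; R(d) = 0 \ \text{for } d \le d_0.
```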

[33 ELR 10765]

In Leather Industries, petitioners also successfully argued that EPA had unjustifiably adopted ultra-conservative exposure assumptions (namely that a small child would ingest sludge in a high-occupancy setting) in assessing the risks posed by disposal of the industry's treated sludge. The court held that EPA lacked "adequate support" for adopting conservative assumptions when the evidence suggested that "a significant proportion of sewage sludge application involves sites with low potential for public and child contact."114 The court was unsympathetic to EPA's argument that in a "'rulemaking of staggering complexity, the Agency was not required to refine its analysis so precisely as to devise a separate exposure analysis for children who ingest sludge on highway medians or in cemeteries.'"115 Although EPA was "not held to a standard of precise refinement," it was held to a standard of "rationality," and it was obliged to supply "a reasoned basis for its regulatory choices" in light of petitioners' evidence.116

2. Challenges to the Data and Statistics Used in a Model

Petitioners face an uphill battle in challenging the validity of inputs to agency models and are rarely successful. With few exceptions, the courts have afforded the agencies "great deference" in this area of technical decisionmaking. "Where the agency presents scientifically respectable evidence which the petitioner can continually dispute with rival and, we will assume, equally respectable evidence, the court must not second-guess the particular way the agency chooses to weigh the conflicting evidence or resolve the dispute."117

a. Challenges to Data or Studies Inputted Into the Model

(1) Unrepresentative Data

In an extensive challenge to EPA's Final Water Quality Guidance for the Great Lakes, industry groups argued in American Iron & Steel Institute that the Agency impermissibly provided that regulation could be based on only one measurement of a discharge in determining the concentration of pollutants in effluent, rather than requiring multiple data sets that better represented variability. The court rejected the challenge because, among other things, EPA had conditioned the regulation on the requirement that the permitting authority first make a finding that the single measurement was "valid and representative." This condition provided the necessary assurance that the practical application of the regulation would be based upon sound data.118

As discussed above, the CPSC's attempt to ban UFFI did not receive a great deal of deference from the Fifth Circuit in Gulf South, a much-criticized case. The court held, among other things, that the CPSC's use of a study to determine potential formaldehyde exposure for use in its carcinogenicity model was arbitrary because the methods for measuring the formaldehyde in the homes did not provide assurance that the samples would be representative. Moreover, the homes themselves were not representative because two-thirds of them were homes that had previously lodged complaints about odors from the insulation. In the court's view, the CPSC did not adequately "explain its reliance on a database comprised largely of complaint houses," nor did it justify its failure to conduct a study of "randomly selected UFFI homes before issuing the product ban."119

(2) Flawed Studies or Calculations

Courts generally give EPA "wide latitude in determining the extent of data-gathering necessary to solve a problem."120 They generally "defer to an agency's decision to proceed on the basis of imperfect scientific information, rather than to invest the resources to conduct the perfect study."121 Consistent with this general rule, challenges to the quality of the data that EPA inputs into its models are rarely successful.122 In a number of challenges to the validity of individual studies incorporated in agency models, the courts have afforded EPA considerable leeway, in some cases going no further than labeling the dispute as "scientific" and deferring to the Agency.123 Even when the courts probe deeper into technical disagreements, they still tend to defer to the Agency when a dispute appears to involve a battle of the experts over the adequacy of data.

The court in Cement Kiln Recycling Coalition v. U.S. Environmental Protection Agency124 upheld EPA's reliance on [33 ELR 10766] emissions data generated during trial burns under RCRA for use in determining "best-case" emissions capabilities under the CAA, even though the RCRA data were not generated under "best-case" conditions. In addition to reciting the general rule of deference quoted above, the court noted that "we think it not at all unreasonable for the Agency to read this language as permitting it to rely on 'information' in its [existing] database—i.e., the RCRA data."125

In an extensive challenge to EPA's water guidance for the Great Lakes described above, industry petitioners in American Iron & Steel Institute also challenged several studies that formed the empirical basis for various factors, e.g., bioconcentration factors for particular chemicals, that EPA employed in its water quality models.126 Without discussing the technical details of the disagreement, the court recognized the issues as "a question of scientific judgment" upon which the court "must be 'at its most deferential.'"127 Since the Agency responded directly to the criticisms, explained the basis for its rejection of industry's complaint about flaws in a fish bioconcentration study,128 and agreed to rely on a newer study (per industry's suggestion) for determining the cancer potency factor for polychlorinated biphenyls (PCBs),129 the court found EPA's responses adequate.
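For readers unfamiliar with the disputed parameter, a bioconcentration factor is simply a ratio, and a water quality criterion can be back-calculated from it. The sketch below uses hypothetical numbers to show why a flaw in the underlying fish study ripples directly into the final standard.

def bioconcentration_factor(conc_in_tissue, conc_in_water):
    # BCF: chemical concentration in fish tissue relative to the water.
    return conc_in_tissue / conc_in_water

def allowable_water_conc(safe_tissue_level, bcf):
    # Back-calculate the water concentration that keeps tissue levels safe.
    return safe_tissue_level / bcf

# Hypothetical values: overstating the BCF understates the allowable
# water concentration, and vice versa.
bcf = bioconcentration_factor(500.0, 0.01)  # yields 50,000
print(allowable_water_conc(1.0, bcf))       # yields 2e-05

An error in the underlying study thus translates one-for-one into the criterion, which is why petitioners press such challenges despite the deference they face.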

The courts also seem to appreciate, or at least tolerate, Agency efforts to correct models expeditiously when blatant errors are discovered in the Agency's mathematical calculations. When petitioners in American Iron & Steel Institute also identified a mathematical error in EPA's calculation of a final weighted health value for PCBs for water quality, the Agency moved (prior to oral argument) to remand the calculation for further consideration.130 EPA even managed to publish a new rule before the opinion addressing petitioners' other challenges was published. Although the court did not commend EPA explicitly for its voluntary recognition of the error, the opinion implies that the court viewed the Agency's action with approval.

(3) Outdated Studies

The courts generally expect agencies to use current research in promulgating standards, but do not require agencies to adjust models when evidence enters the picture shortly before or after a rule has been promulgated. For example, in Central Arizona Water Conservation District v. U.S. Environmental Protection Agency,131 the court rejected the petitioner's effort to introduce a new report that reached a conclusion different from those in the record supporting EPA's ecosystem model. The court noted that "nothing in the statute or its legislative history indicates that a party or the agency may reopen the record by placing additional materials (other than those required by the statute and wrongfully omitted by EPA) in the docket after promulgation of the rule."132

Similarly, in American Iron & Steel Institute the court upheld EPA's reliance upon outmoded data in determining a reference dose for mercury (used to set water quality standards), even though the Agency acknowledged that the science had changed shortly before it finalized its guidelines. Because EPA was under a court order to promulgate the guidelines, and had made other efforts to update its standards with the new information, the court found that its reliance on the outmoded data was not arbitrary. The court reasoned that "the agency was not obliged to stop the entire process because a new piece of evidence emerged."133 At the same time, the court recognized that agencies "have an obligation to deal with newly acquired evidence in some reasonable fashion."134 In this case, it was sufficient that EPA "mentioned the new evidence in the Guidance itself and subsequently announced that states could base their mercury human health criterion on the revised figure."135

On the other hand, when the information is clearly outdated, the courts are less deferential. In Natural Resources Defense Council, Inc. v. Herrington,136 the court scolded DOE for relying on "antique" information in its slow promulgation of energy-efficiency standards for household products under the Energy Policy and Conservation Act. The court read the Act, including "provisions intended to set a specific outer limit on the time that could elapse between the closing of the record and the promulgation of final rules," to require the Agency to use recent information (rather than "antique" information).137 Since the Agency would be engaged in new data collection, the court did not need to decide "whether DOE's reliance on arguably obsolete information, were it the only potential difficulty in this rulemaking, would justify overturning the rules under review."138

As discussed earlier, in Chlorine Chemistry Council, the court gave the Agency little leeway in accommodating its rule to new evidence. A panel of experts convened by EPA had concluded that the dose-response curve for chloroform did not appear to follow the linear dose-response curve assumed by the Agency in its risk assessment. The Agency resisted using this report in setting its final maximum contaminant level goal because the departure from the default linear model was precedent-setting and, in the Agency's view, merited peer review by EPA's SAB, a process that was not yet complete. The court held that EPA's explanation for not using the report was insufficient: "However desirable it may be for EPA to consult an SAB [33 ELR 10767] and even to revise its conclusion in the future, that is no reason for acting against its own science findings in the meantime."139 The court also denied EPA's motion to vacate its standard, finding that EPA provided no assurance that it would take the report into account in its decisionmaking.140

(4) Data Are Wrongly Excluded

When an agency consciously decides to omit potentially relevant data from a modeling exercise, it must explain why it elected to use some data and to disregard other data. In Flue-Cured Tobacco, the court found EPA's efforts to explain why some epidemiological studies were excluded in a meta-analysis of ETS risks to be incomplete. The Agency, according to the court, failed to explain, much less justify, its selection criteria for the included studies, creating a suspicion that the Agency had "cherry-picked" studies to reach the Agency's desired outcome.141

(5) Data Are Insufficient for a Particular Application

In challenges brought under various protective mandates arguing that the data are insufficient to permit risk assessment or modeling, the courts generally agree with the agencies that Congress does not expect full information as a condition to regulation, and they tend to defer to EPA's judgment about whether the data are sufficiently robust to use in a model or risk assessment.

In American Iron & Steel Institute, the industry argued that EPA's methodology for deriving aquatic health values for water quality was scientifically flawed because EPA permitted the use of minimal data to calculate those values. In rejecting the challenge, the court noted that EPA had established two methods for deriving numerical values, the second of which (Tier II) derived aquatic health values when the data were incomplete. The court refused to equate modeling exercises that made use of the available science, even as "little as a single acute test for a single species," with "inadequate science." The court referred specifically to the Clean Water Act's demand that EPA specify numerical limits on pollutants in the Great Lakes without first requiring that those limits be backed by a full data set. EPA's use of the "best available science" in its methods, which acknowledge that sometimes the relevant scientific data will be limited, was consistent with the statutory command.142
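EPA's actual Tier II methodology is considerably more elaborate, but its safety-factor logic can be sketched simply: derive a value from whatever acute tests exist, dividing by a larger factor when fewer species have been tested. The factor schedule below is hypothetical.

def tier_ii_like_value(acute_values_mg_per_l):
    # Hypothetical schedule: number of tested species -> divisor.
    factors = {1: 100.0, 3: 20.0, 8: 5.0}
    n = len(acute_values_mg_per_l)
    divisor = factors[max(k for k in factors if k <= n)]
    return min(acute_values_mg_per_l) / divisor

print(tier_ii_like_value([12.0]))             # one test: 12 / 100 = 0.12
print(tier_ii_like_value([12.0, 30.0, 9.0]))  # three tests: 9 / 20 = 0.45

The point the court accepted is that a value derived from a single acute test is deliberately more conservative, not "inadequate science."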

The D.C. Circuit, in Small Refiner Lead Phase-Down,143 similarly rejected petitioners' argument that EPA had insufficient evidence of health benefits to support a uniform lead limit for leaded gasoline by reciting EPA's "ample basis" for seeking lower standards and for targeting small refiners in setting the standards.144

In National Oilseed Processors Ass'n v. Browner,145 a federal district court considered petitioners' multiple challenges to EPA's decision to place various chemicals on the toxic release inventory (TRI) list. The primary basis for industry's objection was that the weight of the evidence was insufficient to justify listing each of the contested chemicals. In rejecting virtually all of the challenges, the court considered the statutory command and found that EPA did have credible evidence that was sufficient, within the statute's "sufficient for listing" standard, to justify listing. EPA was entitled to deference when it explained why it listed a chemical based on a weight-of-evidence judgment that others disagreed with146; when it relied primarily on several animal studies within the available evidence147; when it relied on old and poorly explained "Russian studies" in part because they had not been contradicted by subsequent research148; and when studies provided some evidence of hazard and there was no evidence to the contrary.149 The court in National Oilseed also upheld EPA's carefully explained decision to list a category of chemicals based on studies that showed chronic problems in the larger structural family of chemicals, even though there were no studies available on the specific chemicals that EPA listed.150 At several points, the court noted that its job was not "to make an independent evaluation of the scientific evidence" but instead was to determine whether the challengers proved that "EPA's conclusions" were unreasonable.151

The court of appeals, however, rejected EPA's determination that one chemical, bronopol, caused both acute and chronic effects because this conclusion was inconsistent with the Agency's preexisting definitions of "chronic" and "acute" (based on biological repair time), which required a chemical to be categorized as one or the other.152

As discussed above, the Gulf South case again diverges from this general rule, perhaps in part because of the CPSA's "substantial evidence" requirement for rulemakings and its cost-benefit endpoint for setting regulations. In that case the [33 ELR 10768] Fifth Circuit agreed with petitioners that a single rat study was an insufficient basis for banning UFFI. Without elaborating on its credentials as a scientific evaluator, the court opined that it was "not good science to rely on a single experiment, particularly one involving only 240 subjects, to make precise estimates of cancer risks."153

b. Statistical Decisions

Most courts defer to an agency's use of statistics in modeling exercises when an adequate explanation is provided. For example, the D.C. Circuit, in American Iron & Steel Institute, deferred to EPA's selection of a confidence interval for determining permissible effluent quality or monthly maximum effluent limitations because the Agency adequately explained the selection and demonstrated that it was consistent with the statute and past Agency practices.154 In a less detailed opinion, the U.S. Court of Appeals for the First Circuit, in Mision Industrial, Inc. v. U.S. Environmental Protection Agency,155 also deferred to EPA's use of an air diffusion model that carried a large statistical potential for error. The court was satisfied with the Agency's explanation that the errors were generally high only with respect to short-term concentrations and that the model's conservative assumptions compensated for the errors.156

In contrast, in Flue-Cured Tobacco the district court questioned EPA's selection of a lower margin of error for its ETS risk assessment, in large part because the court found that EPA had not satisfactorily explained a statistical decision that deviated from past practices. Although the court did not rule specifically on EPA's use of a 90%, rather than the traditional 95%, confidence interval for its meta-analysis of ETS, it strongly implied that the Agency's deviation from the traditional confidence interval was insufficiently explained and, as a result, likely to be arbitrary. The most problematic aspect of EPA's statistical decision was its departure from a previous draft risk assessment for ETS, a deviation that was not explained in the administrative record and, according to the court, unsatisfactorily explained in the litigation briefs. The Agency's decision was also weakened by the comments of one reviewer who was critical of the use of the lower confidence interval. This, along with the deviation from traditional statistical practice, made the Agency's silence in support of its choice all the more unsatisfactory.157
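The practical consequence of the contested statistical choice is easy to demonstrate. For a pooled relative risk estimated on the log scale, as in a meta-analysis, widening the interval from 90% to 95% can move its lower bound across 1.0, turning a statistically significant excess risk into a nonsignificant one. The estimate and standard error below are hypothetical, not EPA's figures.

import math

def relative_risk_ci(rr, se_log_rr, z):
    # Confidence interval for a relative risk from its log-scale SE.
    return (math.exp(math.log(rr) - z * se_log_rr),
            math.exp(math.log(rr) + z * se_log_rr))

rr, se = 1.19, 0.10                     # hypothetical pooled estimate and SE
print(relative_risk_ci(rr, se, 1.645))  # 90% CI: roughly (1.01, 1.40)
print(relative_risk_ci(rr, se, 1.960))  # 95% CI: roughly (0.98, 1.45)

Because the 90% interval excludes 1.0 while the 95% interval does not, the court's focus on the unexplained switch was not statistical pedantry; the choice could determine whether the association counted as significant at all.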

3. Validation and Peer Review of EPA's Models

Substantive challenges to EPA's models occasionally include allegations that a model has not been adequately validated or peer-reviewed.

a. Validation of Models

The courts seem to appreciate that imperfect information and limited resources require an agency to develop models to approximate reality. Because of the need to cut evidentiary corners, an agency's decision to forgo the validation or calibration of models is usually, but not always, respected by the courts.158 So long as the agency provides a rational explanation for why the model fits the factual context to which it is applied, the courts generally do not demand that the agency test the model before relying upon it.

The notorious exception to this general rule is the Sixth Circuit's previously discussed holding in Ohio159 that EPA had applied its CRSTER emissions diffusion model in an arbitrary manner because it had not validated the model for the unusual location in which it was being used. Even though EPA had refined CRSTER in four separate validation exercises,160 petitioners objected that the model was likely to be unreliable in predicting diffusion of emissions from two power plants located on Lake Erie.161 The court held that under the circumstances, EPA's failure to attempt to validate the model at this unusual location was arbitrary and ordered EPA to develop a plan for validating the CRSTER estimates of emissions from the two plants.162 Acknowledging that at least one circuit (the First) had decided differently and had "not required EPA to test model predictions against monitored air quality data,"163 the court based its nondeferential review in part on its reading of the legislative history of the CAA that indicated Congress' expectation that the courts would conduct a "'searching review' of the basis of EPA's modeling and test procedures."164 The [33 ELR 10769] court later cautioned, however, that it was not insisting that "all models be validated at all sites."165
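Validation of the sort the Ohio court demanded is, at bottom, a comparison of paired model predictions and monitored values using agreed performance statistics. The sketch below computes one such statistic, fractional bias, of the general kind described in the ASTM guide cited in note 4; the paired concentrations are hypothetical.

def fractional_bias(predicted, observed):
    # Fractional bias: 0 means no systematic bias; values approaching
    # +/-2 mean predictions and monitors disagree badly on average.
    mean_p = sum(predicted) / len(predicted)
    mean_o = sum(observed) / len(observed)
    return 2.0 * (mean_o - mean_p) / (mean_o + mean_p)

# Hypothetical paired values: modeled vs. monitored concentrations.
predicted = [48.0, 55.0, 61.0, 40.0]
observed = [52.0, 50.0, 70.0, 45.0]
print(round(fractional_bias(predicted, observed), 3))  # about 0.062

Whether a computed bias of that size is tolerable at an unusual site is exactly the kind of judgment that the First and Sixth Circuits resolved differently.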

b. Peer Review of Models

Despite considerable attention by the White House and various administrative agencies to refining and enlarging peer review processes, inadequate peer review of models does not appear to have surfaced in litigation to date. It is possible that EPA's peer review processes exceed the letter of the law, leaving challengers without a legal argument. Indeed, the only case that speaks to the subject suggests that EPA may not delay a standard simply in order to obtain peer review of a model since "the possibility of contradiction in the future … will always be present" and could be used to postpone a standard indefinitely.166

In a case challenging EPA's risk assessment for ETS, however, petitioners were able to argue that the type of peer review EPA employed did not satisfy Congress' specific requirements for peer review of that particular document. In the Radon Gas and Indoor Air Quality Research Act, Congress specified a particular type of mandatory peer review process for the Agency's ETS risk assessment. The court held that EPA's deployment of an analogous peer review process through its SAB was not a satisfactory equivalent for complying with the Act.167 The mandatory peer review requirement, in the court's view, was so fundamental to the legitimacy of the risk assessment that the Agency's failure to follow it was fatal. The court invalidated the risk assessment,168 but that decision was reversed on other grounds by the Fourth Circuit.169

c. Conflict With Peer Review Recommendations

Several cases suggest that the courts may impose a higher burden on agencies to justify their preferred models and assumptions when they conflict with the views of a "neutral" body of experts such as the National Academy of Sciences. For example, the court in Chlorine Chemistry Council170 set aside EPA's decision to postpone revising a drinking water standard for chloroform in the face of the consensus of a scientific panel that the standard should be revised.171 The court held that if EPA chose to swim against the scientific consensus, it needed to provide a justification.172 Although EPA went to considerable lengths to explain its position and even moved to revise its decision, its explanation did not satisfy the court, which invalidated the Agency's regulation.

4. Agency's Placement of Its Model Within the Larger Context

Challengers also argue that agency models are arbitrary when they depart from prior models without adequate explanation; when there is an alternative model that is preferable; or when the agency simply fails to explain the basis for the result it derives from a model. In reviewing these challenges, courts tend to defer to agencies once the agency provides a clear explanation for its decision.

a. Departure From Prior Models

Courts consider a sudden change in policy, whether or not it involves a model, to be a "'danger signal[]' and demand that the agency 'supply a reasoned analysis indicating that prior policies and standards are being deliberately changed, not casually ignored.'"173 In the cases decided thus far, the courts have not resolved allegations of sudden changes to EPA's models on the merits because they have found that the Agency did not in fact change its model. For example, in National Oilseed, petitioners argued that EPA abruptly departed from its policy of considering exposure in deciding whether to add chemicals to the TRI list. The district court held that EPA had followed past practice of considering exposure only for substances of lower toxicity, while the substances that were the subject of challenge were more toxic.174 In Small Refiner Lead Phase-Down, the court similarly rejected a "change of policy" argument in a challenge to EPA's decision to impose lead standards on small refineries, finding that the Agency had not in fact changed its position.175

b. Alternative Models

Petitioners occasionally argue that an agency has arbitrarily adopted an inferior model relative to alternative models. Since these challenges resemble the "battle of the experts" that the courts traditionally reject, they have generally been unsuccessful. In upholding EPA's preferred method (over alternatives) for estimating bioaccumulation [33 ELR 10770] in American Iron & Steel Institute,176 the court observed that the Agency's explanations were adequate and sufficiently supported the final standard.177 In affirming EPA's choice of models in two CAA cases, the courts similarly noted that they would defer to the "Agency's expertise," since the Agency maintained that the modeling it had used to establish emissions limits under the CAA was more accurate and sensible than the models for determining emissions limits advanced by petitioners.178 In these latter two cases, the courts did not even demand technical support from the Agency establishing that its models were superior.

c. Inadequate Explanation for the Final Result or Number

In a few cases, petitioners have argued that the agency's decision was arbitrary because it provided no explanation for why it selected one number over another when the model did not provide a definitive result. Despite what would seem to be a fatal flaw in the agency's rulemaking, however, courts have sometimes forgiven this lapse. Perhaps the most generous review occurred in the early 1980s in the D.C. Circuit's rejection of a challenge to EPA's lead standard for leaded gasoline produced by small refineries. In Small Refiner Lead Phase-Down, the court noted that while the Agency justified the need for some standard based on health concerns, it did not "adequately explain the health basis for choosing a 1.10 gplg standard rather than some other standard."179 Nevertheless, the court was able to extract an explanation from the larger administrative record and concluded that "in the circumstances of this case, we think that EPA has not crossed 'the line from the tolerably terse to the intolerably mute.'"180

IV. Lessons to Draw From the Substantive Review Cases

A. The Good News: Tolerating Imperfection, With Explanation

1. Judges Know That Models Simplify Reality

The courts understand that agencies must employ models in health and environmental decisionmaking and that models are imperfect representations of reality. Courts are willing to give EPA a great deal of leeway in choosing and applying models in regulatory decisionmaking so long as EPA makes the assumptions that drive the models explicit and accessible.

2. Modeling Does Not Require Perfect Data

The courts understand that the empirical data upon which models are based and the input data from which models extrapolate to make predictions are critical to the accuracy of those predictions. They also understand, however, that the perfect can be the enemy of the good. An agency may rely upon unrepresentative data and imperfect studies if better data cannot be obtained at a reasonable time and expense and the agency explains its reliance on the data it used.

3. Courts Do Not Expect Agencies to Hit Moving Targets

Although the courts expect agencies to use recently produced data that become available before the agency takes final action, agencies need not postpone decisionmaking to take into account new data that arrive after the decisionmaking process has been completed. The courts recognize that if agencies were required to postpone decisionmaking merely because relevant new information became available, they would rarely take any action.

Agencies can help insulate decisions from attacks on this ground if they allow new information to be considered at later decisionmaking points, e.g., at the individual permitting stage, or make it clear that they are prepared to modify the decision in light of newly arriving information through the same notice-and-comment procedures that the agencies use to promulgate the original rules.

4. Consistency Is Helpful, but Not Required

Although an agency's decision to substitute one model for another constitutes a "danger signal" for reviewing courts, the agency may switch models if it carefully explains its reasons for doing so.

B. Learning From Experience: Model Context

1. The Statute Makes a Difference

The courts are likely to be more deferential to an agency's conservative modeling exercise if the agency's statute sends a clear signal for the agency to err on the side of overprotection. When the agency's statute articulates a cost-benefit decision criterion, the courts are less likely to be deferential to the agency's application of an unproved model to imprecise data.

When relying upon a particular regulatory policy, e.g., "err on the side of safety," to justify assumptions underlying the models it employs, an agency is well advised to invoke language in the statute under which it acts or other indicators suggesting that Congress meant for it to advance that policy.

2. The Test for the Scope of Judicial Review May Make a Difference

Two primary standards govern judicial review of rulemaking and other informal agency action. The APA prescribes the "arbitrary and capricious" test. Some agency statutes, however, prescribe the "substantial evidence" test [33 ELR 10771] for judicial review of agency rules. There is precedent in some courts of appeals for the proposition that courts scrutinize agency actions more carefully under the "substantial evidence" test than under the "arbitrary and capricious" test.

3. Peer Review May Make a Difference

Courts rarely require agencies to subject modeling exercises to peer review absent an explicit statutory requirement that they do so. When an agency does request peer review, the courts appear to place a higher burden on the agency to justify a modeling exercise that conflicts with the advice provided by a neutral body of scientific experts.

C. Learning From Experience: Model Merits

1. An Ounce of Ex Ante Explanation Is Worth a Pound of Post-Hoc Rationalization

In the preambles to both the notice of proposed rulemaking and the notice of final rulemaking, the agency should explain why it employed the models that it employed, why it rejected other available models, and why it concluded that the model it employed was appropriate to the particular factual setting. The cogency and comprehensiveness of the agency's explanation and response to challenges may be directly correlated with its likelihood of surviving the challenge without reversal. The clearer and more coherent the explanation, the safer the model.

2. If the Model Doesn't Fit, Don't Wear It

Courts rarely require agencies to validate models by measuring their predictions against real-world data prior to using them in regulatory contexts. When the agency attempts to use a model developed in one factual setting to make predictions in a different factual setting that was not considered in constructing the model, however, it should explain why the model's output should remain within an acceptable margin of error or it should adjust the model to fit the new factual setting and explain the adjustments.

3. Reality Usually Trumps a Model's Representation of Reality

When a commenter presents evidence from the real world that appears to contradict a model's prediction, the agency should gather its own evidence from the real world to rebut the implications of the commenter's evidence, explain why the commenter's evidence does not in fact contradict the model's prediction, or explain why the model's apparent departure from reality does not matter.

D. Judges Are Human

1. Outlier Cases Are Unfortunate but Unavoidable

The courts themselves have a great deal of discretion to permit arbitrary agency modeling or to search for one or more aspects of a modeling exercise that appear arbitrary to the judges. Judicial discretion is especially broad in reviewing challenges to the adequacy of assumptions that go into models and the adequacy of data that the agency relies upon in modeling exercises. This broad discretion means that a judicial holding can deviate greatly from the mean and still fall within the range allowed by applicable legal doctrine.

Judges deviate from mainstream judicial review most frequently in their demand for an unrealistically large amount of scientific support for occasional assumptions in agency models.181 These opinions effectively penalize EPA and other agencies for being explicit about inevitable uncertainties and inherent weaknesses in their models. Yet EPA has at times overcompensated for these aberrant judicial requirements by concealing weaknesses and limitations in its data and model assumptions, leading to mainstream reversals because key features of its models are insufficiently explained or justified.

In the final analysis, there is very little that a party can do to force a deferential court to overturn arbitrary agency action and there is very little that an agency can do to avoid reversal by a panel of activist judges who are determined to advance policy agendas that differ from those of the agency. Nevertheless, if the agency explains its choices and makes the scientific and policy bases for the assumptions that it employs in its modeling exercises transparent, the courts are less likely to engage in overt usurpation of the agency's properly delegated decisionmaking power. And while this transparency could lead an outlier panel to reverse an agency rule because it believes a model is inadequately supported, the proper response for an agency cannot be to hide the uncertainties from both the judiciary and the public. The answer has to be that ultimately the "policy lean" that the agency uses to resolve modeling uncertainties must be derived from the signals in the agency's statute. Accordingly, the review of agency policy choices in the first instance is simply a matter of statutory interpretation, and the agencies should receive Chevron (Chevron, U.S.A., Inc. v. Natural Resources Defense Council, Inc.182) deference when they adopt assumptions and model decisions that are in keeping with their authorizing statute.

V. The DQA

Although the DQA provides an entirely new opportunity for affected parties to challenge EPA models, it is unlikely (but of course not impossible given section D.1. above) that the DQA will alter how and when the courts review agency models. First, as discussed more fully above at section II.A., courts will probably refuse to entertain DQA challenges to models that are not part of a larger final agency rulemaking. Since the DQA does not provide a separate mechanism for judicial review, DQA challenges must be brought to the courts through the APA or other authorizing statute. The APA allows only "final agency action" to be subject to review. Thus even if the complainant succeeds in convincing the court that an agency's denial of a request for correction is "final," the courts require a "direct" legal consequence to flow to the challenger.183 It will be a rare complainant that can establish that there will be such a "direct" consequence [33 ELR 10772] from the dissemination of a model without accompanying regulatory requirements.

Once a DQA challenge does reach the court—either as one of several claims against a final rulemaking or as an independent challenge—the court must apply the same standard of review to DQA challenges that it applies to models and rulemakings under the APA. The DQA's requirement that the agencies establish processes to ensure the quality of information they disseminate184 does not affect the courts' approach to reviewing how well the agency complied with these procedures, once established. In considering a challenge to an agency's denial of a request for correction, the courts will continue to review the agency's decision under the APA and determine whether the decision is "arbitrary" in light of the facts and the statutory directive.185 The case law discussed above at section III.B. would thus guide the courts in deciding whether an agency's refusal to correct information related to a model is arbitrary in light of the evidence offered by the challenger.

It is not even clear that the DQA presents any new sources of arguments for criticism of agency models. While on paper the DQA offers challengers additional claims, for example that an agency violated its DQA guidelines in one way or another in developing a model, in practice these claims appear to do little more than duplicate generic APA claims that an agency's model was arbitrary because it was not accurate, objective, or useful.186 EPA's adaptation of the Safe Drinking Water Act principles for risk assessments similarly does not appear to create a new set of claims since EPA reserves for itself considerable flexibility in applying the principles only "to the extent practicable."187

Despite this preliminary conclusion that the DQA is unlikely to alter the nature of judicial review, EPA models will be subject to an entirely new set of administrative challenges under the DQA. The petitions to date indicate that the types of challenges to EPA models will be substantively similar to those EPA has encountered in court challenges in the past,188 although EPA is now required to respond administratively to the complaints within a limited time, irrespective of the status of the model or larger rulemaking. Since a model is always a work in progress, the ability of affected parties to challenge a model under the DQA as soon as new data become available or at other points in its development could lead to high administrative costs. Agencies may thus need to consider on a model-by-model basis whether the benefits of public input on a disseminated model, especially a preliminary model posted in advance of a rulemaking, outweigh the expected costs of responding to DQA complaints. For these and other considerations EPA must make under the DQA, see Table 1.

Table 1: Model Survival Strategies

Pre-rulemaking (category 3):

* think twice before you disseminate a model prior to a rulemaking
* provide comprehensive disclaimers for the model that explain its preliminary status; the basis for the model assumptions and data; and the ways it is incomplete
* in responding to DQA requests for correction, identify those features of the model that are not "information"

Proposed rulemaking (category 1):

* describe the model in some detail
* identify the assumptions upon which the model relies
* explain why those assumptions are valid in the particular context in which it is applying the model
* specifically request comments on the validity of the assumptions and their use in the modeling exercise

Final rulemaking preamble (category 1):

* an ounce of ex ante explanation is worth a pound of post-hoc rationalization: Explain in detail why comments and critiques have been rejected
* if the model doesn't fit, don't wear it
* reality usually trumps a model's representation of reality
* the statute makes a difference

[33 ELR 10773]

Based on the DQA's exclusive focus on "information quality" and its sole source of relief—to provide a process for "correcting" "information"—one would expect the DQA to apply only to features of models that are standardized, i.e., accurate data, rather than to features of the model that depend on the circumstances under which the model is being used.189 Petitioners might complain, for example, that a data source is unrepresentative or that the data are not "objective" if they are submitted by private parties who have placed contractual constraints on their scientists' ability to publish their findings. If a petitioner's complaint about a data source had merit, EPA then might have to decide whether to exclude those data entirely or use them but weight them according to their flaws. Petitioners might also object if EPA uses models prepared by third parties that are not fully explained.190 These types of complaints would all target weaknesses in the transparency of "information" used in EPA's modeling exercises, and EPA would need to explain why these features of the model cannot be disclosed. By contrast, when EPA uses a model because it offers a means of predicting harm to health or ecosystems, even though the model's assumptions cannot be validated and the data sets are by necessity incomplete, this decision and the underlying model are not "information" per se. Disagreements over the use of the model in that case involve differences over regulatory policy rather than "information" or "data." Allowing these challenges to be resolved (or even filed) under the DQA, which only establishes a process for ensuring the quality of science or data used in regulation, would confuse and potentially overshadow the underlying policy disagreements. Using the DQA as the method for resolving the dispute would also allow regulators and challengers to sidestep the authorizing statutes in making policy determinations about the appropriateness of a model in any given regulatory setting.

Interestingly, however, the petitions filed against EPA to date generally do not take issue with standard features of the data used in EPA models or the transparency of third-party models, but instead attack the appropriateness and general conservatism of EPA's model assumptions.191 Since model assumptions are not strictly "information" as defined by the Office of Management and Budget,192 EPA should be vigilant in mapping the various challenges against the anatomy of the model itself193 and in explicitly identifying when it is the particular use of a model, or even a model assumption, rather than some erroneously reported fact, that is being challenged. If a petitioner demonstrates that the model (data or assumptions) is invalid and flatly refuted by reality and that an alternate model or data set (without added expense) will be more accurate,194 then the Agency should take the challenge seriously and consider revising its model or explain why the adjustment is not needed. When challenges are not supported by evidence that the model is clearly refuted by reality, but instead take issue with the agency's unvalidatable model assumptions or underlying policy decisions to use a model, agencies should be able to fend off a DQA attack by arguing that the challenge does not fall within the information quality procedures that the DQA requires. At most, after noting that this type of challenge does not go to "information" per se, the agency should provide a clear explanation of why it used the model in the way it did, why it used a particular data set, and why it adopted some challenged assumptions over alternatives. Where possible, EPA should also identify how its use of the model, its selection of the data, and its model assumptions are consistent with the directions of the authorizing statute.

Based on the history of judicial review of agency models described above and the nature of the DQA petitions filed to date, it is likely that petitioners filing administrative petitions under the DQA will continue to take full advantage of this nebulous fact/policy quality of models and characterize models as if they were exclusively factual/technical exercises.195 Challengers will find this strategy beneficial first and foremost because DQA challenges must be directed to "information." It does not strengthen a petition to acknowledge that many facets of a model under attack concern not technical issues, facts, or data but instead the agency's policy choices regarding use of the model or adoption of certain model assumptions. Second, petitions to date have been filed against EPA, with one exception, by those who benefit from delay. Since challenges that confuse fact and policy and raise detailed, technical criticisms are more likely to delay an agency, even if they ultimately fail, this feature is consistent with some of the petitioners' interests. Third, the DQA challengers' attempt to blur policy into science runs counter to the protective laws and regulations that Congress has enacted, because it rests on the insistence that regulations can be based on solid science rather than a mix of scientific data and predictions. Again, facets of some DQA challenges, most notably arguments that models or [33 ELR 10774] risk assessments must be validated, must produce statistically significant predictions before use, or must employ protocols that have been preapproved by the agency, not only threaten to delay agency action, but also endeavor to impose onerous added requirements on agency models before the agency can rely on them. Even though these challenges are really aimed at agency policy, the complaints are misleadingly framed as if questions about the extent of peer review, validation, and statistical significance were factual determinations that have ascertainably "correct" answers. Protective health and environmental statutes, by contrast, recognize that agencies must act before all of the scientific facts can be ascertained.

VI. Conclusion

EPA's 30 years of experience in defending its models provide the most helpful guide for anticipating and defending against challenges in the future. First and foremost, the Agency should resist petitioners' efforts to mischaracterize models as technical exercises that are always capable of validation, calibration, and "accurate" results or predictions. Once the technical disagreements, i.e., whether data are representative or accurate, are separated from model assumptions that often must be based on very crude approximations and policy choices, the Agency can better explain and defend its judgments. The courts have also made it clear that the Agency need not provide definitive evidence that its model assumptions or its choice of one model over others is superior, merely that its decision is justified and not arbitrary. EPA is thus well-advised to provide detailed descriptions of its model in its proposed rulemaking, and, most importantly, to provide careful, coherent explanations for those decisions that are challenged (or to alter them if the challenger is correct) in its preamble to a final rule or its resolution of a DQA complaint.

1. 5 U.S.C. §§ 551-559, 701-706, available in ELR STAT. ADMIN. PROC.

2. The Data Quality Act, Section 515 of the Treasury and General Government Appropriations Act for Fiscal Year 2001, Pub. L. No. 106-554.

3. See Figure 1. Although EPA uses both mechanistic (or theory-based) models and empirical (data-based) models, the models discussed in this Article are predominantly mechanistic models or a hybrid of both types. See generally Kenneth H. Reckhow & Steven Chapra, Modeling Excessive Nutrient Loading in the Environment 2 (Nov. 12, 1998) (unpublished manuscript, on file with the Nicholas School of the Environment, Duke University) (discussing these types of models and recent developments in the modeling of nutrient loading).

4. For a more complete discussion of the assumptions involved in selecting the data (or "observations") that are entered into models, see ASTM, STANDARD GUIDE FOR STATISTICAL EVALUATION OF ATMOSPHERIC DISPERSION MODEL PERFORMANCE, D-6589, at 3-4 (2000).

5. Reckhow & Chapra, supra note 3, at 4.

6. See generally Charles D. Case, Problems in Judicial Review Arising From the Use of Computer Models and Other Quantitative Methodologies in Environmental Decisionmaking, 10 B.C. ENVTL. AFF. L. REV. 251 (1982); Camille V. Otero-Phillips, What's in the Forecast? A Look at the EPA's Use of Computer Models in Emissions Trading, 24 RUTGERS COMPUTER & TECH. L.J. 187, 206-08 (1998) (discussing four different classes of atmospheric models used by EPA that provide differing levels of accuracy and predictive power and thus offer different tools depending on the regulatory goals and available data).

7. The courts' periodic fall from this tightrope walk, evidenced in several outlier opinions discussed later where the courts second-guess agency policy choices, is a great source of concern among administrative law scholars. See JERRY L. MASHAW & DAVID L. HARFST, THE STRUGGLE FOR AUTO SAFETY 225 (1990) (observing that judicial second-guessing of agency policy choices played a significant role in causing the National Highway Traffic Safety Administration to abandon efforts to set systematic policy and to resort instead to ad hoc recalls of automobile defects, with "the result of judicial requirements for comprehensive rationality [being] a general suppression of the use of rules"); Thomas O. McGarity, Politics by Other Means; Law, Science, and Policy in EPA's Implementation of the Food Quality Protection Act, 53 ADMIN. L. REV. 103, 219 (2001) (observing that "if EPA can expect a lawsuit every time it engages in macro-policymaking in a generic rule or policy statement, it may engage in transparent generic policymaking less frequently and move regulatory policymaking to lower levels within the agency"); Richard J. Pierce Jr., Two Problems in Administrative Law: Political Polarity on the District of Columbia Circuit and Judicial Deterrence of Agency Rulemaking, 1988 DUKE L.J. 300, 301-02 (arguing that judges on the U.S. Court of Appeals for the District of Columbia (D.C.) Circuit may be substituting their own interpretations of ambiguous statutes for agencies' and randomly reversing agency policymaking in rulemakings).

8. 5 U.S.C. § 704, available in ELR STAT. ADMIN. PROC.

9. RICHARD J. PIERCE JR. ET AL., ADMINISTRATIVE LAW AND PROCESS § 5.7.1 (1999).

10. Dalton v. Specter, 511 U.S. 462 (1994).

11. Bennett v. Spear, 520 U.S. 154, 27 ELR 20824 (1997).

12. Abbott Laboratories v. Gardner, 387 U.S. 136 (1967).

13. Id. at 149.

14. See Part III.A. infra.

15. The courts might be willing to entertain a challenge to a model that the Agency proposed to rely upon in a particular proceeding prior to completion of that proceeding if the Agency was also using the model in other contexts that met the relevant requirements for finality and ripeness. In other words, EPA should not be allowed to use a pending rulemaking action as an excuse to forestall judicial review of a model that it is employing in other contexts when the use in those contexts has direct consequences and is otherwise fit for judicial resolution.

16. DQA § 515(b)(2)(B). This appears to be the position that EPA has taken in its DQA guidelines. Those guidelines provide:

When EPA provides opportunities for public participation by seeking comments on information, the public comment process should address concerns about EPA's information. For example, when EPA issues a notice of proposed rulemaking supported by studies and other information described in the proposal or included in the rulemaking docket, it disseminates this information within the meaning of the Guidelines. The public may then raise issues in comments regarding the information. If a group or an individual raises a question regarding information supporting a proposed rule, EPA generally expects to treat it procedurally like a comment to the rulemaking, addressing it in the response to comments rather than through a separate response mechanism.

U.S. EPA, GUIDELINES FOR ENSURING AND MAXIMIZING THE QUALITY, OBJECTIVITY, UTILITY, AND INTEGRITY OF INFORMATION DISSEMINATED BY THE ENVIRONMENTAL PROTECTION AGENCY § 8.5 (2003).

17. See Heckler v. Chaney, 470 U.S. 821, 15 ELR 20335 (1985) (agency decision not to prosecute is not judicially reviewable in absence of clear statutory guidelines for decision); Sierra Club v. Thomas, 828 F.2d 783, 17 ELR 21198 (D.C. Cir. 1987); Texas v. Department of Energy, 764 F.2d 278, 15 ELR 20711 (5th Cir. 1985) (designation of two sites as potentially acceptable sites for nuclear waste repository not ripe for judicial review). But see Eagle-Picher Indus. v. EPA, 759 F.2d 905, 15 ELR 20467 (D.C. Cir. 1985) (judicial review of placing sites on national priorities list is appropriate when statute articulates clear criteria for placing sites on the list). For a comprehensive discussion of the finality and ripeness doctrines in the context of EPA's use of risk assessment models in internal priority-setting, see John S. Applegate, Worst Things First: Risk, Information, and Regulatory Structure in Toxic Substances Control, 9 YALE J. ON REG. 277, 336-37 (1992) ("Typically, judicial review of agency action has been limited to the last two stages of the regulatory process: the choice of response (promulgation of a rule) and enforcement. Review of a prior stage, including priority setting, has been precluded by the doctrines of finality and ripeness.").

18. Bennett v. Spear, 520 U.S. 154, 27 ELR 20824 (1997).

19. Abbott Laboratories v. Gardner, 387 U.S. 136 (1967).

20. See Robert A. Anthony & David A. Codevilla, Pro-Ossification: A Harder Look at Agency Policy Statements, 31 WAKE FOREST L. REV. 667, 672 n.21 (1996) (collecting cases).

21. 313 F.3d 852, 33 ELR 20113 (4th Cir. 2002). The Competitive Enterprise Institute's DQA request for correction of the report, U.S. EPA, CLIMATE ACTION REPORT 2002 (2003), available at http://www.epa.gov/oei/qualityguidelines/afreqcorrectionsub/7428.pdf, if it is ultimately taken to court, presents another example of a Class III challenge.

22. EPA had voluntarily employed notice-and-comment rulemaking procedures in drafting the risk assessment.

23. 5 U.S.C. § 702, available in ELR STAT. ADMIN. PROC.

24. 520 U.S. 154, 177-78, 27 ELR 20824, 20829 (1997) (to be "final," the action must mark the "consummation" of the agency's decisionmaking process and it must be one by which "rights or obligations have been determined," or from which "legal consequences will flow").

25. 313 F.3d at 860, 33 ELR at 20113.

26. Id. at 861, 33 ELR at 20113.

27. Id.

28. EPA can, of course, fix minor technical errors in models or modeling outputs without allowing notice and comment on the minor technical changes. Appalachian Power Co. v. EPA (III), 251 F.3d 1026, 1039, 31 ELR 20670, 20674 (D.C. Cir. 2001) (notice and comment not required on minor technical changes to emissions budgets based upon modeling exercise).

29. 838 F.2d 1317, 18 ELR 20473 (D.C. Cir. 1988).

30. Id. at 1321-22, 18 ELR at 20476.

31. 28 F.3d 1259, 24 ELR 21210 (D.C. Cir. 1994).

32. Id. at 1263, 24 ELR at 21212 (citations omitted).

33. 5 U.S.C. § 706(2), available in ELR STAT. ADMIN. PROC.

34. Id. § 706(2)(E), available in ELR STAT. ADMIN. PROC.

35. Id. § 706(2)(A), available in ELR STAT. ADMIN. PROC.

36. Universal Camera v. National Labor Relations Bd., 340 U.S. 474 (1951).

37. Id. (quoting Consolidated Edison Co. v. National Labor Relations Bd., 305 U.S. 197, 229 (1938)).

38. Motor Vehicle Mfrs. Ass'n of the United States v. State Farm Mut. Auto. Ins. Co., 463 U.S. 29, 42, 13 ELR 20672, 20676 (1983).

39. PIERCE ET AL., supra note 9, § 7.3.3.

40. American Paper Inst. v. American Elec. Power Serv. Corp., 461 U.S. 402, 412 n.7 (1983).

41. Aqua Slide 'N' Dive v. Consumer Prod. Safety Comm'n, 569 F.2d 831 (5th Cir. 1978). See also Gulf S. Insulation v. Consumer Prod. Safety Comm'n, 701 F.2d 1137, 1142-43, 13 ELR Digest 20850 (5th Cir. 1983).

42. It is also possible that the courts will review modeling exercises less deferentially when they are not associated with notice-and-comment rulemaking proceedings. See Anthony & Codevilla, supra note 20, at 667 (suggesting that courts employ a less deferential test for substantive judicial review of policy statements than for legislative rules promulgated through informal rulemaking procedures). No court, however, has adopted this position.

43. Sierra Club v. U.S. Forest Serv., 878 F. Supp. 1295, 1310, 25 ELR 20313, 20318 (D.S.D. 1993) (citations omitted), aff'd, 46 F.3d 835, 25 ELR 20799 (8th Cir. 1995). See also National Oilseed Processors Ass'n v. Browner, 924 F. Supp. 1193, 1217, 26 ELR 21453, 21464 (D.D.C. 1996), aff'd in part & rev'd in part on other grounds, Troy v. Browner, 120 F.3d 277, 27 ELR 21548 (D.C. Cir. 1997) (after noting the extensive effort EPA dedicated to identifying appropriate candidates for listing on the toxic release inventory (TRI) list, the district court concludes that although EPA's explanations were not always as complete as the court wished, "the principles of administrative law do not demand perfection in the administrative process …. EPA went to great lengths to separately evaluate each and every chemical on the basis of the relevant data, and to make scientific and technical judgments which are clearly outside the expertise of a reviewing court.").

44. The Court seemed to encourage this super-deference. See Baltimore Gas & Elec. Co. v. Natural Resources Defense Council, 462 U.S. 87, 103, 13 ELR 20544, 20548 (1983) ("When examining this kind of scientific determination [i.e., one at the 'frontiers of scientific knowledge'], as opposed to simple findings of fact, a reviewing court must generally be at its most deferential.").

45. Even when multifaceted, technical challenges such as these fail, they demand considerable resources from the court and the Agency to litigate. The absence of sanctions, such as imposing court costs, however, can make this type of challenge relatively costless for the plaintiffs, and thus EPA should expect a fair number of groundless, but highly detailed and technical, complaints throughout the notice-and-comment process and into litigation.

46. In one case, for example, the plaintiffs' dozen or more complaints about an EPA model were so poorly explained and so unsupported by the record that the court rejected the arguments on their face since they did not present a credible challenge to the validity of EPA's model. In Appalachian Power Co. v. EPA (I), the electric utilities and industry groups challenged a variety of aspects of EPA's inputs to its models, all of which the court found to be without basis or record support, and some of which were not even supported in the briefs themselves. 135 F.3d 791, 804, 812-16, 28 ELR 20521, 20524, 20526, 20528-30 (D.C. Cir. 1998) (rejecting challenges to the comprehensiveness of the model's database; to minor assumptions in the model that were unsupported by plaintiffs; to the significance of certain variables such as cost; to the weighting of smaller boilers; and to the calculation of cost-effectiveness of certain burners and processes). The district court similarly reiterated and rejected in footnotes multifaceted challenges to EPA's listing of several chemicals, citing EPA's comprehensive responses in guidelines, the record, and the briefs, see National Oilseed, 924 F. Supp. at 1207 n.19, 26 ELR at 21459 n.19, although the D.C. Circuit reversed the listing of two chemicals on appeal, citing EPA's insufficient explanation or citation to the relevant studies. Troy, 120 F.3d at 291, 293, 27 ELR at 21554, 21555.

47. National Oilseed, 924 F. Supp. at 1209, 26 ELR at 21459 (citations omitted).

48. Moreover, since substantive challenge to agency models take issue with the agency's response to comments, extensive or anticipatory defenses of agency models, assumptions, data sets, etc. in the proposed rule may not be necessary to survive judicial review.

49. Courts seem to dismiss speculative assertions that the model will fail without more proof. See, e.g., Association of Battery Recyclers, Inc. v. EPA, 208 F.3d 1047, 1062, 30 ELR 20512, 20518 (D.C. Cir. 2000); Edison Elec. Inst. v. EPA, 2 F.3d 438, 23 ELR 21173 (D.C. Cir. 1993).

50. For a detailed discussion of the extensive litigation against EPA air models, see Case, supra note 6, at 304-36; see also Michael S. McMahon & Steven D. Hinkle, State of Ohio v. EPA: Does the Sixth Circuit Have a New Standard for Its Review of the EPA's Use of Air Quality Modeling?, 18 U. TOL. L. REV. 569, 582-84 (1987).

51. Small Refiner Lead Phase-Down Task Force v. EPA, 705 F.2d 506, 535, 13 ELR 20490, 20505 (D.C. Cir. 1983) (quoting American Pub. Gas Ass'n v. Federal Power Comm'n, 567 F.2d 1016, 1039 (D.C. Cir. 1977) (emphasis added)).

52. American Iron & Steel Inst. v. EPA, 115 F.3d 979, 1005, 27 ELR 21241, 21252 (D.C. Cir. 1997) (citations omitted).

53. Small Refiner Lead Phase-Down, 705 F.2d at 535, 13 ELR at 20505. Borrowing from the employment discrimination case law, the D.C. Circuit 15 years later reiterated the standard for review of these challenges:

As we have previously noted, the party challenging the use of such a model "cannot undermine a regression analysis simply by pointing to variables not taken into account that might conceivably have pulled the analysis's sting." [Citations omitted.] Rather, that party must identify clearly major variables the omission of which renders the analysis suspect. This conclusion, derived from employment discrimination cases, holds equally true in this context, even more so because of the deference due to an agency's scientific analysis.

Appalachian Power Co. v. EPA (I), 135 F.3d 791, 804, 28 ELR 20521, 20525 (D.C. Cir. 1998) (petitioners challenged the failure of EPA to consider the aging and impaired performance of various boilers in setting boiler emissions standards).
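The distinction this standard draws between "major" omitted variables and those that merely "might conceivably have pulled the analysis's sting" can be illustrated numerically. The following sketch (in Python, using hypothetical data invented purely for illustration; the variable names do not come from any case record) shows that omitting a variable strongly correlated with an included regressor biases the fitted coefficient, while omitting a minor, unrelated variable leaves the estimate essentially unchanged:

import numpy as np

rng = np.random.default_rng(0)
n = 5000
size = rng.uniform(10, 100, n)              # included regressor (e.g., boiler size)
age = 0.5 * size + rng.normal(0, 5, n)      # "major" variable, correlated with size
humidity = rng.normal(0, 1, n)              # minor variable, uncorrelated with size

# Hypothetical data-generating process: the outcome depends on all three.
emissions = 2.0 * size + 3.0 * age + 0.1 * humidity + rng.normal(0, 10, n)

def coef_on_size(regressors, y):
    # Ordinary least squares with an intercept; returns the coefficient
    # on the first regressor (size).
    X = np.column_stack([np.ones(len(y))] + regressors)
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print(coef_on_size([size, age, humidity], emissions))  # ~2.0 (full model)
print(coef_on_size([size, humidity], emissions))       # ~3.5: omitting the major variable biases the estimate
print(coef_on_size([size, age], emissions))            # ~2.0: omitting the minor variable is harmless

On this logic, a petitioner pointing only to a "humidity"-type omission has not rendered the analysis suspect, while one who clearly identifies an "age"-type omission may have.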

54. Appalachian Power Co. (I), 135 F.3d at 804, 28 ELR at 20525; Eagle-Picher Indus. v. EPA, 759 F.2d 905, 15 ELR 20467 (D.C. Cir. 1985) (upholding EPA's Hazard Ranking System (HRS) against a challenge that it did not make sufficient adjustments to accommodate the unique features of mining sites); Edison Elec. Inst. v. EPA, 2 F.3d 438, 23 ELR 21173 (D.C. Cir. 1993) (challenge to EPA's failure to account for biodegradation in the toxicity characteristic leaching procedure (TCLP) model for predicting the leaching of potential wastes); National Wildlife Fed'n v. EPA, 286 F.3d 554, 565, 32 ELR 20607, 20610 (D.C. Cir. 2002) (petitioners challenged EPA's use of a bankruptcy model as outdated and as having a 15% error rate); American Iron & Steel Inst., 115 F.3d at 979, 27 ELR at 21241 (petitioners challenged EPA's calculation of the water quality value for mercury because EPA's model was overly simplistic in not accounting for the wide variations in mercury concentrations that occur in nature and in not accounting for fish ingestion of mercury from sediments). These arguments necessarily overlap with arguments that, in applying the model, the agency insufficiently accounted for the unique features of a particular location or activity. See section 1.b. infra. The category of arguments discussed here generally attacks the models earlier, before they have been applied to reach specific conclusions: petitioners argue that the overarching model should include additional variables, rather than that the model is inapposite for a particular location or activity.

55. See, e.g., Eagle-Picher Indus., 759 F.2d at 905, 15 ELR at 20467 (noting that the challenged HRS is used only to make a preliminary division between sites for ranking, allowing other features to be considered later and throughout the rulemaking, and that the Agency "clearly indicated its awareness of the limitations of the model, including those regarding its application to mining waste sites"); American Iron & Steel Inst., 115 F.3d at 1004, 27 ELR at 21252 (the Agency did incorporate site-specific data into the mercury calculation, observed that fish consumption of mercury in sediments is related to concentrations in the water column, and issued guidance allowing for further adjustments if needed); cf. National Wildlife Fed'n, 286 F.3d at 565, 32 ELR at 20610 (in upholding a bankruptcy model, the court, rather than EPA, provided justifications for finding that the model was nevertheless reliable).

56. See, e.g., Edison Elec. Inst., 2 F.3d at 448-49, 23 ELR at 21179 (EPA argued that it could not derive a biodegradation rate because of inadequate data, particularly given the variability in degradation rates under differing pH and temperature conditions). The Agency made both arguments in Appalachian Power Co. (I), 135 F.3d at 804-05, 28 ELR at 20525.

57. Eagle-Picher Indus., 759 F.2d at 921, 15 ELR at 20475 ("the EPA adequately addressed the substance of each of the petitioners' complaints in its response to comments. We find that the agency's explanations are reasonable and see no reason to rehearse them here."); Edison Elec. Inst., 2 F.3d at 449, 23 ELR at 21179 (EPA "reasonably rejected" petitioners' arguments and data and its refusal to add an adjustment for biodegradation in the TCLP model was not arbitrary); American Iron & Steel Inst., 115 F.3d at 1005, 27 ELR at 21253 ("Petitioners have not demonstrated to us that the agency's explanation is irrational. We therefore reject their contention that use of the model was arbitrary.").

58. See, e.g., Appalachian Power Co. (I), 135 F.3d at 804-05, 28 ELR at 20525. But see National Wildlife Fed'n, 286 F.3d at 565-66, 32 ELR at 20610 (arguing that the bankruptcy model itself was reliable, rather than discussing the Agency's support for its continued reliance on the model).

59. 40 F.3d 392, 25 ELR 20158 (D.C. Cir. 1994).

60. Id. at 403-05, 25 ELR at 20163.

61. Appalachian Power Co. v. EPA (II), 249 F.3d 1032, 31 ELR 20635 (D.C. Cir. 2001); accord, in an analogous challenge, Appalachian Power Co. v. EPA (III), 251 F.3d 1026, 1034-35, 31 ELR 20670, 20672 (D.C. Cir. 2001).

62. 249 F.3d at 1033-35, 31 ELR at 20672.

63. 4 F. Supp. 2d 435, 28 ELR 21445 (M.D.N.C. 1998), vacated by 313 F.3d 852, 33 ELR 20113 (4th Cir. 2002).

64. See, e.g., id. at 457, 28 ELR at 21455 ("The record presents no evidence of EPA establishing similarity criteria before the Assessment.").

65. Association of Battery Recyclers, Inc. v. EPA, 208 F.3d 1047, 1062, 30 ELR 20512, 20518 (D.C. Cir. 2000).

66. See, e.g., U.S. EPA, AGENCY GUIDANCE FOR CONDUCTING EXTERNAL PEER REVIEW OF ENVIRONMENTAL REGULATORY MODELING § II.A. (2002), available at http://cfpub.epa.gov/crem/modelpr.cfm (describing "application niche").

67. Appalachian Power Co. (III), 251 F.3d at 1035, 31 ELR at 20672.

68. Mision Indus., Inc. v. EPA, 547 F.2d 123, 7 ELR 20096 (1st Cir. 1976) (rejecting petitioners' challenge that EPA's diffusion model was inadequate because it did not take into account terrain turbulence or incorporate data from more than three weather stations, since EPA explained that the simplifications led to more conservative, worst-case predictions (presumably in keeping with the statute) than if these features had been accounted for); South Terminal Corp. v. EPA, 504 F.2d 646, 4 ELR 20768 (1st Cir. 1974) (rejecting petitioners' attack on the accuracy of EPA's rollback model "because of its purported failure to take account of local topography and meteorology," which were considered in EPA's technical support document for the area).

69. The court held that the Forest Service provided multiple grounds to support its prediction of the effects of timber sales on habitat in one area and did not rely solely on a challenged, generic habitat model. Sierra Club v. U.S. Forest Serv., 878 F. Supp. 1295, 25 ELR 20313 (D.S.D. 1993), aff'd, 46 F.3d 835, 25 ELR 20799 (8th Cir. 1995). The agency also conducted surveys to support its conclusions and entered site-specific characteristics into the model. Id. at 1308-10, 25 ELR at 20317-19. All of these features, the court held, supported the resulting predictions:

As long as an agency reveals the data and assumptions upon which a computer model is based, allows and considers public comment on the use or results of the model, and ensures that the ultimate decision rests with the agency, not the computer model, then the agency use of a computer model to assist in decision making is not arbitrary and capricious.

Id. at 1310, 25 ELR at 20319.

70. 784 F.2d 224, 16 ELR 20447 (6th Cir.), on reh'g, 798 F.2d 880, 16 ELR 20870 (6th Cir. 1986). For a sharp critique of the approach taken by the court in Ohio, see McMahon & Hinkle, supra note 50, at 582-85.

71. 798 F.2d at 882, 16 ELR at 20870.

72. Id.

73. Appalachian Power Co. (II), 249 F.3d at 1050, 31 ELR at 20645 (petitioners argued that EPA's air model failed to model individual sources and thus ignored the effects of industrial sources' lower smokestacks, but because petitioners did not quantify the effect or show that it mattered, they failed to shift the burden to EPA to defend its model).

74. 705 F.2d 506, 13 ELR 20490 (D.C. Cir. 1983).

75. 28 F.3d at 1259, 24 ELR at 21210.

76. Id. at 1264, 24 ELR at 21212.

77. Id. at 1265, 24 ELR at 21213.

78. 2 F.3d 438, 23 ELR 21173 (D.C. Cir. 1993).

79. Id. at 443-46, 23 ELR at 21176-79.

80. Id. at 446, 23 ELR at 21177.

81. Id., 23 ELR at 21178.

82. Id. at 447, 23 ELR at 21178.

83. 208 F.3d 1047, 30 ELR 20512 (D.C. Cir. 2000).

84. Id. at 1062, 30 ELR at 20518.

85. Id. at 1063, 30 ELR at 20518.

86. Id. at 1064, 30 ELR at 20518.

87. 139 F.3d 914, 28 ELR 21106 (D.C. Cir. 1998).

88. Id. at 922-24, 28 ELR at 21109-10.

89. Id.

90. Id. at 923, 28 ELR at 21109 (citations omitted) (emphasis in original).

91. See Thomas O. McGarity, Substantive and Procedural Discretion in Administrative Resolution of Science Policy Questions: Regulating Carcinogens in EPA and OSHA, 67 GEO. L.J. 729 (1979).

92. See, e.g., International Harvester Co. v. Ruckelshaus, 478 F.2d 615, 3 ELR 20133 (D.C. Cir. 1973) (viewing with doubt the regulatory assumptions, such as proper maintenance by owners, replacement of catalytic converters, and future lead levels in gasoline, underlying EPA's decision that auto manufacturers could meet 1975 emissions standards, since the assumptions were unsupported by evidence).

93. 701 F.2d 1137, 13 ELR Digest 20850 (5th Cir. 1983).

94. Aqua Slide 'N' Dive v. Consumer Prod. Safety Comm'n, 569 F.2d 831 (5th Cir. 1978).

95. As discussed below, the court was critical of the Agency for not randomly selecting the 1,164 test homes in estimating formaldehyde exposure levels and found that the model's predictions were tainted by the failure to use exposure data from "average" formaldehyde-treated homes. 701 F.2d at 1146, 13 ELR Digest at 20850.

96. Id. Legal scholars and scientists have been very critical of the Gulf South opinion, accusing the court of overreaching in an area that was beyond judicial competence. See Thomas O. McGarity, Some Thoughts on "De-Ossifying" the Rulemaking Process, 41 DUKE L.J. 1385, 1417-19 (1992); Nicholas A. Ashford et al., A Hard Look at Federal Regulation of Formaldehyde: A Departure From Reasoned Decisionmaking, 7 HARV. ENVTL. L. REV. 297, 368 (1983) ("We find the Fifth Circuit's analysis to be unpersuasive in its evaluation of CPSC's cancer risk assessment for formaldehyde."); Devra L. Davis, The "Shotgun Wedding" of Science and Law: Risk Assessment and Judicial Review, 10 COLUM. J. ENVTL. L. 67, 85 (1985) ("The decision stands simply as a remarkable judicial probe of an agency's record on a narrow question."); Howard Latin, Good Science, Bad Regulation, and Toxic Risk Assessment, 5 YALE J. ON REG. 89, 131 (1988) ("The court's opinion reflects … a fundamental misunderstanding of the limited evidence on which most risk assessments of carcinogens are based."); Richard A. Merrill, The Legal System's Response to Scientific Uncertainty: The Role of Judicial Review, 4 FUNDAMENTAL & APPLIED TOXICOLOGY S418, S425 (1984) ("The opinion's close scrutiny of an exercise that is fraught with uncertainty, but yet promises improvement in regulation of health hazards, is disconcerting."). But see Cass R. Sunstein, In Defense of the Hard Look: Judicial Activism and Administrative Law, 7 HARV. J.L. & PUB. POL'Y 51, 53 (1984) (praising the Fifth Circuit for relying on the hard look doctrine to "ensure that regulatory controls are well-founded" and to promote "private ordering").

97. 701 F.2d at 1142, 13 ELR Digest at 20850 (quoting Aqua Slide 'N' Dive, 569 F.2d at 837).

98. Although the court did not reach the industry's critiques of the assumptions that were built into the Global 79 model, it criticized two of the assumptions in dicta. The court was apparently not convinced that it was appropriate for the Agency to assume that at identical exposure levels the effective dose for rats is the same as that for humans. The court also questioned the assumption that the carcinogenicity dose-response curve was linear at low doses. 701 F.2d at 1147 n.19, 13 ELR Digest at 20850.
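The low-dose linearity assumption the court questioned can be stated in a few lines. A minimal sketch follows (in Python, with invented numbers that do not come from the Gulf South record or the Global 79 model): under a linear no-threshold approach, the excess risk observed at a high bioassay dose is extrapolated proportionally down to low environmental doses.

def lnt_risk(dose_mg_per_kg_day, slope_factor):
    """Excess lifetime cancer risk under a linear no-threshold model."""
    return slope_factor * dose_mg_per_kg_day

# Suppose a bioassay observed 10% excess tumor incidence at 5 mg/kg-day
# (hypothetical). A straight line through the origin gives a slope of
# 0.10 / 5 = 0.02 per mg/kg-day.
slope = 0.10 / 5.0
print(lnt_risk(0.001, slope))   # ~2e-05 excess risk at a 0.001 mg/kg-day dose

The court's concern was precisely that this proportionality may not hold at doses far below those actually tested.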

99. Natural Resources Defense Council v. Herrington, 768 F.2d 1355, 1391, 15 ELR Digest 20781 (D.C. Cir. 1985).

100. 115 F.3d 979, 27 ELR 21241 (D.C. Cir. 1997).

101. Id. at 993, 1000, 27 ELR at 21247, 21250 (Tier II standards have large, conservative safety margins, and EPA erroneously created a presumption that dischargers of a substance contribute to an exceedance whenever fish tissues exceed a standard).

102. 294 F.3d 113, 32 ELR 20744 (D.C. Cir. 2002).

103. Id. at 121-22, 32 ELR at 20746.

104. Id.; American Iron & Steel Inst., 115 F.3d at 993, 1000, 27 ELR at 21247, 21250.

105. 701 F.2d at 1146, 13 ELR Digest at 20850.

106. See, e.g., Environmental Defense Fund v. EPA, 598 F.2d 62, 8 ELR 20765 (D.C. Cir. 1978); Hercules, Inc. v. EPA, 598 F.2d 91, 8 ELR 20811 (D.C. Cir. 1978); Environmental Defense Fund v. Costle, 578 F.2d 337, 8 ELR 20200 (D.C. Cir. 1978); Environmental Defense Fund v. EPA, 548 F.2d 998, 7 ELR 20114 (D.C. Cir. 1976), cert. denied, 431 U.S. 925 (1977); Environmental Defense Fund v. EPA, 510 F.2d 1292, 5 ELR 20243 (D.C. Cir. 1975); Society of Plastics Indus. v. Occupational Safety & Health Admin., 509 F.2d 1301 (D.C. Cir. 1975); Synthetic Organic Chemical Mfrs. Ass'n v. Brennan, 503 F.2d 1155 (3d Cir. 1974); Environmental Defense Fund v. Ruckelshaus, 439 F.2d 584, 1 ELR 20059 (D.C. Cir. 1971).

107. 40 F.3d at 408, 25 ELR at 20166.

108. Id.

109. 83 F.3d 445, 26 ELR 21130 (D.C. Cir. 1996).

110. Id. at 452-53, 26 ELR at 21133-34.

111. 206 F.3d 1286, 30 ELR 20473 (D.C. Cir. 2000).

112. Id. at 1287-88, 30 ELR at 20473-74.

113. Id. at 1290-91, 30 ELR at 20475.

114. Leather Indus., 40 F.3d at 405, 25 ELR at 20164.

115. Id.

116. Id.

117. United Steelworkers of Am., AFL-CIO-CLC v. Marshall, 647 F.2d 1189, 1263, 10 ELR Digest 20784 (D.C. Cir. 1980).

118. 115 F.3d at 1000, 27 ELR at 21250.

119. Gulf South, 701 F.2d at 1145, 13 ELR Digest at 20850.

120. Cement Kiln Recycling Coalition v. EPA, 255 F.3d 855, 867, 31 ELR 20834, 20838 (D.C. Cir. 2001) (citations omitted).

121. Id. (citations omitted).

122. The court in United Steelworkers also rejected challenges to the data used by OSHA in setting worker protection standards for lead. The court considered each of the petitioners' technical disagreements carefully and in each case found that OSHA had adequately explained its choices and justified its models as nonarbitrary. 647 F.2d at 1259-63, 10 ELR Digest at 20784.

123. In Save Our Springs, 2002 WL 3757473, at *5 (W.D. Tex. 2002), an environmental group challenged EPA's model for estimating pollutant loading into a surface water body, contesting EPA's "decision to rely on certain data and discount others." The court did not identify the specific disputes, but rejected the challenge, finding that "this Court's role in APA cases is not to evaluate alleged improper choices among data made by an agency well-practiced in making such decisions." Id.

In Natural Resources Defense Council v. Herrington, the court also upheld the DOE's decision to rely on a "thermal integrity" study as an input to its larger economic model for determining the appropriate energy-efficiency standard for various household appliances, despite criticisms (undisclosed in the opinion) of that study. The court provided no indication of the technical grounds for the challenge or of the adequacy of the DOE's response, but instead noted that the DOE's analysis "of a technical engineering study is entitled to great deference from judges, who are hardly equipped to match the expertise of DOE's scientists," and that it simply could not hold that the DOE acted unreasonably "in concluding that the study furnished some support" for its model. 768 F.2d 1355, 1389, 15 ELR Digest 20781 (D.C. Cir. 1985).

In Central Ariz. Water Conservation Dist. v. EPA, 990 F.2d 1531, 23 ELR 20678 (9th Cir. 1993), the court held that EPA's partial reliance on the National Park Service's reanalysis of a study was not arbitrary since it had acknowledged the limitations of the study and provided multiple other grounds for determining the cause of visibility impairment at the Grand Canyon. Citing Ethyl Corp. v. EPA, 541 F.2d 1, 6 ELR 20267 (D.C. Cir. 1976), cert. denied, 426 U.S. 941 (1976), the court repeated that "the Administrator may apply his expertise to draw conclusions from suspected, but not completely substantiated, relationships between facts, from trends among facts, from theoretical projections from imperfect data, from probative preliminary data not yet certifiable as 'fact,' and the like." Id. at 1543, 23 ELR at 20684.

124. 255 F.3d 855, 31 ELR 20834 (D.C. Cir. 2001).

125. Id. at 867, 31 ELR at 20838.

126. American Iron & Steel Inst., 115 F.3d at 1006, 27 ELR at 21253.

127. Id. (citing Baltimore Gas & Elec. Co. v. Natural Resources Defense Council, 462 U.S. 87, 13 ELR 20544 (1983)).

128. Industry challenged EPA's reliance on a mercury fish study (the "Olson" study) in calculating the bioconcentration factor that the Agency's model used for methylmercury because the fish were exposed to mercury through their food as well as through the water column (the exposure addressed by the model). Id. The Agency acknowledged this problem in response to a comment, but determined that it "was unlikely to have affected the results … because 'the food that is added is unlikely to absorb a substantial amount of mercury before it is eaten.'" Id. After discussing the great deference owed the Agency, the court concluded that "the agency considered the very criticism that petitioners now raise before this court. It gave a rational explanation for why it used the study despite its supposed flaw. Nothing more is required."
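The bioconcentration factor (BCF) at issue reduces to a simple ratio, which makes the petitioners' objection easy to state. A minimal sketch (with hypothetical numbers, not those of the Olson study): if fish also absorb mercury from food, the measured tissue concentration, and hence the computed BCF, overstates uptake from the water column alone.

c_fish = 5000.0    # mercury in fish tissue, ug/kg (hypothetical)
c_water = 1.0      # mercury in the water column, ug/L (hypothetical)

bcf = c_fish / c_water    # bioconcentration factor, in L/kg
print(f"BCF = {bcf:,.0f} L/kg")
# If part of c_fish came from dietary uptake, this ratio is biased upward
# relative to waterborne exposure alone -- the flaw EPA deemed negligible.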

129. EPA had agreed to review its PCB standard, but in the process also conceded, in response to the industry's challenge, that a newer study should be used to determine the cancer potency factor for PCBs. This concession mooted the challenge. Id. at 1008, 27 ELR at 21254.

130. Id. at 1007-08, 27 ELR at 21253-54.

131. 990 F.2d 1531, 23 ELR 20678 (9th Cir. 1993).

132. Id. at 1544, 23 ELR at 20685 (citation omitted).

133. American Iron & Steel Inst., 115 F.3d at 1007, 27 ELR at 21254.

134. Id.

135. Id.

136. 768 F.2d 1355, 1391, 15 ELR Digest 20781 (D.C. Cir. 1985).

137. Id. at 1409.

138. Id. at 1410.

139. 206 F.3d at 1290, 30 ELR at 20475.

140. Id.

141. 4 F. Supp. 2d at 458-60, 28 ELR at 21455-56.

142. 115 F.3d at 992-93, 27 ELR at 21246. Specifically, the court said that

given that the CWA requires that permit limits be set for any pollutant that may contribute to a violation of a narrative criteria [citations omitted], the best scientific approach that determines a value is permissible. The proper question is not whether using Tier II produces results as good as Tier I. What is the alternative when one cannot use Tier I? AISI lists other options, but it has not even attempted to convince us that these are superior to EPA's methodology.

Id. at 993, 27 ELR at 21246. See also supra section 1.c. for general working assumptions in models.

143. 705 F.2d at 506, 13 ELR at 20490.

144. Id. at 527, 13 ELR at 20500.

145. 924 F. Supp. 1193, 26 ELR 21453 (D.D.C. 1996), aff'd in part & rev'd in part on other grounds, Troy Corp. v. Browner, 120 F.3d 277, 27 ELR 21548 (D.C. Cir. 1997).

146. 924 F. Supp. at 1205-07, 26 ELR at 21458 (EPA's decision to list n-hexane was not arbitrary, even though no single study was conclusive and others were weak, based on a weight-of-evidence judgment, explained in part in EPA's IRIS database).

147. Id. at 1207-08, 26 ELR at 21458-59 (EPA might have been more thorough, but it did appear to base its decision on the weight-of-evidence in determining to list IPBC, giving greater weight to two adverse rat studies).

148. Id. at 1210-11, 26 ELR at 21461 ("While it would be desirable to know more about the Russian studies, EPA has provided a reasonable explanation for why they are sufficient to meet the statutory standard and justify listing 2, 6 DMP on the TRI.").

149. Id. at 1213-16, 26 ELR at 21462-63 (the evidence suggested a hazard but was equivocal in delineating the precise substances or hazards, but in this setting EPA's decision to list the two sets of chemicals was not arbitrary).

150. Id. at 1208-10, 26 ELR at 21459-60 (EPA provided an adequate justification for listing three chemicals in the diisocyanates category based on their structural similarities to other, toxic diisocyanates).

151. Id. at 1210, 26 ELR at 21460.

152. Troy Corp. v. Browner, 120 F.3d 277, 291, 27 ELR 21548, 21554 (D.C. Cir. 1997).

153. 701 F.2d at 1146, 13 ELR Digest at 20850.

154. 115 F.3d at 1001, 27 ELR at 21250 (finding that EPA's decision to set projected effluent quality measures at the upper bound of a 95% confidence interval at the 95th percentile of expected effluent concentration was not unreasonably overcautious since it was consistent with the statute and the Agency's past practices). See also National Wildlife Fed'n v. EPA, 286 F.3d 554, 572-73, 32 ELR 20607, 20614 (D.C. Cir. 2002) (EPA had carefully explained its reasons for adopting a lower monthly confidence interval and provided flexibility in the calculation).
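The layered conservatism the court upheld, an upper confidence bound placed on top of an upper percentile, can be made concrete. A minimal sketch follows (in Python, with simulated stand-in data; EPA's actual procedure differs in its details): it estimates the 95th percentile of effluent concentrations and then a bootstrap upper bound of a 95% confidence interval around that percentile.

import numpy as np

rng = np.random.default_rng(1)
# Stand-in effluent concentration data, mg/L (simulated, not real data).
sample = rng.lognormal(mean=1.0, sigma=0.6, size=60)

point = np.percentile(sample, 95)   # 95th-percentile point estimate

# Bootstrap: re-estimate the 95th percentile on resampled data sets.
boot = np.array([
    np.percentile(rng.choice(sample, size=sample.size, replace=True), 95)
    for _ in range(10_000)
])
upper = np.percentile(boot, 97.5)   # upper end of a two-sided 95% CI

print(f"95th percentile: {point:.2f} mg/L; 95% CI upper bound: {upper:.2f} mg/L")

Setting the regulatory value at the upper bound rather than the point estimate builds in the cushion the petitioners attacked as overcautious.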

155. 547 F.2d 123, 7 ELR 20096 (1st Cir. 1976).

156. Id. at 128, 7 ELR at 20099.

157. 4 F. Supp. 2d at 462, 28 ELR at 21457. The court repeatedly chastised the Agency for failing to

disclose in the record or in the Assessment: its inability to demonstrate a statistically significant relationship under normal methodology, the reasoning behind adopting a one-tailed test, or that only after adjusting the Agency's methodology could a weak relative risk be demonstrated. Instead of disclosing information, the Agency withheld significant portions of its findings and reasoning in striving to confirm its a priori hypothesis.

Id. (emphasis in original).
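The statistical maneuver the court faulted, switching from a two-tailed to a one-tailed test, is easy to illustrate. A minimal sketch follows (in Python, with invented figures that do not reproduce the study at issue): a weak relative risk can clear the conventional 0.05 significance threshold under a one-tailed test while failing it under a two-tailed test.

import math

def norm_cdf(z):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

rr = 1.19          # weak relative risk (hypothetical)
se_log_rr = 0.10   # standard error of ln(RR) (hypothetical)
z = math.log(rr) / se_log_rr

p_two_tailed = 2.0 * (1.0 - norm_cdf(abs(z)))   # tests RR != 1
p_one_tailed = 1.0 - norm_cdf(z)                # tests RR > 1 only

print(f"z = {z:.2f}; two-tailed p = {p_two_tailed:.3f}; one-tailed p = {p_one_tailed:.3f}")
# With these figures, z ~ 1.74: p ~ 0.082 two-tailed but ~ 0.041 one-tailed,
# i.e., "significant" at the 0.05 level only under the one-tailed test.

The court's complaint was not the one-tailed test as such, but the Agency's failure to disclose that significance depended on it.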

158. See Natural Resources Defense Council v. Herrington, 768 F.2d 1355, 1390, 15 ELR Digest 20781 (D.C. Cir. 1985), observing that

commenters pointed out these shortcomings [in DOE's model to predict energy savings] at length, but they did not show that any other method of predicting future market distortion—including the assumption that it would remain stable—would produce demonstrably more accurate predictions…. DOE responsibly addressed alleged defects in the model by changing the model or explaining why defects were both extremely difficult to fix and of relatively minor moment to the rulemaking.

See also Mision Indus., 547 F.2d at 128, 7 ELR at 20099 (EPA adequately justified its decision not to attempt to calibrate its air quality model because of the incomplete data available).

159. 784 F.2d at 224, 16 ELR at 20447, on reh'g, 798 F.2d at 880, 16 ELR at 20870.

160. 784 F.2d at 229, 16 ELR at 20450.

161. Id. at 230, 16 ELR at 20450.

162. Id. at 230-31, 16 ELR at 20450, aff'd, 798 F.2d at 882, 16 ELR at 20870 (reaffirming that "on the present record the use of the CRSTER model without validation to forecast SO2 diffusion at the two plants adjacent to Lake Erie is arbitrary and capricious").

163. Id. at 230, 16 ELR at 20450.

164. 798 F.2d at 882, 16 ELR at 20870. The court also appeared frustrated with EPA's noncompliance with a prior, unrelated order directing EPA to monitor sulfur dioxide emissions and ambient air quality. 784 F.2d at 229, 16 ELR at 20450. It is possible that the court was frustrated that EPA failed to conduct baseline monitoring and then resisted adjusting its model to the unique circumstances presented by the two power plants, in part because it lacked the necessary evidence.

165. 798 F.2d at 882, 16 ELR at 20870.

166. Chlorine Chemistry Council v. EPA, 206 F.3d 1286, 1290-91, 30 ELR 20473, 20475 (D.C. Cir. 2000).

167. Flue-Cured Tobacco Coop. Stabilization Corp. v. EPA, 4 F. Supp. 2d 435, 444-50, 28 ELR 21445, 21449-50 (M.D.N.C. 1998).

168. Id. at 449, 28 ELR at 21451.

169. Flue-Cured Tobacco Coop. Stabilization Corp. v. EPA, 313 F.3d 852, 860, 33 ELR 20113 (4th Cir. 2002).

170. 206 F.3d at 1286, 30 ELR at 20473.

171. In International Harvester Co. v. Ruckelshaus, 478 F.2d 615, 3 ELR 20133 (D.C. Cir. 1973), the court rejected EPA's conclusion that emissions reduction technology would be available for certain vehicles by 1975, despite a mandated National Academy of Sciences report reaching the contrary conclusion. The court observed that

while … EPA was not necessarily bound by NAS's approach, particularly as to matters interlaced with policy and legal aspects, we do not think that it was contemplated that EPA could alter the conclusion of NAS by revising the NAS assumptions, or injecting new ones, unless it states its reasons for finding reliability—possibly by challenging the NAS approach in terms of later-acquired research and experience.

Id. at 649, 3 ELR at 20148.

172. Id.

173. Small Refiner Lead Phase-Down Task Force v. EPA, 705 F.2d 506, 526, 13 ELR 20490, 20500 (D.C. Cir. 1983).

174. "While a more clearly and fully articulated policy would be preferable, the Court cannot conclude that EPA was unreasonable in exercising its discretion by continuing to exclude consideration of exposure when chemicals are high to moderately-high toxicity." National Oilseed, 924 F. Supp. at 1203, 26 ELR at 21456, aff'd in part & reversed in part, Troy Corp. v. Browner, 120 F.3d at 287, 27 ELR at 21552 (agreeing).

175. 705 F.2d at 526-27, 13 ELR at 20500. See also the discussion of Flue-Cured Tobacco in section 2.b supra.

176. 115 F.3d at 1005, 27 ELR at 21253.

177. See, e.g., id. ("In this case, the record demonstrates that the agency considered the 'relevant factors' raised by the suggested alternatives" even if it failed to provide a complete explanation of why it did not adopt specific alternative models.).

178. Appalachian Power Co. v. EPA (III), 251 F.3d 1026, 1037, 31 ELR 20670, 20674 (D.C. Cir. 2001) ("The EPA has sufficient discretion to use the IPM model in the first instance even if states believe that some other state-specific modeling is more accurate. When it comes to these sorts of technical matters, the EPA is entitled to great deference."); Connecticut Fund for the Env't v. EPA, 696 F.2d 169, 178, 13 ELR 20146, 20151 (2d Cir. 1982).

179. 705 F.2d at 531, 13 ELR at 20502.

180. Id. at 534, 13 ELR at 20504 (citations omitted) (also defending its opinion by observing that "this is not a case where EPA failed to give any reasons or gave unsupported reasons for its belief that small refiner lead use threatens health. Rather, EPA merely failed to articulate its reasons in any detail, forcing us to dig into the record to understand them fully.").

181. See, e.g., Part III.B.1.c., supra.

182. 467 U.S. 837, 14 ELR 20507 (1984).

183. "We do not think that Congress intended to create private rights of actions to challenge the inevitable objectionable impressions created whenever controversial research by a federal agency is published." Flue-Cured Tobacco Coop. Stabilization Corp. v. EPA, 313 F.3d 852, 861, 33 ELR 20113 (4th Cir. 2002).

184. The Data Quality Act, Section 515 of the Treasury and General Government Appropriations Act for Fiscal Year 2001, Pub. L. No. 106-554.

185. The appropriate quality standards for information depend on the statutory circumstances. A statute that seeks protective regulation will be more tolerant of limited data than a statute that depends on the Agency to support a "reasonable" regulation with "substantial evidence." EPA has acknowledged this contextual feature in determining the appropriate quality endpoint in its own DQA regulations.

186. Several top administrative law scholars have similarly opined that the new requirements under the DQA might add little to the existing judicial review of agency science. See, e.g., NATIONAL ACADEMY OF SCIENCES, DATA QUALITY TRANSCRIPT, Day 1, at http://www7.nationalacademies.org/stl/4-21-02_Transcript.doc, at 225-26. Moderator Prof. Richard Merrill observes that

at one level it seems to me that for an agency to adopt the stance of the guidelines in its evaluation of comments is not really fundamentally different from what agencies are now expected to do under the Administrative Procedure Act, that is, to examine the probative weight and the reliability of the information that is submitted by any commenter.

Prof. Richard J. Pierce Jr. agrees with this conclusion.

187. U.S. EPA, GUIDELINES FOR ENSURING AND MAXIMIZING THE QUALITY, OBJECTIVITY, UTILITY, AND INTEGRITY OF INFORMATION DISSEMINATED BY THE ENVIRONMENTAL PROTECTION AGENCY 21-27 (2002).

188. See Figure 2. For the requests for correction filed to date, see http://www.epa.gov/oeiinter/qualityguidelines/af_req_correction_sub.htm.

189. See Figure 2.

190. Some third-party models might be exempt from the DQA, however. The U.S. Office of Management and Budget (OMB) appears to have exempted proprietary models from the DQA, which could preclude EPA from ensuring that this category of models is reliable. See U.S. OMB, Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by Federal Agencies; Republication, 67 Fed. Reg. 8452, 8460 § V.3.b.ii.B.i. (Feb. 22, 2002) [hereinafter U.S. OMB, Data Quality Act Guidelines]. Models prepared by third parties and submitted as "public filings," i.e., TRI data, or used in "adjudications," i.e., for licenses and permits, are also technically exempt from the DQA under OMB's guidelines. Id. § V.8. Depending on how EPA interprets these terms, OMB's exemptions may leave a rather large category of privately prepared models that do not need to meet DQA standards.

191. See, e.g., Kansas Corn Growers Association, the Triazine Network, and the Center for Regulatory Effectiveness, Request for Correction of Information Contained in the Atrazine Environmental Risk Assessment, Docket No. OPP-34237A (Nov. 25, 2002); Chemical Products Division, Request for Correction of the IRIS Barium substance file (Oct. 29, 2002); Letter from Paul Gilman, Assistant Administrator, EPA, to Jerry Cook, Chemical Products Division (Jan. 30, 2003); Competitive Enterprise Institute, Petitions to Cease Dissemination of the National Assessment on Climate Change (Feb. 20, 2003), all available at http://www.epa.gov/oeiinter/qualityguidelines/af_req_correction_sub.htm.

192. "Information" means any communication or representation of knowledge such as facts or data, in any medium or form, including textual, numerical, graphic, cartographic, narrative, or audiovisual forms. This definition includes information that an agency disseminates from a web page, but does not include the provision of hyperlinks to information that others disseminate. This definition does not include opinions, where the agency's presentation makes it clear that what is being offered is someone's opinion rather than fact or the agency's views. U.S. OMB, Data Quality Act Guidelines, supra note 190, at 8460.

193. See Figure 1.

194. See sections C.1. through C.3. at Part IV, supra.

195. See also Case, supra note 6, at 278-81 (describing over 20 years ago this same problem of models "masking" uncertainties with the appearance of quantitative precision).

