19 ELR 10486 | Environmental Law Reporter | copyright © 1989 | All rights reserved
KEYNOTE ADDRESS
The Honorable Stephen F. Williams
Judge Williams serves on the U.S. Court of Appeals for the D.C. Circuit.
[19 ELR 10486]
My approach today will be to discuss risk and risk regulation in a general way. First, I want to discard some red herrings that seem to play far too great a role in the public's discussion of health risks: jobs and profits.
So long as regulation proceeds at a moderate pace, the likelihood of a devastating effect on jobs seems slight. Instead, risk regulation will cause a shift in jobs. Some enterprises will go out of business or have to cut back on production, involving a reduction in jobs, while others will appear or grow, creating more jobs. This sort of shift, of course, is similar to what technological innovation brings about. Just as we accept such burdens in the context of technological innovation, they seem acceptable here so long as the regulation in question is acceptable on other grounds and the changes are not too abrupt.
This is not to say that particular individuals will not suffer permanent losses. If the making of a specific product requires unique skills, and risk regulation raises the price and reduces sales of the product, some workers with those skills will lose jobs; because the skills are unique (not demanded elsewhere), those workers will be unable to secure as good a job unless they acquire new skills.
The same process occurs with profits. We have received briefs in my court alleging that the Food and Drug Administration has in a particular case favored "company profits over public health." Certainly, the effect on short-term profits — i.e., the return on specific capital assets — provides the motivation for many litigants. (This parallels the effect of risk regulation on human capital.) But it seems to me quite improbable that in the long run risk regulation will seriously affect the rate of return on capital.
Putting discussion of jobs and profits to one side leads one to a more realistic view of risk regulation. The issue that remains is compliance costs — they are real. The tooth fairy does not finance the various costs that must be incurred to diminish risk — additional labor, additional investment, and acceptance of products that are less satisfactory (except in the risk dimension). So at least one of the questions is how far we should go in incurring compliance costs.
Compliance costs fall primarily on labor and capital not as "labor" and "capital," but as consumers — either by paying more, or, again, by accepting products that are less satisfactory except in affording reduced risk.
In one dimension, of course, it is clear that health must come first. A society where everyone is sick, or in the more extreme case, dead, is no society at all, and worrying about other things is irrelevant.
But a key question is, "How much health?" More health is not always better. To illustrate: As you age, you have an opportunity to take a great variety of tests to ensure that you don't have this, that, or the other disease. But at some point, the increment in health that you expect to gain, in terms of discovering and perhaps curing some incredibly improbable disease, simply is not worth the pain, the time, the humiliation, or the aggravation of taking the test.
As a more public example, suppose a child with AIDS could transmit the disease by biting another child. Most people would not say that it follows automatically that the child with AIDS should be excluded from school. There are some risks that we are willing to take, even though they do jeopardize health. In both individual and collective choices, we often do not insist on every possible increment of health.
But for the sake of argument let me assume that as a society we wish to pursue health with little or no regard for other things. Would it follow that we should automatically restrict every innovation until we are assured that it poses no serious health risks?
Here are some reasons why we might resist that view: (1) the health component of "non-health" things, or "wealth" (using the term as a surrogate for all "non-health" values); (2) the direct health contributions of dangerous things themselves; and (3) dangerous things that are less dangerous than any realistic substitute. All three counsel against taking the view that nothing should be allowed until it is exonerated.
The first factor, the health component of wealth, is illustrated by Aaron Wildavsky's happy maxim, "richer is safer."1 This is a proposition so obvious that we tend to overlook it. It is perfectly plain that the health of the citizenry is greater in the richer countries. Is this simply a happy coincidence? Clearly not.
A society's readiness to spend money on individual and collective medical care increases as wealth increases. But the contributions of wealth outside of medicine may be more important. One is to make better hygiene possible. Another is that leisure promotes individual health. Also, though it may hurt those of us in jobs that require no heavy lifting to admit it, the substitution of relatively pleasant jobs for unpleasant ones seems healthy.
Wealth also may contribute to better morale. Norman Cousins wrote a book about how he cured himself of a supposedly incurable disease by taking out videotapes of the Marx Brothers and other comedies and essentially laughing himself to health. Grinding poverty makes the laughs come more slowly; upward shifts in wealth, even from a base well above grinding poverty, are likely to improve morale.
Wealth also contributes to health by increasing the physical resources of a society for handling disaster. When floods occur in Bangladesh, it is not easy to get food and medical help in, or to get people out. When floods occur on the Mississippi, those things are far easier. Thus wealth may not only avert hazards but help control them once they've struck.
The second health reason for allowing hazardous products or activities is that they may contribute to health — in a way that outweighs the harm they do. The most familiar items in this category are drugs. The introduction of a drug may cost lives through its side effects, but delaying its availability may cost lives as well. Of course right now the issue has erupted into public glare because of [19 ELR 10487] delays in FDA approval of AZT. But nothing I have seen suggests that the case is atypical.
A second illustration is foods. Broccoli, cabbage, cauliflower and mustard — to take only a few examples — all contain a carcinogenic element. Of course they may not be your favorites. But I suspect that if you cut from your diet anything with the slightest carcinogen, you'd find it pretty lean.
The third health reason for allowing hazards is that sometimes the substitute for the dangerous thing is itself more dangerous. The Delaney Clause of the Federal Food, Drug and Cosmetic Act2 has historically been understood, and was recently construed, as absolutely banning any carcinogen in color additives. The curious result is that cosmetics manufacturers, when barred from using a mildly carcinogenic color additive, may substitute a non-carcinogen that poses more serious health risks than the rejected carcinogen.
Another puzzling example is paper diapers. The chemicals in the plastic of paper diapers pose some risks. The conventional substitute is the old, reliable cloth diaper. But it sadly turns out that cloth diapers are also hazardous: there are drownings every year associated with the buckets parents use for cloth diapers. Banning paper diapers might actually increase the risks from diapers.
I certainly do not want to be interpreted as arguing that health regulation should be abandoned. All I am saying is that extremism in the pursuit of health may be unhealthy.
At least three arguments can be made against this rather cautious view. The first revolves around the idea of the risk of total loss. Suppose one regards elimination of the human species as an infinite loss, which seems fair enough. As a mathematical matter, if one multiplies infinity by any positive number, however small, the result (by convention) is infinity. Thus one can argue that any enterprise that carries even a slight risk of human extinction should be banned on that ground alone.
Consider plutonium. One can easily depict a scenario under which plutonium causes human extinction. Applying the "total loss" argument would lead to complete abandonment of any activity involving its use. But the difficulty with the argument is that it can be turned around — one can also picture circumstances under which the non-use of plutonium leads to human extinction. If, for example, the Western democracies refrain from the use of nuclear power, energy scarcities might follow, aggravating matters in the Persian Gulf and precipitating nuclear war.3 Of course, this hypothetical assumes that we have not eliminated all uses of plutonium, so it may be unfair.
But let me illustrate with the subject of this conference, biotechnology. Assume that some experiments could get out of hand, and that some uses of biotechnological discoveries could bring about human extinction. If one were convinced that there was even a minute chance of this scenario happening, would it follow that further research and development in biotechnology should be barred?
Again, a counter-scenario exists. Suppose there were a disease, let's call it AIDS-plus, similar to AIDS but more virulent and more easily spread — and, indeed, so virulent that it is sure to wipe out the species — except for one thing: a biotechnology cure. On these facts, failure to pursue biotechnology leads to the extinction of human life.
One can run worst-case scenarios both ways; for every worst-case scenario arising from action, others arise from inaction. I am not suggesting that one eliminate consideration of worst-case scenarios. It is simply that one must evaluate probabilities rather than having a flat rule that any risk of infinite loss justifies a ban.
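The arithmetic can be made explicit. In the sketch below, p and q are labels of my own for the extinction probabilities attached to action and to inaction; nothing turns on their values beyond their being positive. Under the convention just described,

\[
E[\text{loss of action}] = p \cdot \infty = \infty
\qquad\text{and}\qquad
E[\text{loss of inaction}] = q \cdot \infty = \infty .
\]

The flat rule condemns both courses equally and so decides nothing; only a comparison of the probabilities p and q themselves can break the tie.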
A second argument for caution is the understandable (and admirable) fear of inflicting involuntary risks upon people. There is a difference between the risks that people take on themselves and those they inflict upon others. Smoking is the best example of this concept. Most people are fairly willing to let others smoke themselves to oblivion, if that is their choice. On the other hand, the same people often feel quite powerfully that they should be protected from those who may inflict smoke on them.
The trouble with the argument is that — as lawyers have found of many arguments — it proves too much. Consider air travel. Most of you here arrived by air, and in so doing, you inflicted some risk on the people who were not traveling at all. Planes can collide in the sky and crash to the ground, injuring or killing people who were in no way involved in air travel. Car crashes may kill essentially innocent pedestrian bystanders. I think we accept these partly because they are relatively small-scale, partly because most are somewhat reciprocal: even if I never fly or drive, I surely consume products whose production has benefitted from others' use of aviation, cars and trucks.
A third special concern is the problem of risks that fall lopsidedly on specific groups that may have gotten a raw deal in all kinds of other respects. For example, to load the poor with disproportionate risk, on top of every other affliction, may seem to violate elementary fairness.
At least two responses seem possible: First, if such groups also benefit disproportionately from risk-taking, then the argument fails. That may or may not be so. For those of the poor who are only marginally involved in the economy, the rising tides metaphor doesn't apply. Second, even as to persons who may not benefit from risk-taking by the broader society, it is not clear that a distributional problem should dominate the solution of health regulatory issues. Where there are deprivations that could be cured by some sensible policy change, it seems better to make the change than to use risk regulatory policy as a device for producing what I suspect would be rather mild mitigations of harm. Even if for some reason a "sensible" change cannot be made, I'm not sure that risk regulation should be held hostage to that feature of the policy's defects.
What follows from the point that the effort to stamp out health risks has health costs? I am not arguing that cost/benefit analysis should control risk regulation; that is a quite separate issue. Nor am I suggesting a de minimis approach to risk regulation. To put it affirmatively, I would suggest that Lester Lave's idea that some choices are "risk-risk" choices4 holds for virtually any risk decision.
To put this in programmatic terms, one option would be a "health opportunity cost impact statement." Such a statement would require every agency that regulates health to evaluate the health costs of restricting the risks associated with a given decision. Agencies would not impose restrictions whose health opportunity costs exceed their health benefits. An [19 ELR 10488] essential part of such a program would be that the impact statement be exempt from judicial review.
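Stated as an inequality (the symbols are my own shorthand, not anything the statement itself would need to contain), the rule is simply that a restriction survives only if the health it gains exceeds the health it forgoes:

\[
\Delta H_{\text{gained}} > \Delta H_{\text{forgone}} ,
\]

where the right-hand side gathers the losses traced above: the health component of forgone wealth, delayed beneficial products, and riskier substitutes.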
There is also a procedural side, where my view is in apparent conflict with the maxim that "an ounce of prevention is worth a pound of cure." This is certainly a wise principle, but it assumes that the prevention costs an ounce and the cure a pound. It does not follow that we should spend a trillion ounces on prevention just to avert a million pounds of cure.
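Since the maxim is quantitative at heart, its limits can be put in figures; these merely restate the hypothetical numbers above. A pound is sixteen ounces, so an ounce of prevention against a pound of cure passes the test

\[
C_{\text{prevention}} < C_{\text{cure averted}}
\]

by a factor of sixteen. But a trillion ounces of prevention comes to \(10^{12}/16 = 6.25 \times 10^{10}\) pounds, set against only \(10^{6}\) pounds of cure averted: the trade fails by a factor of more than sixty thousand.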
Consider two examples, one of personal interest and one from biotechnology. The example that especially interests me is the greenhouse effect. If we are to believe many distinguished scientists, the expected warming may flood large, populous areas of the globe near sea level and drastically reduce the world food supply.
Does it follow that we should take immediate action against the various possible causes of the greenhouse effect? There are at least some reasons for doubt. In the first place, there is much uncertainty as to which of its various causes are most important. It would be difficult to target our efforts at this point.
In addition, some of the factors I have noted, particularly Aaron Wildavsky's proposition that "richer is safer," come into play. To take immediate measures against the likely causes, we would have to forgo economic and technological growth. Gradual advances in technology and in other fields should provide more resources, both intellectual and physical, to address the catastrophe as it develops.
Recent reports on the greenhouse effect suggest it is far more perilous than the majority of problems currently subject to intensive scrutiny and regulation. So one asks why, in fact, we do not take drastic action here. One answer would be that our leaders are taking the view I just suggested. Not likely. Another is that the inaction is an example of institutional failure. Vast amounts of capital and labor are committed to the forces said to be generating the greenhouse effect. Because of that institutional blockage, perhaps, our political institutions are not taking steps that we otherwise would. But even if that is the reason for our inaction in this area, that does not mean it isn't the right approach — the wrong reason for (possibly) the right action.
Turning to biotechnology, there is nothing remotely similar to the investment of time, intellectual energy, and physical capital that there is in the processes producing the greenhouse effect. So there is a distinct possibility that, without institutional momentum, biotechnology risks will be resolved in the way I have questioned here: the field may be stifled because the interests necessary to its progress — necessary to offset a working presumption in favor of prevention, even if the prevention costs a zillion pounds and may save only half a zillion pounds of cure — may be absent.
1. See A. WILDAVSKY, SEARCHING FOR SAFETY (1988), and Wildavsky, Richer Is Safer, 60 THE PUBLIC INTEREST 23 (Summer 1980).
2. § 706(b)(5)(B) of the Act, 21 U.S.C. § 376(b)(5)(B).
3. See A. WILDAVSKY, SEARCHING FOR SAFETY, supra note 1, at 52.
4. Lave, Health and Safety Risk Analyses: Information for Better Decisions, 236 SCIENCE 291 (April 17, 1987).