16 ELR 10226 | Environmental Law Reporter | copyright © 1986 | All rights reserved
Summary and Analysis

Richard Wilson

Professor Wilson is Mallinckrodt Professor and Chairman of the Department of Physics at Harvard University. He is also a member of the National Academy of Sciences Energy Engineering Board.
[16 ELR 10226]
I think we should start by reminding ourselves why we are at a meeting of this sort. It is not because we are radicals trying to attack the capitalist system with its dangerous chemical companies, nor is it because we are trying to provide the chemical companies with ammunition to defend themselves against the radicals. We are here because we want to play our part in reducing the risk to life, in improving the expectation of life, and, more importantly, in improving the quality of life by making sure that the environment is safe.
I would like to make two points that were not made today. First, the major improvement in life expectancy over the past 100 years has resulted from the almost complete elimination of infectious diseases as a cause of death. This constitutes a significant change in one of the risks facing us. We used to cope with infectious diseases by means of a zero-risk approach. We have eliminated the last bacterium of smallpox. This means that there is almost zero risk of infection by smallpox. This was not very expensive, compared with most risk reduction measures.
Second, the elimination of infectious diseases as a cause of death came about through the application of general principles, not through specific risk analyses. No one was sure how to eliminate the diseases, thus no one could say, "if you do this, it will have this particular effect." But engineers insisted on good sanitation, good drainage, and pure water, and as a result infectious diseases are no longer major causes of death. Similarly, when the Environmental Protection Agency (EPA) restricted the use of lead in gasoline, no one was quite sure what the effect would be, but the general principle of reducing lead seemed a good one.
While the subject of this conference is risk management, not risk assessment, Dr. Anderson reminded us that both activities, as well as the gathering of scientific data, are part of the process of coping with risk. A hundred years ago, as Chris Whipple pointed out, one person performed all these functions. There was no need for a set of written rules to facilitate communication among different groups: safety standards were set by an individual or by a professional society. Today, the task of coping with risk is fragmented among different groups, and these groups must be able to communicate with one another.
Scientists, as Dr. Silbergeld mentioned, insist upon flexibility in their research. Once the scientist starts thinking about a problem, he or she will want to contemplate all of its possible ramifications. I think it is very important to protect the scientist's right to free inquiry in the laboratory. Experience tells us that when we infringe upon this right, or insist upon a particular conclusion, scientific inquiry, and thus progress, become blocked.
There is a difference in approach between a research scientist and a risk assessor. If we ask a scientist about the risks involved in a particular activity, the scientist will often respond by requesting an extension of time or money. A risk assessor, on the other hand, cannot dodge this question. He must state what the risks are, according to the best information currently available. He does not have the right to say simply, "I don't know." If he does say so, the risk assessor is obligated to explain the range of his ignorance. Therein lies an important distinction between the scientist and the risk assessor.
The overlap between risk assessment and risk management has been stressed a good deal during this conference. The overlap arises primarily because the risk assessor does not convey to the risk manager all of the information necessary to perform his management function. Perhaps the risk assessor does not know what the risk manager needs. Quite often, he submits a number and perhaps the uncertainty of that number. The risk assessor should go a step further: he should state how and why he arrived at that number, as well as the nature of his assumptions.
I think it important to stress that comparisons are necessary to put the risk assessor's number into perspective. We have heard quite a lot about one-in-a-million risks or risks of 10^-6. How do we know what these figures mean? Even those of us who are mathematically inclined can only understand these figures by comparing them with other risks we face. Whenever possible, a risk assessor should perform this comparison for us. Otherwise, he is not doing his job properly. I will return to this point later.
Dr. Whipple mentioned several different ways in which we look at risk. For instance, in coal mining we can look at the risk per worker or the risk per ton of coal mined. Let me examine how this difference operates. If one plots the risk to the coal miner per ton of coal mined from the turn of the century until the present, one finds that this risk has declined steadily and rapidly. The number of workers in the mines, however, has decreased, and the efficiency of coal mining has increased. For the last 30 years, the risk per miner has therefore stayed almost constant.
Which, then, is the correct approach? The President of the United States, concerned with the total health of the country, might say that the right approach is to measure the risk per unit of coal. Alternatively, the President of the United Mine Workers will be more concerned with the risk per worker.
Furthermore, how does one manage the risk? It may be too difficult to reduce the risk per ton of coal without undue cost. If, however, one considers the risk per worker, reduction is fairly simple: one need only hire twice as many workers and have each one work alternate weeks. The risk per worker will then be cut in half without altering the risk per ton of coal.
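The arithmetic behind this can be sketched in a few lines. The deaths, tonnage, and workforce figures below are invented round numbers, purely to illustrate the two denominators:

```python
# Toy calculation (invented numbers): doubling the workforce, with each
# worker on alternate weeks, halves the risk per worker while leaving
# the risk per ton of coal unchanged.

def risks(deaths_per_year, tons_mined, n_workers):
    """Return (risk per worker per year, risk per ton mined)."""
    return deaths_per_year / n_workers, deaths_per_year / tons_mined

# Baseline: a hypothetical mine.
deaths, tons, workers = 10, 1_000_000, 2_000
per_worker, per_ton = risks(deaths, tons, workers)

# Hire twice as many workers: same tonnage, same total deaths,
# spread over twice as many people.
per_worker_2, per_ton_2 = risks(deaths, tons, 2 * workers)

assert per_worker_2 == per_worker / 2   # risk per worker is halved
assert per_ton_2 == per_ton             # risk per ton is unchanged
```

The sketch makes the policy point concrete: the same intervention looks like real progress on one measure and no progress at all on the other.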
David Doniger, among others, complained that risk assessments are complicated. One must recognize, however, that in general, when the risk is large, the risk assessment is usually too simple. For example, we know the risk of cigarette smoking, and we know the risk of automobile travel. Both risks are quite large. We know that 50,000 people die in automobile travel each year and that no matter what we do, the figure next year will probably be between 45,000 and 55,000.
[16 ELR 10227]
Epidemiology only takes cognizance of risks when their effect is great enough to be noticed. The central concern of risk assessment is to address situations where the risk is too small to be noticed by epidemiology and must be studied by indirect means. The complications associated with risk assessment arise precisely because the risks studied are so small.
The purpose of performing a risk assessment is not merely to arrive at an end-number but also to understand the process by which this number is reached. In assessing the risks of nuclear power plants, for example, an understanding of the stages of the risk assessment might enable one to make nuclear power safer; one will have contemplated in advance all the dangers that might arise.
The uncertainties of risk estimation are at heart as important as the value of the risk, and it is vital that these uncertainties be understood by risk managers. When an assessor presents his risk assessment to a manager, should he present a conservative value, what a statistician would call the upper 98th percentile of a distribution? Or the most likely value, whatever that may mean? The "maximum likelihood" value might be zero while the data remain poor; the chemical might be potent enough to kill one percent of those exposed, or might even be good for you. Different decisions might require different degrees of caution.
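As an illustration of why the choice matters, here is a minimal sketch assuming the assessor's uncertainty is lognormal, with an invented median risk of 10^-6 and an invented factor-of-10 geometric standard deviation. Under those assumptions the "most likely" summary and the conservative upper 98th percentile differ by roughly a factor of 100:

```python
# Sketch (invented numbers): the same assessment summarized two ways --
# a best (median) estimate and a conservative upper percentile.
import math
from statistics import NormalDist

median = 1e-6          # assumed median lifetime risk
gsd = 10.0             # assumed geometric standard deviation (factor of 10)

mu, sigma = math.log(median), math.log(gsd)
z98 = NormalDist().inv_cdf(0.98)            # about 2.05 standard deviations

best_estimate = median                      # the "most likely"-style value
conservative = math.exp(mu + z98 * sigma)   # upper 98th percentile

print(f"median:          {best_estimate:.1e}")
print(f"98th percentile: {conservative:.1e}")
```

A manager handed only one of these two numbers is, in effect, having the degree of caution chosen for him; handing over the whole distribution leaves that choice where it belongs.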
In this sense, risk management and risk assessment are inextricably linked. It is my view that the assessor should provide a whole risk distribution, together with caveats about how the risk was derived. Only then has the manager all the information.
If a company presents an application for a new chemical, prudent public policy will probably require a conservative approach, and that is the approach EPA usually takes. What factors, for instance, should be considered in a decision about whether to ban a fungicide? Should we compare the product with its alternative? Perhaps we should contemplate what would be the effect of having no fungicide at all. The latter might lead to contamination by highly dangerous mycotoxins which could pose risks far greater than the fungicide. A complete analysis oftentimes requires attributing risks to chemicals which have not been fully tested, which makes people in the scientific community quite uncomfortable. Ignorance may be bliss but it is not very protective of the public's health.
What is the risk of an untested chemical? Let me give you two stereotypical scenarios. Assume, first, that I am a captain of industry, and that I want to use an untested chemical. Since there is no evidence that the chemical is dangerous, the risk is clearly zero, and I can use it.
Assume next that I am Ralph Nader. The chemical is untested. If I am careless, I will say that the risk is infinite. Of course, I can only die once, so the risk is really finite, and cannot be greater than unity.
Between these two extremes, risk assessors have much leeway to find a number for risk. Risk assessors can use whatever evidence is available to derive this number. This point is tremendously important to risk assessment, but is usually ignored. Somehow, the courts must force risk assessors to acknowledge it.
Several times during this conference, panelists raised the question of whether risk assessment and risk comparison inhibit the development of new technologies. As presently practiced, they do, but this is because we do not demand that the uncertainties in risk assessment be addressed, nor do we demand that the risks of old technologies and new technologies be calculated on equivalent bases.
Dr. Silbergeld told us that there is a double standard when it comes to new and old risks, and I believe this statement is partially true. To some extent, this double standard is merely the public's expression of uncertainty toward the unknown. This issue needs to be addressed directly. Public education and debate would begin to close the gap and also enable us to determine how much lower we should reduce new risks compared to old risks. Of course, encouraging the use of new chemicals may lead to new products which can improve the public's health.
Contrary to Professor Stewart, however, I do not believe that there has been any great effort to confront uncertainty. In fact, when the uncertainty is too great, scientists have a tendency to ignore it. Dr. Anderson ought to address uncertainty better in the EPA Carcinogen Assessment Group; at the moment, when a chemical is inadequately tested and declared to be noncarcinogenic, the risk is implicitly set at zero.
There still exists a set of chemicals said to be noncarcinogenic. That means that these chemicals have not induced statistically significant numbers of tumors in animal tests. Yet the activity (carcinogenic potency) of these chemicals ranges over eight orders of magnitude, with dioxin and aflatoxin at one end and trichlorethane and saccharin at the other. It is likely that some tests on "non-carcinogens" simply weren't sensitive enough. Also, just below saccharin, there may well be a lot of chemicals that are carcinogenic but which have not been shown to be so by animal tests.
This point is important because it raises the question, "how safe is safe enough?" What is a safe minimum risk? Lawyers know, as well as I do, about de minimis matters. I have long argued with EPA, because I believe the agency sets the de minimis risk too low. This leads to too many chemicals being listed on the agenda for action. The agency probably never gets through the first hundred. Perhaps one should not even discuss the de minimis risk; we should order chemicals according to the calculated risk and work our way down the list. Unfortunately, some people want to start at the bottom and not at the top.
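The ordering rule I am proposing is simple enough to state in a few lines of code. The chemical names and risk numbers below are invented placeholders, not real assessments:

```python
# Sketch: rank chemicals by calculated risk and work down from the top
# of the list. All names and numbers here are hypothetical.
calculated_risk = {
    "chemical A": 3e-4,   # invented estimated annual risk
    "chemical B": 1e-6,
    "chemical C": 5e-5,
    "chemical D": 2e-8,
}

agenda = sorted(calculated_risk, key=calculated_risk.get, reverse=True)
print(agenda)  # highest calculated risk first
```

Working the list from the top maximizes the risk reduction achieved per chemical addressed; starting at the bottom, as some prefer, spends the same regulatory effort where it buys the least.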
Harold Green reminded us that, in fact, Congress has indirectly set different levels of acceptable, or de minimis, risk for different substances. He pointed out, for instance, that Congress is strict with chemical food additives, and said that the standard there is zero risk. As an aside, I note that even if the exposure is undetectable, there could be some low exposure and the risk could still be finite. So, Professor, I correct you; it is not a zero risk.
Just what does "acceptable risk level" mean? When one starts calculating risk, the public seems always to demand zero risk. To some extent this demand is provoked by certain scientists, whom I call "political scientists," because they are more interested in politics than in science.
We all agree with Professor Green that agencies must be responsive to Congress which is in turn highly responsive to public perception. But when actions required by an agency seem out of line, a general duty exists to inform the public about how the actions differ from ordinary actions. Presently, neither Congress nor the agencies are living up to this duty.
For example, in banning EDB because of its carcinogenicity, and in taking products with 20 parts-per-billion of ethylene dibromide off the shelves, Administrator Ruckelshaus did not inform the public that the Food and Drug Administration (FDA) has allowed aflatoxin B-1, a thousand times more carcinogenic, at the same level — 20 parts-per-billion in [16 ELR 10228] peanut butter. Perhaps Betty Crocker cake mix should be taken off the shelves, but not because of ethylene dibromide.
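Under the usual linear assumption that risk scales as potency times exposure, the comparison is a one-line calculation. The potencies below are in arbitrary normalized units; only the thousand-fold potency ratio and the common 20 parts-per-billion level come from the discussion above:

```python
# Sketch: linear no-threshold comparison of two contaminants permitted
# at the same 20 parts-per-billion level. Potency units are arbitrary.

def relative_risk(potency, exposure_ppb):
    """Linear assumption: risk proportional to potency times dose."""
    return potency * exposure_ppb

edb_risk = relative_risk(potency=1.0, exposure_ppb=20)
aflatoxin_risk = relative_risk(potency=1000.0, exposure_ppb=20)

print(aflatoxin_risk / edb_risk)  # same level, thousand-fold risk gap
```

The same exposure limit applied to substances of very different potency thus implies very different de minimis risk levels, which is exactly the inconsistency the public was never told about.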
As another example, in the water quality criteria, EPA allows chloroform and bromoform at 100 parts-per-billion. Trichlorethylene is certainly 50 times less carcinogenic; nonetheless, without explanation, the agency set the level for this chemical at less than 10 parts-per-billion.
This suggests that EPA is setting too low a de minimis level of risk for some chemicals (trichlorethylene, for example). Worse still, the agency sets the same level for chloroform as for bromoform. Let me tell you that every time you exchange bromine for chlorine, the chemical is more toxic. I wager that bromoform will turn out to be one hundred times more carcinogenic than chloroform, yet EPA sets the same level for both, and I am not sure why. Perhaps it is because of a lack of data, and perhaps it is because of bromoform contaminant in water supplies from deep wells. Those of you from Texas will find it in the drinking water in Fort Worth.
And Peter Hutt reminded us that the risk of sassafras tea and of aflatoxins in peanuts, milk, and mustard exceeds many times the risks of all artificial sweeteners and pesticides combined. While FDA should certainly conform to congressional requests to regulate the artificial substances — sweeteners and pesticides — the agency should also launch a campaign to inform the public about the risk of these highly toxic natural substances.
One of the problems we find is that not everybody recognizes or understands good risk assessments. Doctors, for example, do not like to speak in quantitative terms. Early in the space program, doctors were asked about the probability that an astronaut might have a heart attack in flight. The doctors told the Science Advisor that there was no possibility at all of this happening. The doctors said that they had performed all known tests and that the astronauts were perfectly healthy.
Any risk assessor knows that this is a lot of nonsense. There is always a slight chance of a heart attack. On historical data, risk assessors estimated that the risk of heart attack was actually quite large. Furthermore, risk assessors calculated the engineering factors to significant figures, and the risk of failure of a rocket emerged. The Science Advisor, Professor Donald Hornig, next had the problem of explaining this risk to President Kennedy, who did not understand much about science, but who was a sensible man. At last, Hornig hit upon the answer: he explained that the risk was no worse than that of sending a test pilot up in a new airplane. This, President Kennedy understood. The risk to a test pilot is about a two or three percent chance of death. It is a large risk, but it is an understandable one. The risk assessor must make this kind of intelligible comparison, and the risk manager must force him to do it.
The risk manager must try to have the risk assessor provide the information that he needs. At every stage in the gathering of scientific data, the assessment and management of risk, there must be constant feedback. The risk manager must also consider the parallel channel of economic assessment. At some stage, one may have to perform a cost-benefit analysis and reach a decision based on the result.
The experts in decision theory at the Harvard Business School remind us that the concepts of "good decisions" and "good outcomes" are crucial factors in decisionmaking. We must believe that the person making the decision is making it for the benefit of humanity. Of course, decisions are usually made by ordinary individuals, all the way down the line. When decisions are made by EPA, they are not necessarily made through a general balancing of risks and benefits, or by a consideration of what is in the best interest of the community in general or any individual in particular.
Richard Stewart pointed out that one of the important results emerging from the recent toxic tort cases is consideration of the question of compensation in our present tort law. First, if one sues a company and there is a general public interest involved, a decision against the company will require the company to pay out a lot of money. That company and other companies will have an incentive to do the right thing in the future; in particular, to have considerable concern about public health problems. Second, we believe that when somebody has suffered injury, he should be compensated. It is clear to most of us, I think, that society has not decided how to accommodate both requirements simultaneously. The law and the courts, in particular, are ahead of the rest of society in trying to cope with the problem, though, as pointed out by several speakers in this meeting, courts are not entirely happy with this situation.
For example, when a person dies of cancer, it is not always clear what caused the disease. The background may be the dominant contributor. What does society wish to do in this case? Should we always compensate the victim? What if somebody dies of leukemia at age 45, and one does not know whether to blame the nearby chemical company or the victim's diet or the radiation coming from outer space? Does it matter to the victim what caused the cancer? One wants somehow to compensate the victim's family, no matter what caused the cancer. Should one provide compensation only when the polluter is found?
I am well aware that communist countries have addressed this question. Perhaps we should create a national health insurance system and compensate everybody who is worthy. This approach, of course, does not address the problem of incentives, which the communist countries have not yet figured out how to solve. Moreover, we all know that the communist countries have a worse safety record than the United States. This is partially because there is no one to ride herd on the nationalized industry, which is always supposed to be working in the public interest.
Ellen Silbergeld pointed out that the problem of apportioning risk is well known in the medical profession. We also know that in other areas, such as automobile safety, we must deal with multiple causative factors. If one examines the problem carefully, the most probable factors contributing to auto accidents are human error and bad weather. Each contributing factor, however, is important, and one can either try to eliminate each factor or select the largest factor and try to eliminate that.
In some cases, there is synergism between pollutants, as for example between asbestos and cigarette smoking. The nature of this synergism is fairly definite. Those who worked in the shipyards during World War II and who also smoked have a greater than 50 percent chance of dying of lung cancer. If the legal test is over a 50 percent probability, society will probably act on this situation. Nonetheless, if one tries to apportion the risk between asbestos and cigarette smoking, one finds that it is about one part to asbestos and four to cigarettes. Yet why is Johns-Manville bankrupt, and not Philip Morris? At this point, the issue of the public's right-to-know and right to control their lives enters the legal picture. People claim to know what they are doing when they smoke cigarettes, but do not consent to be exposed to asbestos.
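The synergism can be made concrete with round, hypothetical numbers: assume a baseline lifetime lung-cancer risk of about 1.2 percent, a tenfold relative risk from smoking alone, and a fivefold relative risk from asbestos alone, combining multiplicatively (all three figures are assumptions for illustration, not measured values):

```python
# Sketch (hypothetical round numbers): multiplicative synergism between
# asbestos exposure and smoking, as in the shipyard-worker example.

baseline = 0.012        # assumed lifetime lung-cancer risk, neither exposure
rr_smoking = 10.0       # assumed relative risk from smoking alone
rr_asbestos = 5.0       # assumed relative risk from asbestos alone

# Under a multiplicative (synergistic) model the relative risks combine
# by multiplication rather than addition:
combined = baseline * rr_smoking * rr_asbestos        # 0.6, i.e. 60%

# For comparison, a purely additive model with no synergism:
additive = baseline * (rr_smoking + rr_asbestos - 1)  # 0.168, i.e. 16.8%

assert combined > 0.5       # exceeds the 50 percent legal threshold
assert combined > additive  # far more than the no-synergism prediction
```

The multiplicative structure is also why apportioning the excess between the two causes is so awkward: neither factor alone predicts anything close to the combined risk.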
The nuclear power industry may be off the hook on the question of failing to inform the public about the dangers of [16 ELR 10229] nuclear power plants. Few Americans can claim seriously to be unaware of the dangers of nuclear power and radiation. The medical industry, however, is not off the hook. How many doctors inform us when giving an X-ray that we are facing a risk of cancer? How many tell us what dose we are receiving? How many of us even know what dose we receive when we walk past an X-ray room in a major hospital? In Massachusetts there were many complaints about the amount of radioactive iodine being released from the Pilgrim nuclear power plant. This amount, however, was less than was released every year from the urine of patients at the Massachusetts General Hospital who were being tested for a variety of ailments. How many people were informed about that? Nobody. The medical community is vulnerable on the issue of its failure to inform the public about the dangers of medically related exposure to radiation.
In Massachusetts, there used to be a law requiring teachers to have an annual chest X-ray to ensure that they were not exposing the students to tuberculosis. In 1972, the Attorney General intervened in the licensing hearing for a new nuclear power plant on the basis that the calculated dose of 10 millirems-per-year was too large for Massachusetts residents who lived five miles away.
I decided to refuse my chest X-ray, since the Attorney General thought that 10 millirems were too much and the X-ray would expose me to 1,000 millirems.
My colleagues in the Harvard Public Health Service, however, were more intelligent than I had expected. They had reduced the X-ray dosage to seven millirems. My attempt to spark a confrontation thus failed, but I am sure that there will be other opportunities.
The question is, how do we handle all of these questions of risk assessment and risk management? We do have to remind ourselves why people object to risk assessment: often they do not understand it. There really is no alternative, however, but to think through the procedure as thoroughly as possible. Of course, there may be those, presently in power, who believe that a knowledge of what is going on will weaken their position and who will prefer the status quo.
Of course, there are also those — scientists in particular — who insist on giving testimony to advise that this or that is dangerous, sometimes exaggerating the situation. For example, there are scientists, with Ph.D. degrees from distinguished universities and faculty members of distinguished universities, who make strong statements and write reports for public hearings and court cases which are not accepted by their peers. Scientists can recognize bad work in their own fields and repudiate it. They have no procedure for doing so in the public arena. I think there should be one.
About a year ago, one judge did repudiate one such scientist. He said that the scientist was wasting the court's time. I presume that this was true, and if so, it was a useful thing to say. The courts are open to the public and serve an educational role in our society. On the whole, I am happy with how the legal system operates. What is said in court may reach a wide audience. I believe, therefore, that university professors should regard going to court as an extension of their university role.
I work basically in particle physics. My experiments do not have much to do with questions of health, but I became involved with these issues because, when working with a cyclotron, I do not want my staff exposed to radiation without being able to explain the risks of exposure to them. So I am now in the health business, and I call myself a risk assessor.
I value your invitation here to talk with lawyers and legislators because only through working together can we improve public health. Individuals from all professions must understand the risk assessment and management process in order to understand their own particular roles in relation to the other players.
The risk manager must know what sort of information he will need and what information the risk assessor can provide. The risk assessor must have the imagination to think of new risks that have not been suggested before; so must the risk manager. If the risk assessor does not raise a particular subject, the risk manager must do so.
There are some very poor risk assessments being performed today. Let me leave you with a final example, a recent assessment of a liquefied natural gas facility being proposed in the Pacific. The assessment was about 479 pages long and calculated the risks arising from accidents at tiny levels, down to 10^-32 per year.
What was left out of this particular assessment was buried in a sentence somewhere in the middle. The authors stated that they had not included the possibility of sabotage in their calculations. If one considers this for a moment, the possibility of sabotage in a liquefied natural gas facility is enormous. Unlike nuclear fuel, liquefied natural gas is almost pure and is highly combustible. If mixed with air, the mixture can cause an enormous explosion. The possibility that someone would deliberately set off an explosion is very real, yet it was not considered in the 479 page document.
Thank you again for your invitation to discuss these problems with you.