In March 2000, I joined an environmental justice field trip that met with women of Washington State’s Shoalwater Bay Indian Tribe. One of the poorest tribes in the West, the Shoalwater people were losing their tiny reservation to erosion and legal battles, and they were losing their future to a mysterious run of miscarriages. One woman after another described losing her fetus. They spoke to us of their grief, anger, sense of confusion, and fear that something in the water they were drinking or the fish they were eating was killing their babies.
The U.S. Centers for Disease Control and Prevention told tribal members that the miscarriages could simply be random events, or possibly the result of genetic flaws. Or they could stem from diet, poverty, alcohol, or drug abuse, all of which can contribute to miscarriages. Few tribal members were reassured; the women on the reservation had taken meticulous care of their health during their pregnancies, yet they still had high rates of pregnancy loss.
Many people in the tribe feared that synthetic chemicals could be the culprits. Farmers spread pesticides on cranberry bogs near the reservation, foresters sprayed herbicides on surrounding forests, and oystermen used chemicals in Willapa Bay to control parasites that threatened the oyster industry. Yet because fetal development is so complex and synthetic chemical exposures are so difficult to monitor, no one could ever determine exactly what was harming the developing fetuses.1
The chemicals that the Shoalwater women were exposed to shared one common factor: they were all endocrine disruptors, chemicals that mimic, magnify, or block the hormones shaping fetal development. Endocrine disruptors are not rare. Since World War II, hormone-disrupting chemicals in plastics, pharmaceuticals, and pesticides have permeated bodies and ecosystems throughout the United States, often with profound health and ecological effects. Yet we have largely failed to control or even monitor them. Why have federal regulatory agencies done so little to protect public and environmental health?
Most chemical regulation in the United States is guided by the Toxic Substances Control Act of 1976 (TSCA), which was intended to give the Environmental Protection Agency (EPA) the power to remove dangerous chemicals from the market. In the 34 years since TSCA was enacted, however, only five chemicals have been restricted through this act—largely because TSCA places the burden on the federal government to prove actual harm. Moreover, the EPA must also show that its regulatory action is the least burdensome of all possible actions. TSCA requirements have proven so unrealistic that the EPA cannot even restrict asbestos, a known human carcinogen.
On July 22, 2010, a bill was introduced in Congress that, if passed, would institute long-overdue reforms to TSCA. The Toxic Chemicals Safety Act (H.R. 5820) takes a precautionary approach, shifting the burden of proof to industry. Rather than requiring the government or the consumer to prove harm after a chemical is already on the market, industry would have to act with precaution. This means that if the chemical might cause severe or irreversible harm to complex systems, the burden of proof would be on the industry to show that it is safe, rather than on affected communities to show that it is harmful.2
Industry-supported voices protest any application of precaution to synthetic chemicals, pointing out that no one can prove that any given case of cancer or reproductive harm has been caused by an environmental exposure. Too much scientific uncertainty remains to act, they say. Precaution is unfeasible, for it would stifle innovation and destroy profits.3
Some of these claims are certainly true: we do lack the experimental evidence that would prove pollutants cause cancer in humans. Scientific uncertainty does remain; no consensus exists about the risks of pollutants. But does this mean chemicals are safe? Not at all. We lack experimental proof of human harm because those experiments are illegal. In 1947, during the Doctors’ Trial in Nuremberg, Germany, horrifying evidence emerged of Nazi medical experimentation that subjected prisoners to chemical poisons such as mustard gas. The resultant Nuremberg Code forbids research that might lead to unnecessary pain, suffering, death, or disability. No researcher can design an experiment that subjects a person to a suspected carcinogen to test whether that chemical induces cancer. By definition, then, we will never have firm proof of the links between cancer and pollutants. Scientific uncertainty will always remain. Yet we cannot refuse to take action. Exposing historical problems is a vital part of a solution.
Will precaution be as destructive to innovation and profit as opponents claim? Historical evidence suggests the opposite. Precaution is not a new idea; it has been at the heart of pharmaceutical regulation since the 1938 Federal Food, Drug, and Cosmetic Act. After a tainted drug named Elixir of Sulfanilamide killed over a hundred people in 1937, the Food and Drug Administration (FDA) required that companies submit evidence of safety before a new drug would be allowed on the market. The burden of proof shifted from the consumer to the company.
Precaution has hardly destroyed the drug industry. Pharmaceuticals have thrived since the 1938 Act—not in spite of precautionary regulation, but because of it. When doctors and patients began to trust that the risks of new drugs would not be hidden, drug sales rose. Currently, pharmaceuticals are among the most profitable industries in the world, refuting the claim that precaution is a death knell.
Drug regulation has not been perfect, of course. All too often, regulators have responded to political pressure by retreating from precaution, putting consumers at heightened risk of harm from chemical exposures. Diethylstilbestrol (DES), the first synthetic estrogen to be marketed to women, provides a tragic example. Beginning in the 1940s, millions of women were prescribed DES by their doctors, initially to treat the symptoms of menopause. In 1947 the FDA approved DES for pregnant women with diabetes, and drug companies advertised it widely, promoting the use of DES in all pregnancies as a way to reduce the risk of miscarriage. Although no evidence ever supported this claim, at least five million pregnant women were prescribed the drug in the United States alone.
Meanwhile, Americans were also being exposed to DES through their diet. Beginning in 1947, DES was approved in the United States as a steroid to promote growth, first in poultry and then in cattle. High levels of DES were soon detected in poultry sold for human consumption—up to a hundred times the concentration necessary to cause breast cancer in mice. Concern over DES effects soon grew among women who used the drug, farmers who handled treated livestock, and workers who manufactured the material. Federal agencies initially dismissed these concerns as unfounded, but eventually, after exposed male agricultural workers suffered sterility, impotence, and breast growth, the FDA banned the use of DES implants in chickens in 1959. Apparently, however, women, human infants, and cattle did not deserve the same protection. At the peak of its use in the 1960s, DES was given to nearly 95 percent of feedlot cattle in the United States, which meant that millions of people consumed meat tainted with the artificial estrogen, and the estrogenic wastes from feedlots made their way into aquatic ecosystems, with unknown effects. Millions of sons and daughters were exposed to DES prenatally, leading to cancer, infertility, and reproductive tract problems. The problems persist even into the third generation, and while DES is no longer used during pregnancy or menopause, we have failed to apply its lessons to other chemicals.
Even before the FDA approved DES in 1941, researchers knew that it caused cancer and problems with sexual development in laboratory animals. These concerns initially led FDA commissioner Walter Campbell to reject the drug in 1940, arguing that regulators must follow what he called the “conservative principle,” essentially adopting the precautionary principle sixty years before that term came into common usage. Yet a year later, pressured by industry, the FDA abandoned its position of precaution. When companies applied for approval to use DES in livestock and to prescribe it to pregnant women, the same pattern unfolded twice more. Each time, the agency refused approval, citing the need for precaution given the known risks of the drug. But each time, it quickly backed down and allowed broad communities to be exposed to a chemical with known toxic effects.4
Regulators were unable or unwilling to resist industry pressure for several reasons. First, they shared cultural and conceptual beliefs that industry lobbyists were quick to exploit. Contemporary scientific models of toxicology and development generally did not allow for the possibility that very low levels of synthetic chemicals could influence hormonal actions in the body. Even when experimental evidence from laboratory animals seemed to provide compelling proof of harm, uncertainty about the validity of animal studies in assessing risks for people made it difficult for regulators to defend principles of precaution in court. Sexist assumptions about women also shaped the ways that regulators and drug companies interpreted risk. For example, when pregnant women were treated with DES in early clinical trials and then gave birth to premature babies, the chemical wasn’t blamed. Instead, the premature births were attributed to women’s inappropriate behavior, including “excessive shopping.”4
Today’s assurances of chemical safety are based upon the same faulty assumptions that kept DES on the market. Risk assessments assume that chemicals are more toxic at high doses than at low ones, and that toxicity is immediately apparent. Yet endocrine disruptors can cause harm at very low doses, especially to developing fetuses and infants, and the effects of exposure may not be apparent until maturity.
Particular concern has focused on chemicals, such as bisphenol A (BPA), that are leached from certain plastics. BPA, like DES, is an estrogenic chemical. In fact, before DES was synthesized, drug companies were investigating the possibility of marketing BPA as the first synthetic estrogen for treatment of menopausal women. After DES was found to be more estrogenic and less expensive, interest in BPA as an estrogen dwindled—until scientists and environmental groups in the 1990s became concerned about its widespread movement into the environment from plastics.
The American Chemistry Council, an industry trade association, reassures consumers that bisphenol A makes our lives “healthier and safer, each and every day.” Yet hundreds of research studies show that DES and BPA have similar estrogenic effects on developing fetuses, linking BPA exposure with infertility, harm to children’s reproductive health, obesity, breast and prostate cancer, and neurological disorders.4,5 Historically, the cases of DES and bisphenol A have important parallels. DES was worth millions of dollars to the pharmaceutical industry and, when experiments that showed harm to laboratory animals threatened profits, the industry hired consultants to testify that laboratory studies had no significance for women. When research then showed that women were dying from DES, the industry argued in congressional hearings that those cases should not restrict DES use in livestock. When government agencies finally stepped in and began attempting to regulate DES, the industry took the agencies to court, buying a few more years of profits by manipulating scientific uncertainty. The same patterns are now unfolding for bisphenol A.
The history of DES offers a powerful argument for why BPA should be restricted, even though uncertainty remains about its effects on humans. Three times regulators reached the limits of their knowledge about the effects of exposure to DES and, each time, in the face of scientific uncertainty, they decided to move ahead and allow people to be exposed. Each time, they vowed to use that new exposure as an experiment that would be monitored, so that policymakers could learn from the experiment. The assumption was that any problems that emerged with the chemicals would become clear and, if no major problems emerged, the exposure must have been safe. Yet no such monitoring was ever done, and without it that fundamental assumption collapses. Time and again the federal agencies failed to learn from their own histories—sometimes because they lacked the funding and political power to insist on monitoring, and sometimes because they refused to pay attention to results.
For four decades, American industry has fought efforts to extend precaution from medical to environmental policy, claiming that such an approach could stall innovation. Historians Gerald Markowitz and David Rosner show that industry’s central strategy has been to accentuate elements of complexity and uncertainty and then to argue that “economic interests should not be challenged until science has proven danger. Precaution is equated with economic and social stagnancy. . . . Progress, as defined by the industrial community, trumps precaution.”3 By the mid-1970s, as consumer concern over environmental pollution placed increasing pressure on industry, industry responded with a “frontal assault on the public health ideals of prevention,” hiring product-defense firms, public relations agencies, and scientists who “systematically attacked environmentalists and labor activists as luddites determined to stifle our economy.”3-7 Industry advocates, in other words, have rewritten history in the public eye, portraying precaution as a novel and reckless idea rather than a long-held principle at the heart of public health, and recasting the demand for indisputable proof of harm as the true historical precedent.
Historians must provide counternarratives that push back against these manipulations of the past. As the historian Peter Perdue writes, “Governing elites resist looking too closely into historical roots of current crises; they suppress evidence and manipulate historical narratives to legitimize themselves. The fact that financial crises, or environmental crises, have reoccurred repeatedly even in recent memory doesn’t guarantee that anyone will really want to address the fundamental causes. Historians have to recognize, and tell their readers, that impulses to denial, willful blindness, and ideological distortion are just as powerful as rational analysis in causing social change.”8
What are the most important lessons to learn from DES to guide us in a rational approach to reforming the Toxic Substances Control Act? A new approach to scientific uncertainty must be at the core of reform. Because TSCA currently places the burden of proof on regulators to provide evidence of harm, industry uses uncertainty as a tool to delay regulation, often for decades. A reformed TSCA should instead use uncertainty as a trigger for precaution, placing the burden of proof on the industry to show safety, not on the government to show harm. Reform must also include safer alternatives assessment. This type of assessment stipulates that, before a chemical can be released for a particular use, regulators must compare a wide range of existing alternatives and choose the least toxic one available for the given purpose. Rather than precluding all action until everything is known about a chemical, alternatives assessment allows for limited use of known toxic chemicals when no less-toxic alternatives exist.9-11
The Toxic Chemicals Safety Act of 2010 deserves support, for it places the burden of proof on the industry and allows for safer alternatives assessment. Ideally, the final version will empower the EPA to authorize alternatives assessment; the act as currently drafted only allows for alternatives assessment if requested by industry. Yet, while the act is not perfect, passage of the Toxic Chemicals Safety Act would demonstrate that one of the key lessons of the DES tragedy has finally been learned: the need for precaution in the face of scientific uncertainty.