Wolf in sheep’s clothing: EPA’s TSCA systematic review method

How do you know whether you can trust a conclusion reached in a scientific review assessing the harms of an environmental exposure? In part one of this two-part series, we will explain how scientists use systematic review methods to evaluate an entire body of evidence and answer a specific research question, and look at why this process is crucial for informing policy- and decision-making to save lives and money. In part two, we will look at how systematic review has been ignored by the Environmental Protection Agency (EPA) under the Toxic Substances Control Act (TSCA), thus potentially jeopardizing public health.

In 2016, Congress passed the Chemical Safety for the 21st Century Act, amending TSCA. The amended law requires the EPA to make decisions about chemical risks based on the “best available science” and the “weight of the scientific evidence,” which EPA defined as:

“…a systematic review method, applied in a manner suited to the nature of the evidence or decision, that uses a pre-established protocol to comprehensively, objectively, transparently, and consistently identify and evaluate each stream of evidence, including strengths, limitations, and relevance of each study and to integrate evidence as necessary and appropriate…”

“Great news!” I hear you say, for those of you familiar with the systematic review process and how it has been used in clinical medicine to provide the best care for patients and inform billions of dollars in health-care spending. You are probably thinking that the public’s health will be protected now that the EPA is using these rigorous, transparent, and proven methods to assess the harms of toxic chemicals.

Unfortunately, this is not the case.

What is the history of systematic review?

Let’s start at the beginning. Systematic review methods originated in psychology more than 40 years ago and were soon adopted in clinical medicine in response to the need to apply scientific principles (rules) not only to individual, primary research studies, but also to the synthesis of multiple research studies (the process used to evaluate multiple studies together to answer a question). The systematic review process was designed specifically to guide research synthesis, reduce bias, and thus more accurately evaluate the effectiveness of clinical interventions in medicine, i.e., to help answer questions about whether one type of patient care is better than another. See Figure 1 below.

Figure 1. General Steps for a Systematic Review

The most famous example was a landmark 1992 study that demonstrated the superiority of systematic review methods for evaluating treatments for myocardial infarction (think heart attack). “Expert-based” recommendations published in scientific reviews and clinical textbooks were compared to statistical analyses of the combined results of more recently published trials. The study showed that some “expert-based” reviews recommended treatments proven to be ineffective or potentially harmful, and failed to mention effective new therapies identified by the combined results of the new trials. This demonstrated how the timely incorporation and synthesis of new experimental evidence is critical to informing patient care that treats the disease without causing harm.

As highlighted in this landmark study, before the systematic review method, research synthesis relied on “expert-based narrative review” methods, which did not follow pre-specified, consistently applied, and transparent rules. So it was very difficult to know if all of the available evidence had been identified and evaluated consistently to answer a specific review question. Also, the decisions that were being made in the review process were not documented, such as what studies to include or exclude from the review, how the risk of bias (or quality) of the individual studies was evaluated, and how the results of the individual studies were combined to come up with a conclusion to answer the review question.

Yes, I know, it sounds haphazard! These methods were not transparent or reproducible! But science evolves and so have these methods.

We now know empirically (that means it has been observed through scientific testing) that when we adhere to these principles and rules in each step of the systematic review process shown in Figure 1, we get less biased results, or results that are closer to the truth.

So what does that actually mean?

For example, as demonstrated in the landmark study, when “identifying evidence,” if the review authors fail to conduct a comprehensive search for eligible studies across multiple databases, or fail to read the reference lists of included studies for additional relevant studies, then studies may be missed that could inform the results and thus recommendations on the best form of treatment or action to take for a patient.

Being thorough is so important!

Also, the final selection of studies to include or exclude in a review is critical, and those decisions involve personal judgement. Evidence shows that using two review authors instead of one to select studies reduces the possibility that relevant studies are excluded. Further, by pre-establishing the eligibility criteria that review authors use to select studies and publishing them in advance in a protocol, the review authors are less likely to be biased toward selecting studies with results that reinforce their own viewpoint.

The Institute of Medicine (IOM) has even developed 21 standards covering the entire systematic review process that, if adhered to, result in a scientifically valid, transparent, and reproducible systematic review.

How did systematic review evolve in environmental health?

In contrast, even after the clinical sciences had demonstrated that these rigorous, transparent, and proven systematic review methods led to less biased and more reliable reviews of the evidence, environmental health continued to rely on “expert-based narrative review” methods.

This was a problem.

For example, when there are conflicting proclamations on the harms of environmental exposures, such as when one review says an exposure is “safe” and another “not safe,” the public and policy-makers are left confused. Robust and transparent methods to synthesize what is known about the environmental drivers of health are crucial to making science actionable and less biased, and to allowing the reasons for such conflicting conclusions to be readily identified. Environmental health research therefore needed to develop systematic review methods.

As you can see in Figure 2, here at the Program on Reproductive Health and the Environment (PRHE), almost a decade ago we were the first to develop a method for systematic review in environmental health, adopting and adapting these empirically proven methods for research synthesis from clinical medicine. These methods were compiled into the Navigation Guide, which has since been recommended by the National Academies of Sciences for chemical risk assessments, demonstrated in multiple case studies in the peer-reviewed literature, and used as an example for the National Toxicology Program Office of Health Assessment and Translation (NTP OHAT) method. It is currently being implemented by the World Health Organization and International Labour Organization (WHO/ILO) to assess the global burden of work-related injury and disease due to exposure to occupational risk factors. Most importantly, the Navigation Guide has been used to reach robust conclusions about the harms of environmental exposures to inform policy- and decision-making and to save lives and money.

Figure 2. This timeline shows PRHE’s development of a systematic review method for environmental health over the past decade.

“Wow. That’s amazing!” I hear you say. “A method so rigorous and widely agreed upon that it has been used by all of these groups and organizations across the world, including the World Health Organization. How fantastic that the U.S. EPA under TSCA has decided to use it.”

Well, in fact, they haven’t.

They did something that seems almost inconceivable. They created an entirely new method that had never before been seen or tested, and that did not follow most of the steps that minimize bias and yield more valid and reliable results. Why did they do it? That has been discussed widely, both here at PRHE and elsewhere, but how they did it is worth understanding.

In part two of “Wolf in sheep’s clothing: EPA’s TSCA systematic review method,” we will compare the Navigation Guide and NTP OHAT methods described above with the EPA’s systematic review framework under TSCA, and look at the implications for public health.