From The Editor | September 1, 2015

Is ClinicalTrials.gov Impacting Study Results?


By Ed Miseta, Chief Editor, Clinical Leader


Could forcing companies to report their trial methods and outcome measures prior to conducting a study actually impact trial results?

When I was majoring in economics, one professor had a line he repeated on numerous occasions: “If you squeeze data hard enough, you can get it to say anything you want.” I was never sure of its origin, but in researching this article, I came across a quite similar quote from economist Hal Varian: “If you torture the data long enough, it will confess to anything.” Of course, if you are able to restate your intended outcomes at the end of your research, it becomes even easier to get the data to say what you want.

For example, let’s say a company decides to conduct a trial of a drug’s effectiveness in preventing myocardial infarction in a sample of the patient population. If the only observable effect was a reduction in blood pressure, it’s easy enough to say that was your goal all along and proclaim the study a success. A slight change in the outcome or cutoff, or perhaps a look at a sub-sample of the population, can certainly create the appearance of success, which most scientists would want as the result of their hard work. But some might argue that such tactics walk a fine line between research and malfeasance.

The research methodology should be pretty simple: create a drug, select your patients, collect the relevant data, and see if the results confirm your intended outcome. Of course, not everyone performs research the same way. Some researchers will eventually figure out that it can be easier to state your intended outcome after the study. It might be backwards and unethical, but it might enable you to convince at least a few people that you know what you’re doing. As long as you didn’t have to report your relevant outcomes upfront, you could probably get away with it. And if it means getting your results published in a journal, you could argue that the ends justify the means.

I thought of this recently when I read an article by Chris Woolston on Nature.com about ClinicalTrials.gov and its impact on trial findings. Prior to 2000, companies would perform clinical trials and report the results at the conclusion. After 2000, companies were required to record their trial methods and outcome measures before collecting the data. That shouldn’t be a problem, right?

Positive Findings Diminish

Surprisingly, according to a study published in PLoS ONE, the launch of the government website does seem to have had an impact on trial results. In a sample of 55 trials testing treatments for heart disease, 57 percent of the studies conducted prior to 2000 reported positive results from the treatments. Among studies conducted after 2000, the share reporting a positive result fell to only 8 percent. The difference is shocking, and it naturally led many to question the possible role of the website.

Veronica Irvin, a scientist at Oregon State University and author of the study, believes the registering of trials is leading to more rigorous research. But the results are also causing some to question the validity of positive results gathered prior to 2000.

The study focused on human randomized controlled trials and showed the registration of trials to be the dominant driver of the change in study results. There was no evidence that the discrepancy could be explained by two other factors: shifting levels of industry sponsorship and changes in trial methodologies.

“Realize this does not preclude people from finding other results, or effects on sub-groups,” says Irvin. “It merely looks at the reporting of the original intent and seeing if the study was a success in that regard. ClinicalTrials.gov also makes it much clearer to read the journal articles as well as the findings, since you can see what the primary outcome was, whether the study found or did not find that outcome, and then the other outcomes looked at in the next step.”    

Is There Incentive To Get Positive Results?

Positive clinical trials are deemed a success and can lead to being featured in prominent industry journals. Trials that are not successful can teach us just as much, sometimes more, but journals generally do not want to publish poor outcomes. That type of reporting could also be seized upon by competitors and those in the investment community. But would anyone argue that the discovery of Viagra was a failure, even though its primary use today was not the original intended outcome?

Furthermore, does the study prove that prior to 2000 researchers were cherry-picking data once the studies were concluded, and that ClinicalTrials.gov forced them to be more rigorous with their methodologies?

Let’s Not Jump To Conclusions

There are other possible explanations for this discrepancy. Irvin notes that improving cardiovascular health care could be making it harder for new treatments to show improved results. It’s also possible that more stringent research methods adopted by companies are resulting in fewer trial successes. Or the change may be due to other factors that were not examined in this study.

Even if the findings are determined to be the result of ClinicalTrials.gov, are the findings good or bad for the industry? Irvin is not sure. “All of the trials we looked at were funded by NHLBI (National Heart, Lung, and Blood Institute), so the primary funder was the federal government,” she says. “That makes them a little different from industry-funded trials. The study did not look at any industry-funded trials, which do have to register the same way that federal trials do.”

Although Irvin is unsure of what the impact of this study will be on pharma companies, she notes it is good to know when drugs are not going to work, which makes things better for patients. But from the industry’s perspective, a lower probability of success might make companies a bit more stringent about what gets through to the later stages. According to Irvin, many pharma companies will only be willing to move forward with a large and costly trial if they are convinced it has the potential to make a big impact.

“One important distinction between NHLBI and industry trials is that NHLBI does not do trials unless they start with true equipoise,” notes Irvin. “In other words they are indifferent to whether the null hypothesis is supported or rejected.  In many cases, they launch a trial because they are suspicious that a commonly used treatment does not work.  In contrast, we presume that pharma supports trials when they suspect that the result will show the treatment in a favorable light.  It seems less likely that they would spend over a hundred million dollars on a trial that is likely to show that their product does not work as intended.”

Whatever the reason, it’s still an interesting finding. I would love to see the study repeated in areas other than cardiovascular disease, just to see if the finding holds up. Knowing the amount of research, planning, and preparation that goes into a clinical trial, some might find it hard to believe that the simple act of recording methods and outcome measures in advance would impact a trial’s success.

Do you have an opinion? Could the advent of ClinicalTrials.gov have an impact on trial results, and is there sufficient reason to question earlier positive findings? I’d love to hear your thoughts.