From The Editor | January 27, 2015

Will Genomics Complicate The Reanalysis Of Randomized Clinical Trial Data?

By Ed Miseta, Chief Editor, Clinical Leader


Dr. Thomas Nifong is the EVP of diagnostic tests at Definiens and has more than 15 years of experience in the clinical arena. I recently interviewed him regarding a JAMA (Journal of the American Medical Association) study released last year that looked at the reanalysis of randomized clinical trial (RCT) data. A surprising thirteen of those reanalyses (35 percent of the 37 eligible studies identified) led to interpretations that differed from those of the original article regarding the types and number of patients who should be treated.

Nifong had some interesting comments regarding the study, which you can see here. In this Q&A article, Nifong discusses genomics and diagnostic testing, and how the advent of personalized medicine could complicate the reanalysis process in the future.

Miseta: Going forward, will genomics make the reanalysis of randomized clinical trial data more complex?

Dr. Nifong: I believe so, and a big part of that will be the whole idea of personalized medicine. For example, look at oncology trials. The one-size-fits-all blockbuster drugs are becoming fewer and fewer. If a drug can work across multiple tumor types, that could certainly expand its market. But in nearly all cases moving forward, the drugs are designed to be highly targeted. In the preclinical phase there is a search for some type of differential biomarker that carries into Phase 1. This is where we are generating not only genomic data but also what we call phenomic data. You are looking at mutations at the genomic level (genes and RNA expression levels), but you are also looking at tissue-based markers, typically protein markers. In some instances you can have a genomic change in the tumor but not necessarily a phenomic change. You can also have a phenomic change in the tumor microenvironment without detecting a genomic mutation in the tumor.
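To make that distinction concrete, here is a minimal sketch of how the two data layers Dr. Nifong describes might sit side by side in a single patient record. Every field name below is hypothetical, not a real schema from Definiens or any trial.

```python
# A minimal sketch of one patient record holding both data layers.
# All field names are hypothetical, not a real schema.
from dataclasses import dataclass

@dataclass
class TumorProfile:
    patient_id: str
    # Genomic layer: mutation status and RNA expression.
    mutation_detected: bool
    rna_expression_log2: float
    # Phenomic layer: tissue-based (typically protein) marker.
    protein_marker_score: float

# A genomic change without a corresponding phenomic change:
example = TumorProfile("P01", mutation_detected=True,
                       rna_expression_log2=7.3, protein_marker_score=0.0)
print(example)
```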

As you move toward early Phase 2, you are looking at that data and those biomarkers, trying to determine the differences between patients who respond to the treatment and those who do not. You look to cement that in early Phase 2 and ultimately end up with a biomarker, or a set of biomarkers, that gives you a stratification. This requires complex statistical modeling. It's important not only to stratify, so that early on you can predict which patients are most likely to respond to the drug, but also to identify additional biomarkers where, instead of looking for a clinical response, you are looking for some sort of surrogate, or biomarker, response, so you can move the drug into an accelerated pathway.
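As a rough illustration of the stratification step Dr. Nifong describes, here is a minimal sketch that fits a logistic regression on hypothetical genomic and phenomic features to flag likely responders. Real trial modeling is far more involved; every name, number, and label below is invented for illustration.

```python
# A minimal sketch of biomarker-based patient stratification (toy data only):
# fit a logistic regression on hypothetical genomic and phenomic features to
# estimate each patient's probability of responding to the drug.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical early-phase data: one row per patient.
X = np.column_stack([
    rng.integers(0, 2, 200),    # genomic: driver mutation status (0/1)
    rng.normal(5.0, 1.5, 200),  # genomic: RNA expression (log2)
    rng.normal(0.0, 1.0, 200),  # phenomic: tissue protein marker score
])
y = rng.integers(0, 2, 200)     # observed clinical response (toy labels)

model = LogisticRegression().fit(X, y)

# Stratify: flag patients whose predicted response probability clears a cutoff.
prob = model.predict_proba(X)[:, 1]
likely_responders = prob >= 0.5
print(f"{likely_responders.sum()} of {len(y)} patients flagged as likely responders")
```

In practice, both the model and the cutoff would have to be validated prospectively before being locked into the companion diagnostic discussed below.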

Miseta: But then the lab data needs to be considered as well?

Dr. Nifong: Ultimately, at the time of filing for approval of the drug, whatever technology you used to stratify the patients has to be converted into a companion diagnostic. That is hugely dependent on the biomarker data being generated. It will obviously include the data generated in the clinic, which comes later on, but also a huge amount of complex analytical laboratory data that has to be rolled in as well.

Miseta: It seems like the diagnostic companies will play a large role in this.

Dr. Nifong: Yes, and that also makes the process a little more complex. The diagnostics companies and the pharma companies are going to have to enter into more partnerships. The diagnostic companies certainly have the skill sets to develop and deliver the laboratory tests. But now they also have to tie that in with clinical data integrity and a more robust level of analytics for the package to ultimately be submitted to the FDA for approval, which is the overall goal for the pharma companies.

Miseta: Will reanalysis become more complicated going forward?

Dr. Nifong: The trials that were reanalyzed in the JAMA article looked to be relatively uncomplicated. Reanalysis in the future will be more difficult simply because we have so much more data: genomic and phenomic data coming from different instruments, different methods of collecting and interpreting that data, and different ways of storing it. All of that would have to be standardized in some way to better facilitate reanalysis. If you conduct a trial and bring in data from multiple sources, you need to make the data internally consistent so that all of it can be correlated with the outcome. That is certainly an additional step beyond simply collecting the raw data. And realize we are going to have raw data and images, and then the clinical data is added and correlations are drawn. Tools are going to have to be made available to put all of that data together, which adds another layer of complexity over what was in the JAMA study.
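To illustrate that consistency step, here is a minimal sketch, assuming three hypothetical sources with mismatched schemas, that harmonizes genomic, phenomic, and clinical records onto one patient-keyed table so each biomarker can be correlated with the outcome. It is not any company's actual tooling, and all names and values are invented.

```python
# A minimal sketch of harmonizing hypothetical multi-source trial data onto
# one consistent, patient-keyed table (toy data, invented schemas).
import pandas as pd

genomic = pd.DataFrame({
    "patient_id": ["P01", "P02", "P03"],
    "mutation_detected": [1, 0, 1],
    "rna_expression_log2": [6.1, 4.8, 7.3],
})
phenomic = pd.DataFrame({
    "Patient ID": ["P01", "P02", "P03"],  # different instrument, different schema
    "protein_marker_score": [1.4, -0.2, 0.9],
})
clinical = pd.DataFrame({
    "patient_id": ["P01", "P02", "P03"],
    "responded": [1, 0, 1],
})

# Standardize the inconsistent key name before joining.
phenomic = phenomic.rename(columns={"Patient ID": "patient_id"})

# One consistent table: each biomarker lined up against the clinical outcome.
merged = genomic.merge(phenomic, on="patient_id").merge(clinical, on="patient_id")
print(merged.corr(numeric_only=True)["responded"])
```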