Guest Column | April 19, 2022

4 Pitfalls To Avoid With RWE For Regulatory Submissions


The FDA’s release of four draft guidances on real-world evidence (RWE) at the tail end of 2021 is a productive step toward implementing FDA’s RWE program. The draft guidances are expansive, providing recommendations to sponsors seeking to use real-world data (RWD) derived from medical claims data, electronic health records (EHRs), and registries to support regulatory submissions. However, despite the scope of these documents, a survey of comment letters submitted in response to the draft guidances reveals a widespread need, particularly among drug developers, to understand how the agency will implement these guidances in practice. In an attempt to solicit concrete examples, several sponsors requested that the agency develop an online dashboard publishing all of the agency’s decisions, both positive and negative, on regulatory submissions that incorporate RWE.

Despite the lack of a central catalog of all RWE-related regulatory decisions, there are a handful of product approvals and FDA reviews that highlight both best practices and common pitfalls sponsors face when incorporating RWE in regulatory submissions. These decisions show that the use of RWE to support regulatory decision-making is often challenged by both methodological and process issues. In terms of process, the primary mistake made by sponsors is failing to share a prespecified protocol and statistical analysis plan (SAP) with the agency prior to submission. From a methodological perspective, unsuccessful RWE submissions are often plagued by missing data, small patient populations, and a lack of objective endpoints. Approvals based on or supported by RWE, such as Ibrance (palbociclib) for male breast cancer and Prograf (tacrolimus) to prevent organ rejection post lung transplant, demonstrate how to successfully address the myriad challenges of working with RWD.

1. Failing To Share A Prespecified Protocol and SAP

As noted, the most common mistake sponsors make when seeking to leverage observational studies to support or demonstrate a product’s safety and efficacy is failing to share a prespecified protocol and SAP with the agency. The FDA emphasizes the importance of up-front transparency to guard against this risk, noting in its real-world evidence framework that multiple analyses in electronic data sets can be done quickly and inexpensively, “making it possible to conduct numerous retrospective studies until the desired result is obtained....”1 The four recent guidances also recognize this practice as a threat to study validity and recommend sponsors:

“Provide draft versions of their proposed protocol and statistical analysis plan (SAP) for Agency review and comment, prior to finalizing these documents and before conducting the study analyses.”2

2. Missing Data

One of the most common methodological deficiencies identified in FDA reviews of RWE studies is missing data. Unlike clinical trials, wherein patient visits and data collection are prespecified, “real-world” patient visits are often irregular and data collection uneven, leading to spotty and inconsistent EHR data. In turn, incomplete EHR data can make it difficult to define baseline characteristics for observational cohorts, complicating efforts to establish comparability with an active trial cohort. This problem has hobbled several attempts to use RWE studies to support oncology product approvals. In these cases, the sponsor was unable to provide complete documentation on essential baseline characteristics such as previous treatment regimens, tumor stage, and Eastern Cooperative Oncology Group (ECOG) scores. FDA reviewers noted that the failure to capture previous treatment regimens introduces confounding bias, while missing dates, such as the date of diagnosis or start of first treatment, can bias the study in favor of the investigational product.

In successful RWE applications, the majority of these important clinical details are well documented. For example, FDA notes in the supplemental review of palbociclib for male breast cancer that “In this study, real world tumor response data was generally available for several lines of therapy.”3 The reviewer also notes that real-world data sets were complete and included birth date, metastatic sites, and first metastatic treatment. Unstructured source data in the form of clinician assessments and radiology reports were also included in the submission.4

3. Small Patient Cohorts

Incomplete EHR data has significant downstream effects on a sponsor’s ability to use RWE for regulatory purposes. In an attempt to achieve comparability with active trial cohorts, sponsors often have to disqualify patients with incomplete data elements from analyses, further whittling down a data set that may already be small. For oncology and rare disease applications, small patient populations are already a common issue, which the use of RWE can compound. This has hindered several attempts to use RWE to support approval. Even in successful RWE applications, like palbociclib, the reviewer still called out the small sample size as a key deficiency.

To address the issue of small patient populations, FDA has in recent guidance identified both data linkage and the combining of data as possible solutions. However, the guidance recognizes that these methods also introduce a new set of methodological problems, notably data heterogeneity. In the draft guidance Real-World Data: Assessing Electronic Health Records and Medical Claims Data To Support Regulatory Decision-Making for Drug and Biological Products (September 2021), FDA states:

“For studies that require combining data from multiple data sources or study sites, FDA recommends demonstrating whether and how data from different sources can be obtained and integrated with acceptable quality, given the potential for heterogeneity in population characteristics, clinical practices, and coding across data sources.”5

In addition to undermining attempts to combine two different data sets, differences in population characteristics, clinical practice, and coding have derailed efforts to use observational cohorts to define standard clinical practice in the context of superiority trials. Again, many attempts to use RWE for oncology indications have failed due to differences in treatment locations and treatment classifications, which prevented sponsors from drawing direct comparisons between the investigational product and the standard of care.

4. Lack of Objective Endpoints

A third common roadblock to the acceptance of RWE studies for regulatory purposes is the difficulty of uniformly capturing clinical outcomes in EHR or claims data. In recent guidance, FDA emphasized the enormous impact that variability in physician practice, specifically in regard to diagnoses and coding, has on capturing clinical outcomes. To address this risk, FDA recommends sponsors use RWD sources to capture outcomes with more objective “well-defined diagnostic criteria,” such as death, stroke, or myocardial infarction.

Using RWE for oncology indications is particularly tricky, as many commonly used oncology endpoints are subjective and time-bound. Tumor response rates and progression-free survival (PFS), for example, are inherently subjective and rely on visual assessments of radiographic images, often generating wide variability. These outcomes are also difficult to validate, as most EHR records do not incorporate the radiographic image. Meanwhile, time to treatment discontinuation (TTD) and overall survival (OS) are particularly subject to confounding bias due to treatment beyond disease progression. As noted, FDA’s answer to these issues is to encourage sponsors to use RWD for conditions or diseases with straightforward and objective outcomes. It is telling, then, that the only drug to have received approval based on an observational study is tacrolimus for lung transplant recipients, wherein the primary endpoint was post-transplant survival.

Considering the methodological issues sponsors face in working with RWD, FDA’s cautious stepwise approach to implementing the RWE program is understandable, but it is frustrating to some. As drug developers and the agency continue to wrestle with these methodological issues, we can expect that successful RWE regulatory submissions will continue to be for drugs with well-established safety and efficacy profiles.   

References

  1. FDA, Framework for FDA’s Real World Evidence Program, December 2018, p. 22, https://www.fda.gov/media/120060/download.
  2. FDA, Draft Guidance for Industry, “Considerations for the Use of Real-World Data and Real-World Evidence to Support Regulatory Decision-Making for Drug and Biological Products,” December 2021, https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-real-world-data-and-real-world-evidence-support-regulatory-decision-making-drug.
  3. Maria T. Nguyen, Kevin Wright, Barbara Fuller, and Lashawn Griffiths, sNDA 207103-S-008, Multi-Discipline Review, 5 March 2019, 50, https://www.accessdata.fda.gov/drugsatfda_docs/nda/2019/207103Orig1s008.pdf.
  4. Ibid.
  5. FDA, Draft Guidance for Industry, “Real-World Data: Assessing Electronic Health Records and Medical Claims Data To Support Regulatory Decision-Making for Drug and Biological Products,” September 2021, https://www.fda.gov/regulatory-information/search-fda-guidance-documents/real-world-data-assessing-electronic-health-records-and-medical-claims-data-support-regulatory.

About The Author:

Sean Hilscher is vice president of regulatory policy at Greenleaf Health. He works with clients on a range of regulatory and policy issues, including real-world evidence and digital health. Prior to Greenleaf, he managed a suite of real-world evidence platforms for providers, payers, and life science companies. He has an MBA from Georgetown University and an MA in politics, philosophy, and economics from the University of Oxford.