RWE Is Ready — Decision Making For Pharmaceuticals Isn't
By Dan Schell, Chief Editor, Clinical Leader

Real-world evidence (RWE) has reached an inflection point in clinical research. Regulators are embracing it, data sources are expanding rapidly, and the tools to analyze it have never been more powerful. Inside pharma, though, adoption still lags.
That disconnect came into focus for me after Tufts’ Ken Getz, who serves on Clinical Leader’s editorial board, recommended Gorana Capkun of Merck KGaA, Darmstadt, Germany, as a panelist for a recent webinar I hosted on real-world data (RWD) and RWE. The panel was already full, but the conversation was worth having separately. Between COVID-era data visibility, evolving regulatory guidance, and a flood of new data sources, RWE has moved from theoretical to practical. The question now is not whether it matters, but whether pharma organizations are truly ready to use it.
One thing became clear quickly in my conversation with Capkun: The industry has made progress, but adoption is still uneven. “Throughout my career, I’ve been interested in how people make decisions, and what data they leverage and what they don’t,” she told me. That gap between available data and actual decision-making is where the opportunity, and the challenge, really begins.
Regulatory Momentum Is Real — But Expectations Are High
The growing role of RWE is being driven in large part by regulators. The FDA’s guidance on the use of RWD and RWE makes it clear these sources can support regulatory decision-making, including new indications and post-approval requirements. But that acceptance comes with conditions, particularly around data relevance, reliability, and methodological rigor.
That flexibility is also showing up in how the FDA is thinking about trial design itself. The agency has long allowed for a single adequate and well-controlled trial to support approval when backed by confirmatory evidence, as outlined in its guidance on demonstrating substantial evidence of effectiveness.
In practice, though, this approach isn’t new, and it isn’t universal. Capkun noted that single pivotal trial strategies have been used for years in areas like oncology and rare diseases, where traditional trial designs are often less feasible. Outside of those settings, acceptance has been more limited.
Even when regulators signal flexibility, alignment isn’t guaranteed. “Often, FDA and EMA would ask for different clinical studies,” she said, “leading companies to run two pivotal trials to satisfy them both.”
That’s why what the industry now refers to as a “one pivotal trial” model is less a formal policy shift and more an evolving willingness to rely on a broader evidence package around a single study, which still depends heavily on the specific disease area and the expectations of multiple regulators.
In that model, RWE is not just supplemental; it can become part of the confirmatory layer that strengthens a single trial. But as explored in a recent Clinical Leader article, The One-Trial Trap — Why The FDA’s Efficiency Push Actually Doubles Your Operational Risk, that shift cuts both ways. While it creates opportunities for faster development, it also concentrates risk, placing far more pressure on data quality, study design, and the supporting evidence ecosystem around that trial.
Capkun sees that shift as both meaningful and necessary. “I’m really pleased with what I’m seeing with the FDA,” she said. “The openness to accept other sources of evidence than clinical trials is a great opportunity, not just for industry, but for patients.” In some cases, waiting for a traditional randomized trial is not just inefficient, it can be impractical or even unethical. When that happens, RWE offers a viable path forward.
The EMA is moving in a similar direction, advancing its use of RWE through initiatives like the Data Analysis and Real-World Interrogation Network (DARWIN EU), a coordinated network designed to generate real-world insights for regulatory decision-making across Europe.
Still, regulatory openness has not automatically translated into operational adoption. Inside pharma organizations, RWE is often treated as an add-on rather than a core component of development strategy. “Do we have full adoption? Not yet,” Capkun said. “Is it part of operations? Not fully.” She was even more direct about the underlying issue. “What became less of a ‘nice to have’ externally is still seen internally as an investment, and often still a nice to have.” That perception gap shows up most clearly in budgeting and planning, where the return on investment for RWE is not always as immediately obvious as it is for a traditional clinical trial.
We Need Better Data Usability
If there is one misconception about RWE, it is that the industry simply needs more data. In reality, the challenge is not volume but usability. “The challenge is fragmentation,” Capkun explained. “Data exists, but it’s not necessarily collected with the question you have in mind.” That creates limitations around depth, completeness, and connectivity. Key endpoints may be missing, datasets may not link together, and in markets like the U.S., patients moving between health plans can disrupt longitudinal tracking.
This is where the concept of “fit-for-purpose” data becomes critical. It is not enough for data to be available. It has to be relevant to the specific question and reliable in how it was collected. “In countries like Scotland and Croatia, you have a unique personal identifier,” Capkun explained. “I come from Croatia, and I was assigned that number at birth. It stays with you throughout your life—when you go to school, when you get married, when you see a doctor, when you start or leave a job. That number allows you to connect all of your data. Not just medical history, but also how you lived and the context around your life. You can begin to see how life events connect to disease, and you understand much more holistically how people navigate their health over time.”
I loved that anecdote because it clearly explained that it’s not just about data quality; you need to consider the data’s relevance and reliability. All of that seems to align with how regulators evaluate RWE submissions and, I’m guessing, is how some pharma companies are approaching this data strategy.
Plan Early For RWE
Similar to the misconception about the need for more data, Capkun pointed to the common misstep of treating RWE as something you analyze after the fact, rather than something you design up front. “You should pre-specify your protocol and your analysis plan before you see the data,” she said. “If that is not done and I were a regulator, I would ask, ‘Are you cherry-picking?’”
That distinction matters. When RWE is used retrospectively, it raises immediate questions about bias and credibility. When it is designed with the same discipline as a clinical trial — with clear protocols, defined patient cohorts, endpoints, traceable data, agreed-upon analytical methods, and transparency — it becomes far more defensible.
And the tools to do this are not new. Approaches like target trial emulation have been around in the statistical community for years. What’s changing is how consistently they are being applied, how early they are discussed and agreed upon with regulators, and how closely those same regulators scrutinize them.
As RWE moves closer to the center of regulatory decision-making, especially in models that rely on a single pivotal trial, that level of discipline is no longer optional. It is the difference between supportive evidence and something that regulators and other decision makers can actually trust.