RBQM And Centralized Monitoring Need Action
By Dan Schell, Chief Editor, Clinical Leader

It’s weird to think of the COVID‑19 pandemic as having been something positive, but in a way, it was for the clinical trial industry. Speed was the name of the game, with sponsors rapidly scaling programs and making (brace yourself) … changes to their old routines! (Gasp!) Outsourcing ruled, and decentralization became both a “revolutionary” concept and what many sites viewed as a fad that would wither away faster than the initial enthusiasm for SIP.
Of course, with six years of hindsight, we know that all of our fears regarding these changes didn’t come true, but neither did the promises. And that got me thinking about some other topics that were frequently being discussed at the time, namely, risk‑based quality management (RBQM), centralized monitoring, and just plain ol’ monitoring oversight. After a quick search on our Clinical Leader website, I saw articles and webinars dedicated to examining what “ownership” truly means in a risk‑based environment. But I wanted to hear from someone who has been on the ClinOps front lines, so I reached out to Marci Thear, MPH, who has worked at Merck, ICON, and Moderna.
The Limits Of A Pandemic‑Era Operating Model
Thear acknowledged that in those early days of large‑scale COVID trials, outsourcing was not a strategy — it was a necessity. “We outsourced everything because we had to get to market as soon as we could,” she explained. “There wasn’t time to build internal infrastructure or hire at the scale that was required.”
But pandemic conditions also masked structural weaknesses. Oversight functions were often fragmented across CROs, third‑party vendors, and internal teams reviewing deliverables rather than owning processes. As development programs diversified beyond emergency use scenarios, that fragmentation became harder to justify. “Monitoring oversight is always the responsibility of the sponsor,” she said. “No matter how much you outsource, accountability doesn’t move.”
From Review To Ownership
A big part of the problem, Thear suggested, was that many sponsors spent the pandemic years reviewing work rather than truly owning it. Risk management plans were being developed, centralized monitoring outputs were being generated, and oversight reports were being circulated, but too often those activities lived in separate buckets. (We do like our silos in this industry.) What looked like a coordinated system was, in reality, a series of handoffs.
That model held together when speed was the only thing that mattered. It becomes harder to defend when programs expand and timelines normalize. “We had all the pieces in place,” she told me. “But they weren’t always connected in a way that gave us real visibility.”
In practical terms, that meant sponsors were often relying on CROs to surface issues, vendors to validate performance, and internal teams to approve what came back. At some point, that stops being oversight and starts looking like layered dependence. You can move fast that way, but you’re not necessarily in control.
That realization is what’s driving a shift now. No, sponsors are not abandoning outsourcing, but they are becoming far more deliberate about what they keep in-house. Thear believes monitoring oversight, in particular, is moving back under the sponsor umbrella.
What that looks like in practice is the creation of independent oversight functions that evaluate both site performance and how CROs are actually executing. Instead of relying on someone else’s summary, teams are building their own reporting structures, issue-management workflows, and escalation paths. To me, this all sounds like it’s less about adding another layer and more about removing the blind spots that come from too many intermediaries.
RBQM As A Living Process
If RBQM felt theoretical a few years ago, Thear’s description of it is anything but. At its core, she sees it as a continuously evolving process tied directly to what matters most in a study. “You start with your critical endpoints and your critical data,” she said. “Then you build your thresholds around that and adjust as the study moves forward.” Sounds obvious, but that last part is where many organizations still struggle. Risk plans are often treated as static documents created at study start, rather than tools that should evolve over time.
In the beginning, the focus may be on enrollment quality and whether sites are applying inclusion/exclusion (I/E) criteria correctly. As the study progresses, attention shifts to retention, data timeliness, and operational consistency across sites. A meaningful RBQM approach requires teams to adapt along the way, not just check whether predefined thresholds have been crossed. “It’s not a set-it-and-forget-it plan,” she said. “You’re constantly looking at what’s trending and deciding where you need to intervene.”
Centralized monitoring plays a key role in making that possible, but not in the way it’s often marketed. Yes, it reduces manual effort, but the bigger advantage is earlier visibility. Instead of reacting to issues after they surface in reports or audits, teams can identify patterns while they’re still manageable. Thear described it as catching problems “while they’re still in the yellow,” when there’s still time to retrain sites, adjust processes, or address gaps before they escalate. That shift — from reactive to proactive — is where RBQM starts to deliver on its promise.
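To make the “still in the yellow” idea concrete, here is a minimal sketch of how threshold-based risk signals might be classified per site. Everything in it is illustrative: the metric names (screen-failure rate, query aging), the sites, and the green/yellow/red cut points are hypothetical, not anything Thear or the FDA prescribes, and real RBQM platforms are far more sophisticated.

```python
# Illustrative sketch only: traffic-light classification of key risk
# indicators (KRIs). Metric names and thresholds are hypothetical.

def classify(value, yellow, red):
    """Return a traffic-light status for a single KRI reading."""
    if value >= red:
        return "red"
    if value >= yellow:
        return "yellow"
    return "green"

# Hypothetical per-site KRI readings.
site_metrics = {
    "Site 101": {"screen_fail_rate": 0.12, "query_age_days": 4},
    "Site 202": {"screen_fail_rate": 0.31, "query_age_days": 18},
}

# In a real study these bands would be built around the critical data
# and revisited as the study progresses; these values are made up.
thresholds = {
    "screen_fail_rate": {"yellow": 0.25, "red": 0.40},
    "query_age_days":   {"yellow": 10,   "red": 30},
}

for site, metrics in site_metrics.items():
    for kri, value in metrics.items():
        status = classify(value, **thresholds[kri])
        if status != "green":
            print(f"{site}: {kri} = {value} -> {status}")
```

The point of a sketch like this is the “yellow” band itself: a site that crosses it gets attention (retraining, a process tweak) while the issue is still cheap to fix, rather than after it turns red.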
What Regulators Actually Expect From RBQM
One reason this shift is happening now is that regulators have been pushing in this direction for years. The FDA has made it clear that sponsors should focus on critical data and processes, use centralized monitoring where appropriate, and move away from 100% SDV as the default. (See the FDA’s guidance on a risk-based approach to monitoring: https://www.fda.gov/media/116754/download.)
In other words, RBQM isn’t optional. It’s the model regulators expect sponsors to operate under. But adopting RBQM tools is not the same as implementing RBQM. Thear made that distinction clear. “You can have all the outputs in front of you,” she said, “but if you’re not acting on them, you’re not really doing RBQM.”
That gap — between visibility and action — is where many organizations are still catching up. During the pandemic, it was often enough to demonstrate that risks were being tracked. Now, there is greater scrutiny on whether those signals are actually driving decisions that protect patient safety and data integrity.
Centralized monitoring plays directly into that expectation, but regulators have been equally clear that it should complement, not replace, on-site monitoring. The burden is on the sponsor to justify how its monitoring strategy aligns with the specific risks of the study, not just to show that a system is in place. For additional context, this aligns closely with the International Council for Harmonisation E6(R2) guidance (see section 5.18), which emphasizes ongoing risk assessment and adaptive monitoring strategies.
From Signals To Decisions
That expectation brought me — and our conversation — back to execution. Centralized monitoring only works if the signals it generates are tied to real decisions. Otherwise, it becomes just another layer of reporting. “It’s not just about seeing the signal,” Thear said. “It’s about what you do with it.”
In practice, that means acting earlier. Trends in lost-to-follow-up patients, delays in lab reporting, or inconsistencies in how sites apply protocol criteria may not seem critical on their own. But when identified early, they give teams a chance to intervene before those issues impact study outcomes.
This shift from reactive to proactive is also changing the role of the CRA. As noted in Factors Changing The Way CRAs Monitor Trials, monitoring is evolving from a site-centric activity into a more data-driven function, where CRAs rely on centralized signals, dashboards, and analytics to guide where and how they intervene. In that sense, centralized monitoring isn’t replacing traditional oversight. It’s reshaping how it’s applied, with more emphasis on identifying trends across sites rather than verifying individual data points one visit at a time.
That intervention could take several forms: retraining sites, adjusting operational processes, or revisiting assumptions made at study startup. What matters is that the signal leads to action, not just awareness (seeing a pattern here?). It also requires alignment across functions. Centralized monitoring teams may identify the issue, but ClinOps, data management, and site-facing teams need to respond in a coordinated way. Without that connection, even the best RBQM system becomes a passive tool rather than an active part of study management.
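The signal-to-action handoff described above can be sketched as a simple routing step: each flagged signal gets an owning function and a next action instead of sitting in a report. The ownership map, signal names, and escalation rule below are all hypothetical, purely to illustrate the cross-functional coordination the interview describes.

```python
# Hypothetical sketch: routing a centralized-monitoring signal to an
# owning function with an escalation path. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Signal:
    site: str
    kri: str      # e.g. "lab_reporting_delay"
    status: str   # "yellow" or "red"

# Illustrative ownership map: which function responds to which signal.
OWNERS = {
    "lost_to_follow_up": "ClinOps",
    "lab_reporting_delay": "Data Management",
    "protocol_deviation_trend": "Site-Facing Team",
}

def route(signal):
    """Turn a signal into an assigned action, not just a report entry."""
    owner = OWNERS.get(signal.kri, "Central Monitoring Lead")
    if signal.status == "red":
        action = "escalate to sponsor oversight"
    else:
        action = "retrain site / adjust process"
    return {"site": signal.site, "owner": owner, "action": action}

task = route(Signal("Site 202", "lab_reporting_delay", "yellow"))
print(task)
```

The design point is the explicit owner on every signal: without that assignment, even a well-tuned RBQM system degrades into the “passive tool” failure mode the article warns about.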
Turns out, during COVID, we learned how to move faster. Now, we’re learning what it really means to manage risk instead of just reacting to it.