Guest Column | March 5, 2026

The Latest On RBM Uptake? Tufts Says Most, But Not All, Pharma On Board

A conversation with Beth Harper, project manager and senior research consultant, Tufts CSDD


The latest Tufts CSDD Impact Report, which explores how far the industry has come with centralized and risk-based monitoring (RBM), reveals steady progress in adoption. The report found that three-quarters of large sponsors and about half of smaller ones have moved toward partial or full adoption. But it also revealed that some functions, particularly those in biostatistics, remain hesitant.

Tufts CSDD’s Beth Harper points to encouraging gains in data quality and site oversight, with companies now tailoring monitoring intensity to each study’s and site’s risk level. At the same time, she notes, true confidence in RBM may depend on clearer validation from regulators. (That’s yet to be seen.)

In this Q&A, Harper discusses how companies might better define risk, measure success, and understand central monitoring activities to write the next chapter in evolving clinical oversight.

Clinical Leader: The latest IMPACT report reveals findings from a survey on centralized and risk-based monitoring. A highlight is that most sponsor companies (75% of large and 51% of small) have partially or fully implemented centralized and risk-based monitoring. How did those findings align with expectations or previous data?  

Beth Harper: It’s hard to compare our results directly to prior studies, as those looked at adoption of specific aspects of RBM rather than the general definition of overall adoption used in our study. That said, we saw some uptick in the percentage of sponsors reporting relatively high levels of adoption.

The survey also revealed that sponsors who had implemented it portfolio-wide had been at that stage for an average of three years. What does that tell us about implementation timelines and how companies might expect to phase in this new workflow?  

In general, we find adoption of any new initiative or innovation can take up to 30 years. While we are seeing progress in recent years, it’s been over 13 years since some of the first regulatory guidelines related to RBM were published.

Data points to time savings, but it appears there are also wins in data quality, integrity, and efficiency. Just over half of sponsors reported that centralized monitoring and RBM greatly or somewhat improved data quality and integrity. How might these findings assure hesitant stakeholders (clinical development and biostatistics) that this monitoring approach is viable?

Our research supported common impressions that medical and biostats functions are still the most resistant to RBM. While the levels of resistance are lower once organizations reach full-scale implementation, they persist nonetheless. Through our interviews and discussions, we believe it will take more compelling data from regulatory agencies to convince these functions that RBM is acceptable. We are hopeful that future research can drill into findings from audits and inspections as well as regulatory submissions to provide more compelling evidence to help the skeptics get more comfortable.

Overall, how did the survey findings reinforce what you already knew or expose something new?

People are really trying to answer this question that you picked up on: How come we're so resistant? How come our colleagues in stats and medical or clinical development are still so worried?

The regulators said back in 2013, “You should take the risk-based approach.” We're on board. We're trying to do this. Our CROs are becoming more agreeable to this, although it's outside of their traditional model. The operations people are changing their resourcing models for CRAs and investing in central monitoring technology. They're all over it, but they still have this uphill battle with some of these functional groups that are so resistant.

Is this data going to change anyone's mind? I think the answer is “no.” Until we get the regulatory agencies that say, “We've done 200 inspections for drug applications under an RBM approach, and there were no significant findings” or “We didn't reject the data,” it's still going to be challenging to get these other groups comfortable with a different approach. And that data is incredibly hard to get.

Still, whether people are reluctant or not, we saw an absolutely clear pattern related to site risk level. As companies determine you're a high-, medium-, or low-risk site, they're deploying their resources on a risk-based approach. And that's the whole point of RBM. It was really encouraging to see evidence that if it's a higher-risk site, monitors go more often, and they spend two days on-site versus one day. I think there was maybe this industry impression that a CRA can handle significantly more sites now because they're doing everything remotely. We didn't see that. We saw a little bit of an increase in the number of sites allocated to a CRA, but they still have to do the work, whether it's on-site or remote.

It's just that they're focusing their efforts on-site for the sites that really need it. The fact that they are segmenting their sites reinforced that companies get this risk-based approach.
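The risk-tiered pattern Harper describes (higher-risk sites getting more frequent, longer visits) can be sketched as a simple plan lookup. This is purely illustrative: the tier names, visit intervals, and day counts below are hypothetical, not figures from the Tufts report or any sponsor's actual monitoring plan.

```python
# Hypothetical risk-tiered monitoring plan: visit cadence and on-site
# time scale with site risk. All numbers are illustrative assumptions.
MONITORING_PLAN = {
    "high":   {"visit_interval_weeks": 4,  "days_on_site": 2},
    "medium": {"visit_interval_weeks": 8,  "days_on_site": 1},
    "low":    {"visit_interval_weeks": 12, "days_on_site": 1},
}

def plan_for_site(risk_tier: str) -> dict:
    """Return the monitoring plan for a site's assessed risk tier."""
    try:
        return MONITORING_PLAN[risk_tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {risk_tier!r}")

# A high-risk site gets visited more often, for longer, than a low-risk one.
print(plan_for_site("high"))
```

In practice the tier itself would be reassessed continuously by central monitoring, so a site's plan can tighten or relax as its data changes.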

How do you get regulators to share those instances of successful RBM use? How would they publicize that or communicate that to the industry?

Even companies are likely reluctant to share this information because, if they had a bad experience, they may not want to publish it. There's probably lots of other reasons for failure or a poor experience; it's not just because of RBM. So, I think that's going to be a challenge in the coming years — to get solid evidence from that perspective and different agencies around the world, not just the FDA.

How did you define the risk level of sites?

Million-dollar question. And that was one of the limitations of this research. We left it open to interpretation, and everyone has different criteria. Some may do it based on deviations. Some may do it based on higher-than-average screen fails. Some may do it on data anomalies. Some might have a sophisticated algorithm. That's the whole value of central monitoring. You've got these systems that can look at trends and find outliers. For example, sites that have no adverse events reported or sites that are enrolling much faster than anybody else. These systems now are much more sophisticated at looking at these trends and outliers, but how an individual company defines high-, medium-, and low-risk is different.
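The kind of outlier screen Harper describes (for example, a site reporting no adverse events while its peers report many) can be sketched as a simple statistical check. This is an illustrative z-score screen under assumed data and an assumed threshold, not any vendor's actual central-monitoring algorithm, which would typically be far more sophisticated.

```python
from statistics import mean, stdev

def flag_outlier_sites(site_metrics: dict, z_threshold: float = 1.5) -> set:
    """Flag sites whose metric deviates strongly from the study-wide norm.

    site_metrics maps site ID -> a numeric key risk indicator
    (e.g., adverse events per subject, or enrollment rate).
    Returns the site IDs whose |z-score| exceeds the threshold.
    """
    values = list(site_metrics.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:  # all sites identical: nothing stands out
        return set()
    return {site for site, v in site_metrics.items()
            if abs(v - mu) / sigma > z_threshold}

# Hypothetical data: site S4 reports zero adverse events per subject
# while its peers cluster around six — the classic red flag.
ae_per_subject = {"S1": 6.1, "S2": 5.8, "S3": 6.4, "S4": 0.0, "S5": 5.9}
print(flag_outlier_sites(ae_per_subject))  # → {'S4'}
```

As Harper notes, the hard part isn't the arithmetic; it's that each company chooses its own indicators and thresholds, which is exactly the standardization gap she flags below.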

That's a future opportunity. Can we come up with a standard definition for these key risk indicators or thresholds? And it may vary by study. Certain studies will have different characteristics that could put a site in a category of higher or lower risk.

But with the central monitors, they are driven more by the risk level of the trial. Let's say I'm a Big Pharma company and I have 50 trials. Rare disease, oncology, pediatric studies — those are going to be higher risk. I'm likely to allocate more resources there versus my diabetic outcome studies. Even how they classify the trial and deploy resources based on that, let alone within the trial, and the mix of sites — it's a layered approach.

Tufts also considered the CRA perspective, finding that 80% had sufficient time to fulfill their on-site duties and even had time for activities outside chart/data review. How might this shift affect timeline pressures on CRAs? And how might it shift their on-site duties?

In terms of shifting CRAs' work, our study found that CRAs believe RBM has greatly helped them focus on higher-value, higher-impact areas during on-site monitoring visits (e.g., protocol compliance; broader data quality, integrity, and consistency activities; and site relationship activities) compared to traditional monitoring plans built on a high percentage of SDV (source data verification) and frequent site visits.

As for timeline pressures, I’m not sure we can make any interpretations there. The overall workload (in terms of the number of sites a CRA can support) increased only slightly, from an average of nine to 12 sites per CRA. While they spend one to two days on-site (based on the site-risk level), the change is more a shift from on-site activity to remote site assessments.

Reflecting on these findings, what questions still remain? Or what new ones have emerged?

As noted above, we hope we can really explore the regulatory agency experience with RBM in future research. This will provide a lot of insights. We are also eager to better understand central monitoring activities (roles, activities, adequacy of technologies supporting remote data evaluation, outliers, trends, etc.). One of the most notable findings from our research was the extent to which site-risk level assessments inform on-site monitoring visit frequency (the higher the risk, the more often the CRA visits the sites). This is part and parcel of the entire premise of RBM. However, risk level was left open to interpretation by the respondent, so we are interested in further exploring how sponsors are categorizing site risks, as well as how effective RBM is in reducing or eliminating the percentage of high-risk sites (or converting high-risk to low-risk sites).

About The Expert:

Beth Harper is a seasoned clinical research professional who has held multiple roles across all types of organizations (sites, CROs, sponsors, service providers) throughout her 40-year career. She has passionately pursued opportunities to optimize the clinical trial process, accelerate patient enrollment, enhance sponsor/site relationships, and build the competency of the clinical trial workforce. She is senior research analyst for the Tufts Center for the Study of Drug Development (CSDD) and the president of Clinical Performance Partners, Inc.