From The Editor | April 22, 2024

Why Can't We Compare Site Performance Measures?


By Dan Schell, Chief Editor, Clinical Leader


There are so many instances in the pharma industry where everyone agrees that if competing companies simply shared their data with one another, the timeline for creating new drugs would likely shorten. Of course, as we all know, it's not that simple.

It was one of these “wouldn’t-it-be-great-if-we-all-worked-together” ideas that grabbed my attention at this year's SCOPE conference, during a panel discussion called “Modernizing Site Engagement and Enablement for the Trials of the Future.” One of the panelists was Cerdi Beltré, COO of Innovo Research, a network of 10 multispecialty physician medical groups that regularly conduct clinical research.

Beltré was making the case for a global, objective way to measure investigator site performance on metrics such as startup duration, screen-pass rate, and enrollment rate. When I followed up with her after the conference, she admitted this is not a new concept; it has been discussed for at least 10 years by various companies and associations. While working at IBM many years ago, she pursued the concept far enough to secure a patent on it. “I believe the patent is called ‘monitoring clinical research’ or something to that effect, and it’s just sitting on a shelf somewhere,” she says. “I wanted to create an industrywide — not study-level — central repository where all this information could reside. I think, overall, people acknowledged this would be helpful, but I don't believe there is one company that has taken on this challenge.”

HELPING DEFINE “THE STANDARD” FOR STUDY ELEMENTS

Beltré says having this comparison data could improve a site’s performance, and thereby help the overall trial. Currently, sponsors, CROs, IRBs, and some tech vendors each hold portions of this data for each study across multiple sites. The key would be for each of those data aggregators to use the same form of measurement. A simple example would be how long it takes a site to complete study startup (as long as “startup” is defined uniformly). That data could be averaged and then presented to sites for comparison. “You could look into study conduct, data entry, quality … there are all sorts of options. Then, if you noticed any trends, you could give that information back to the individual sites to drive behavior change. You’d be letting them know what the new standard is for one of these study elements.”

On a smaller scale, Beltré has seen the benefits of this kind of data sharing at Innovo. Each week the network’s sites meet to discuss the status of various performance metrics. This kind of transparency uncovers any challenges a particular site could be having, and the group as a whole works to find a solution rather than leaving it up to the site in question. “So, if a site is struggling with, for example, recruiting for the same study, we gather the leads of all the sites, and we share best practices,” Beltré says. “We may even send an experienced staff member from one site to go to the site that is struggling to help out. We do that because that one site’s success is also our whole network’s success.”

UNDERSTANDABLE TREPIDATION

Beltré isn’t naïve about the difficulties surrounding this data-sharing model. This is, after all, a very risk-averse industry, and broad access to this kind of data could be perceived as a threat to existing business relationships. “Recently, I was talking with a risk-based monitoring company,” Beltré says. “They have information about sites, which they give to their customer, the sponsor or the CRO. But they cannot give that data to the sites directly because they do not own the information. So it needs to come from the owners of the information. And there’s hesitation in giving the sites some of that.” She explains that the hesitation likely stems from a belief that sites will be sensitive about seeing that data. They may not trust the numbers, or they may worry that the data could damage their relationship with the sponsor or CRO, which is why the measures need to be objective and account for context. If a site started later or was brought in as a rescue site, for instance, that should be factored in.

“I know that I'm not alone thinking about this concept,” Beltré says. “We all want to know [about this comparison data] because we care about the kind of work we do. We know how important it is to bring products to market faster, because there are patients waiting for these therapies. And so anything that we can do as an industry to drive that forward faster is super important.”