By Ed Miseta, Chief Editor, Clinical Leader
Follow Me On Twitter @EdClinical
In March 2019 I had the opportunity to interview Jennifer Newman, Global Project Leader, Regulatory Affairs/Clinical Operations for Celldex Therapeutics. Newman was part of the largest implementation of risk-based monitoring (RBM) and shared insights from her experience, discussing the benefits and challenges of RBM and what companies should be prepared for when adopting it. This Q&A highlights some of Newman’s comments. You can view the entire interview here.
This is part one of a two-part article. Part two can be viewed here.
Ed Miseta: Let’s start off with a short history of risk-based monitoring. There have been a lot of changes, both in technology and on the regulatory side. Can you briefly go over those and let us know where we stand today?
Jennifer Newman: When we think about technology, we hope that when we implement new systems, we are doing it in a way that delivers a benefit. Before electronic data capture (EDC) was put in place, people would perform double and triple data entry from the medical records at sites into the databases. The process was very time consuming.
EDC gave sites a way to enter their data directly and gave the sponsor access to trial information right away. This had everybody very excited about how quickly we could get to the data.
But technology can be a double-edged sword, and I think that speed also created a big emphasis on 100 percent source data verification (SDV). We had fast access to our data, so let’s make sure that every box is checked and everything looks exactly as it should.
As it turns out, the data shows that the data in our systems is largely accurate as entered. If any correcting is going on, it’s on the order of three or four percent of the data. With all that effort going into source data verification, it became apparent that the practice is awfully costly, time consuming, and rate limiting. Perhaps there’s a better way.
A couple of things then happened in parallel. One was a major revision to the GCP guidance, and that guidance shifted in three areas. First, it took emphasis away from processes and documents and made clear that the goal of effective good clinical practice is to ensure data quality and the protection of human subjects.
That really was a very clear, fundamental change in the most recent ICH revision. The second change was that it introduced the concepts of risk-based monitoring and an emphasis on relying on the technology. We have access to the data, so let’s make sure we’re looking at it in a centralized way, not worrying so much about every single data point, and not treating all data points equally, because they are not equal.
Third, and probably the most important change in the revision, was CRO oversight. The revision emphasizes that sponsors still need to maintain quality oversight. That is something they cannot outsource. I think for small companies this is a very interesting point, because we do tend to outsource quite a bit.
That is what has gotten us to where we are today. I have a lot of conversations with people, and I have found they are struggling with RBM: how to implement it, and where it all fits into the big picture.
Ed Miseta: For a company that wants to get into risk-based monitoring, there are many ways to do it and different solutions they can incorporate. Are you able to give an overview of what those different solutions are and how they would work?
Jennifer Newman: There is no prescriptive way of doing this. There are tools out there from industry, particularly from TransCelerate. There are very large spreadsheets available to anybody, not just TransCelerate member companies, that are basically laundry lists for companies to use.
They basically pooled all their ideas about what could possibly go wrong in a clinical trial and tried to capture them. For most people, I think this approach would be a little daunting. If your goal is to implement a quality management system, you first need to convince management that it is important. Then you must show them how you’re going to do it. Then you need to open that spreadsheet, have a meeting, bring everyone in to go through the items, and rank every single piece of data according to its importance for that study. Then, for every future study, you’re going to have to do it all over again. It’s not easy.
I think the size of those documents, the sheer volume of information, and confusion over where to even begin are giving people pause. That’s not necessarily a bad thing, because I’m not a firm believer in throwing everything into a spreadsheet and managing it that way.
I was involved in one of the very first studies that was part of TransCelerate’s effort to implement risk-based monitoring. It was a very large study in diabetes. The study was already ongoing when RBM was implemented, and the decision came from the top down. The implementation decision was made because RBM was expected to cut 50 percent of the monitoring costs of this huge clinical study.
The budget for the study was in the hundreds of millions, so you can imagine how that got everyone’s attention. The system was very rudimentary, especially compared to the more sophisticated systems we have today. Algorithms flagged things when you hit certain percentages within your metrics. It was all very calculated. Your spreadsheet would turn red in certain fields, and the team would focus on those red flags. In retrospect, I don’t know that it was the right approach, because people will march in whatever direction you tell them.
For example, if you say, “Here’s a spreadsheet that shows you the issues in your study – take care of them,” they are going to start to focus on getting those red cells out of that spreadsheet. Unfortunately, I don’t think it’s that simple and I don’t think that you really want to rely on something like that.
Today, solutions are a bit more sophisticated, and we now have dashboards, for example. The takeaway for me is that you still must look at things holistically and qualitatively, in addition to quantitatively, and really understand the drivers of your study. That is how you will get the most benefit from your time and effort.
I think implementing these kinds of systems really starts at the beginning of the study with a very, very thorough risk evaluation. You need to take what you learned from past studies and past experiences and use that knowledge to decide what you’re going to look at and what you’re going to act on, especially if things start to go south.
To me, that is good, effective risk-based monitoring, and it’s not just about necessarily reducing source data verification. It’s about bringing the intelligence of quality management into a centralized point and making sure that the study team leadership is having those important discussions.
Ed Miseta: You helped implement RBM in a large-scale study. Many of my readers work for small- to mid-sized companies and may wonder if RBM is appropriate for their company or pipeline. Do you have any advice for them?
Jennifer Newman: I have seen different RBM systems. Smaller companies may not have RBM implemented in a formal sense. However, I think of RBM and quality management more as a process.
RBM vendors will sell you on all the advantages of their systems. If it’s not for you, it’s not for you. If you’re not at the right scale to implement it, then it won’t make sense. Not every company needs to implement a formal risk-based monitoring approach.
Companies can still do 100 percent SDV in a first-in-human study, and I think that’s appropriate. RBM becomes more appropriate when you start to grow and get into later Phase 2 and large Phase 3 studies. That’s when it begins to make more sense. But every company should still implement a risk quality management system.
In other words, go through the exercise of looking at your protocol and looking at the data, then set up processes so that over the course of the study you are periodically checking in on a few key indicators, and manage it that way.