By Ed Miseta, Chief Editor, Clinical Leader
Beth Harper has devoted more than 28 years of her life to the world of clinical research, including time at a sponsor, CRO, service provider, and start-up company while also serving as an adjunct associate professor at George Washington University. She is currently president of Clinical Performance Partners, where she works with companies to optimize study feasibility and site selection and helps to improve site relationship management practices. Although her passion is to work on proactive protocol and site performance optimization strategies, she will tell you that most of her time is spent rescuing studies.
“While I don’t know that we have ever formally defined the term in the industry, most see ‘rescue’ linked to studies where enrollment falls significantly behind plan,” she says. “In my experience it can also refer to situations where the sponsor needs to rescue the study from a CRO who is failing to deliver results. This can involve issues such as site activation, data quality, or other performance shortfalls. Less commonly, the term may be associated with rescuing an individual investigative site that needs intervention, often because of enrollment issues.”
In this Q&A, Harper shares her insights on the factors that will lead to a study rescue and what companies can do to prevent that situation from arising.
Ed Miseta: Why do studies go bad from an enrollment perspective? Are there some common factors that lead to a study requiring rescue?
Beth Harper: That’s a great and challenging question. Having spent over 20 years troubleshooting studies in rescue, I have done a detailed root cause analysis of the problem and have mapped over 150 causes as to why sites fail to enroll. As you can imagine it’s a complex and multifaceted issue.
Miseta: In most cases, are the issues generally related to people or technologies?
Harper: I believe it’s a mix of technology, process, protocol, and other related factors. But the majority of root causes have a people component to them: sponsor/CRO/site relationships, training and communication factors, and a whole host of issues related to patient awareness, education, and engagement.
Miseta: Do in-house studies ever require rescue? What are some of the circumstances that will lead to that situation?
Harper: Absolutely. Rescue situations are not unique to outsourced studies at all, and the same root cause issues prevail regardless of whether or not a CRO is involved. Being overly optimistic upfront, failing to spend enough time planning because of the rushed timelines everyone is operating under, not taking advantage of the lessons learned from previous trials, and not asking sites early on what they need to be successful are all contributing factors. On top of that, overly complex protocols, lack of coordination across many vendors, and not having a clear plan and process for who is engaging and interacting with the site are other fundamental root cause issues. For example, sites can quickly get overwhelmed and frustrated if they aren’t provided with adequate resources, training, and support. If they have to contend with multiple people hounding them with multiple requests, yet aren’t able to get their questions addressed in a timely manner, then sites can shut down. If that happens, the entire implementation of the study can fall apart, meaning sites fail to continue screening and enrolling, entering data, responding to queries, and so forth.
Miseta: What are some of the disconnects that exist between sponsors and CROs that can lead to a troubled study?
Harper: There are many areas where sponsors can be more proactive around recruitment plans, quality management plans, and site training and engagement plans. That can help prevent a rescue situation regardless of whether or not the study will be outsourced to a CRO. Still, there are some specific disconnects that occur in outsourced trials that do compound the problem. Here are just a few questions that everyone involved in the study should be asking themselves:
- First and foremost, do all parties recognize that issues will likely occur and are they on the same page for how these will be communicated and managed? For example, is the sponsor open and receptive to hearing about potential issues before they progress too far?
- Is the CRO willing to ask for help and support or does it feel it’s working in isolation on these issues?
- Does the CRO feel that its processes don’t allow it to communicate issues, thereby leaving it to the sponsor to discover them?
- What communication processes are in place and how transparent will the communication be across the key stakeholders?
- What are the expectations for both parties in terms of the key risk areas that will occur in the trial (enrollment, compliance, and quality risks)?
- What are the specific performance metrics and thresholds that will be used to monitor and manage those risks?
- What are the expectations for the roles each party will play in the risk management process?
- What specifically does the CRO oversight process entail?
If these things are not clearly defined and agreed upon upfront, then when things go wrong, the situation can quickly turn into a finger-pointing exercise with each side blaming the other for the troubled situation. That will only compound the problem.
Miseta: When a study is clearly going bad, what are some things sponsors can do to try and keep it on track and prevent it from requiring a rescue?
Harper: As I mentioned earlier, having clearly defined performance metrics and thresholds to monitor key performance indicators (KPIs) is key. This defines the acceptable ranges within which you can operate. Anything outside of that range would trigger an examination of potential root cause issues. All too often in my experience, these are either not defined or there is no good way to track and measure these metrics.
For example, from an enrollment perspective, sponsors and CROs often just monitor the number of patients screened (enrolled) and the number of randomized patients. Unfortunately, these are lagging indicators. It’s the prescreening work and screening visits scheduled that are the leading indicators of how much enrollment activity is happening. But rarely do sponsors and CROs have a system to track and manage this. By the time screening numbers are falling behind plan, weeks and months have been lost because they had no visibility into the KPIs associated with the start of the recruitment process.
Whether it’s an enrollment or other quality issue, as soon as there are indications that things are getting off track it’s critical to do a root cause analysis of WHY things aren’t progressing. It’s all too tempting to start treating the symptoms (e.g. adding additional sites, sending the CRAs out to do “motivational visits,” hiring a patient recruitment service provider), but if we don’t know the fundamental root causes—why enrollment is falling behind, why sites aren’t complying with the protocol, why the data quality is poor—then any solutions applied won’t have the desired effect.
Miseta: When a trial or CRO relationship is going bad, what factors will finally convince a sponsor that it's time to pull the plug?
Harper: Let’s tackle these as separate issues. If the trial is “going bad,” unless there is a significant safety or regulatory compliance issue that would necessitate putting the study on hold or terminating it, sponsors rarely stop a study for operational issues. It then becomes a matter of adding more time, money, and resources to do whatever it takes to get the study back on track. In other words, sponsors start to systematically address the root causes creating the troublesome situation.
Deciding to pull the plug and switch CROs is obviously a delicate and complex situation. Much of this comes down to how well the CRO responds to the concerns expressed by the sponsor and how quickly and effectively they act to address the concerns and root cause issues. After giving the CRO reasonable time and opportunity to address things without seeing progress, sponsors may consider evaluating other CROs to come in and take over the management of the trial. Even then, sponsors need to be really convinced that the new CRO will have a fundamentally different process and approach for dealing with the root cause issues. The process of transitioning to a new CRO is costly and lengthy. And from an enrollment perspective, it could actually delay the trial for several more months. As you can imagine, that transition can be highly disruptive to the sites that now need to learn new processes and engage with new personnel. So, depending on where the study stands in terms of enrollment, sponsors may choose to limp along with the CRO, implement as many interventions as possible, and just get through the study as best they can. Later, they can reevaluate the relationship with the CRO for future studies.
Miseta: What are the steps you generally go through when rescuing a study?
Harper: I think you need to start with a very structured and systematic process to conduct the root cause analysis. I first try to gather as much information as possible from the study team to understand the current situation—key problems and issues, what has been done to date, and what is and isn’t working from numerous dimensions (e.g. enrollment, retention, protocol compliance, data quality, site engagement, and CRO engagement). I also try to get as much information as possible directly from the sites via direct interviews, needs assessment surveys and/or on-site visits with representatives. This helps me to understand what the sites are experiencing and the types of things that they believe will help them be more successful in conducting the trial.
Once I have all of the root cause issues identified, I map these out to see what types of interventions are most likely to have an impact. I have several different frameworks developed that allow me to organize the interventions into buckets or groups. I then conduct a strategy session with the study team where we walk through all of the possible interventions, rank and rate them in terms of their effectiveness, and then jointly outline a plan of attack based on top priorities. In some cases, the team simply implements the study rejuvenation or rescue plan on their own. In other cases, I play a role in helping to facilitate the interventions and monitor progress to ensure the actions are having the intended effect. In some situations, depending on the available resources at the sponsor or CRO, I may actually become part of the study team and lead the implementation of the solutions.