Guest Column | March 29, 2017

Do Patient Recruitment Advertising & Awareness Campaigns Really Work?

By Beth Harper

Sponsors, CROs, and investigative sites are often faced with two key challenges when planning and managing clinical trials: getting patients recruited and determining how much to spend to find and enroll them. With the explosion of e-recruitment tactics (e.g., social media and digital advertising) and e-recruitment service providers over the past five years or so, there seems to be an industry-wide perception that the cost to enroll patients should be dramatically reduced compared with the use of more traditional advertising and awareness tactics. While it’s nearly impossible to determine how much is spent on recruiting subjects (I’ve seen estimates from $1.2 billion to $20 billion per year), the amount appears to be increasing, not decreasing. So as an industry, we seem to be investing a lot to recruit patients, but the billion-dollar question remains: Does investing in patient recruitment campaigns really work? The answer, of course, is it depends.

The first “it depends” concerns to whom you are addressing the question. Ask any patient recruitment service provider, and they can quote impressive pay-per-click and click-to-call metrics as a way of demonstrating the return on the e-recruitment investment.1 In the case of centrally supported recruitment campaigns, sites and sponsors may have a different view or metric in mind. The sites, for example, may only care about how many patients were actually enrolled as a result of the recruitment effort, whereas the sponsors may be more interested in the overall cost per patient actually enrolled, not the cost per interested respondent. Getting alignment on which metric to use to determine the effectiveness of the campaign is task number one.

The next “it depends” relates to what you expected. Actual versus planned performance comparisons are the staple of any good budget management process. How many responses to an ad campaign did you expect, how many did you receive, and what’s the variance? More importantly, what accounts for the variance? If the patient recruitment vendor expected the campaign to generate 3,000 click-throughs and it generated 14,000, then, judged by the ability to get the intended audience to take some action, this would be a wildly successful campaign. However, if the sponsor expected the campaign to generate 400 enrolled subjects but only 40 enrolled, actual performance did not meet expected performance, and the sponsor is likely to deem that the campaign “didn’t work.”2
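To make the variance arithmetic concrete, here is a minimal sketch in Python (not part of the original column) that scores the same hypothetical campaign against plan using both the vendor’s preferred metric and the sponsor’s:

```python
# A minimal sketch of an actual-vs.-planned variance check, using the
# hypothetical campaign numbers from the paragraph above.

def variance_pct(planned: float, actual: float) -> float:
    """Percent variance of actual performance against plan."""
    return (actual - planned) / planned * 100

metrics = {
    "click-throughs": (3_000, 14_000),   # the vendor's metric: far above plan
    "enrolled subjects": (400, 40),      # the sponsor's metric: 90% below plan
}

for name, (planned, actual) in metrics.items():
    print(f"{name}: planned {planned:,}, actual {actual:,}, "
          f"variance {variance_pct(planned, actual):+.0f}%")
```

Run against the same campaign, the two metrics tell opposite stories: click-throughs come in roughly +367 percent over plan, while enrollments come in 90 percent under, which is exactly why alignment on the metric matters before the campaign starts.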

Case in point: Consider the patient disposition metrics illustration from a real centralized recruitment program I was involved in analyzing a few years ago. The sponsor and the patient recruitment service provider were at odds over whether the sponsor should pay for the work of the vendor, who the sponsor claimed had not met expectations. The vendor claimed they had done their job of generating trial awareness, as evidenced by the fact that the campaigns resulted in 40,000 inquiries. Further, they felt their pre-screening approach was effective because it weeded out the 95 percent of respondents who were not qualified and referred only a highly “pre-screen qualified” group of potential subjects to the research sites.

From the sponsor’s perspective, however, the campaign generated only 137 consented, or enrolled, subjects after a spend of $1.3 million (and even fewer were actually randomized). What made this particular situation difficult to resolve is that neither the sponsor nor the service provider had clear, documented expectations for what the provider should have delivered. At the end of the day, the sponsor needed a total of 850 subjects, but the parties had never outlined how many subjects the sites would enroll on their own versus how many the vendor would contribute above and beyond that.
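For readers who want the underlying math, a short sketch of the conversion and cost arithmetic implied by the case figures above (40,000 inquiries, 137 consented subjects, $1.3 million spent, 850 subjects needed):

```python
# A rough sketch of the cost and conversion math from the case above.
# The inquiry, consent, and spend figures come from the article; the
# 850-subject goal is the sponsor's stated total need.

inquiries = 40_000
consented = 137
spend = 1_300_000  # USD
goal = 850

print(f"Inquiry-to-consent conversion: {consented / inquiries:.2%}")  # ~0.34%
print(f"Cost per consented subject:    ${spend / consented:,.0f}")    # ~$9,489
print(f"Share of enrollment goal met:  {consented / goal:.1%}")       # ~16.1%
```

Whether roughly $9,500 per consented subject and 16 percent of the goal represents success or failure is precisely the question that undocumented expectations left unanswerable.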

What often happens, particularly in trial “rescue” situations, is that the sponsor adds one, two, or sometimes more vendors to close the enrollment gap but fails to define expectations and closely monitor the most meaningful metrics. In the end, all parties are frustrated, the enrollment goal is not met, and the recruitment spend goes up. The sponsors come away biased for future programs, claiming that the campaigns didn’t work, when in reality the campaigns may play a critical role in supplementing what the sites can enroll from their own pools of patients. Furthermore, the cost to enroll a subject through a campaign may actually be considerably less than the cost to bring up another site. Setting realistic expectations for the contribution the recruitment campaign is expected to make to the overall enrollment goal is task number two.3

A very common pattern with centrally managed recruitment campaigns is that 30 percent, and often more, of the subjects referred to the sites are never processed or acted upon by the sites (as depicted in the patient disposition illustration above). The reasons sites don’t follow up are many and beyond the scope of this article. It has long been established, however, that the longer it takes to contact an interested potential subject, the more likely the lead is to go “cold” and the less likely the candidate is to convert to an enrolled subject.4 Some might argue that the role of the patient recruitment service provider ends at the point they refer a potential candidate to the site, whereas others feel it is the vendor’s responsibility to manage the patient through the consenting and screening process. Again, this depends to some degree on the expectations set initially and on whether a pay-for-performance model with the vendor is being used. Regardless, site personnel must be appropriately resourced to process the referrals in order to maximize the return on investment. Ensuring sites are committed to receiving the referrals and acting upon them in a timely fashion is the final step in ensuring a successful outcome.

Sponsors and service providers are well served by mapping out the planned patient dispositions up front and carefully monitoring these metrics over the course of the campaign. If actuals meet plan, then all can agree on and celebrate the fact that the recruitment program worked. If not, all parties have a clearer picture of where the bottlenecks or issues are, and they can take more immediate and directed action to maximize the conversion of patients and ensure a successful outcome.
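As one way to operationalize that up-front mapping, the sketch below compares a planned disposition funnel against actuals and flags the stages where conversion falls short. All of the stage names and numbers are illustrative assumptions, not data from the article:

```python
# A minimal sketch of up-front disposition mapping: define the planned
# funnel, record actuals as the campaign runs, and flag the stages where
# stage-to-stage conversion falls below plan. Numbers are hypothetical.

PLANNED = [
    ("inquiries",         10_000),
    ("pre-screen passed",  1_000),
    ("referred to sites",    800),
    ("contacted by site",    600),
    ("consented",            300),
    ("randomized",           200),
]

ACTUAL = [
    ("inquiries",         12_000),
    ("pre-screen passed",  1_100),
    ("referred to sites",    900),
    ("contacted by site",    450),  # referrals going "cold" at the sites
    ("consented",            180),
    ("randomized",           120),
]

def conversions(funnel):
    """Stage-to-stage conversion rates for a disposition funnel."""
    return [(funnel[i][0], funnel[i][1] / funnel[i - 1][1])
            for i in range(1, len(funnel))]

for (stage, plan_rate), (_, act_rate) in zip(conversions(PLANNED),
                                             conversions(ACTUAL)):
    flag = "  <-- bottleneck" if act_rate < plan_rate else ""
    print(f"{stage:18s} planned {plan_rate:5.1%}  actual {act_rate:5.1%}{flag}")
```

In this hypothetical run, the referral-to-contact and contact-to-consent stages would be flagged, pointing the team at site follow-up rather than at the advertising itself, which is the kind of directed action the paragraph above describes.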

References:

  1. Stempel, D. Debunking digital patient recruitment myths for clinical trials: Myth 4. Medical Marketing Insights, May 25, 2016.
  2. Tointon, A. Getting the most from a central advertising campaign. CenterWatch, April 4, 2016.
  3. Harper, B. Effective management of patient recruitment organizations. Journal of Clinical Research Best Practices. Vol. 9, No. 8, August 2013.
  4. Fung, S., et al. Development of a patient recruitment program for phase 2 trials in a biotechnology company. Drug Information Journal. Vol. 37, No. 3, July 2003.

About The Author:

Beth Harper is the president of Clinical Performance Partners, Inc., a clinical research consulting firm specializing in enrollment and site performance management. She has passionately pursued solutions for optimizing protocols, enhancing patient recruitment and retention, and improving sponsor and site relationships for over 30 years. Beth is an adjunct assistant professor at the George Washington University and has published and presented extensively in the areas of protocol optimization, study feasibility, site selection, patient recruitment, and sponsor-site relationship management. She is currently serving on the CISCRP Advisory Board as well as the Clinical Leader Editorial Advisory Board, among other industry volunteer activities.

Beth received her BS in occupational therapy from the University of Wisconsin and an MBA from the University of Texas.

She can be reached at 817-946-4728 or bharper@clinicalperformancepartners.com.